Dataset schema (each record below lists these fields in order, one per line):

ID: string (lengths 11-54)
url: string (lengths 33-64)
title: string (lengths 11-184)
abstract: string (lengths 17-3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: string (9 values)
goal2: string (9 values)
goal3: string (1 value)
acknowledgments: string (lengths 28-1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
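A minimal sketch of how a dataset with this schema might be loaded and summarized, assuming it is hosted on the Hugging Face Hub; the repository name feradauto/NLP4SGPapers and the train split are assumptions, not confirmed by this page:

from datasets import load_dataset  # pip install datasets

# Load the corpus (repository name and split are assumed, not confirmed here).
ds = load_dataset("feradauto/NLP4SGPapers", split="train")

# Keep only papers labeled as NLP-for-social-good.
sg = ds.filter(lambda row: row["label_nlp4sg"])

# Tally how often each UN Sustainable Development Goal flag is set.
sdg_columns = ["sdg%d" % i for i in range(1, 18)]
counts = {col: sum(1 for row in sg if row[col]) for col in sdg_columns}
for col, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(col, n)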
liu-soo-1994-corpus
https://aclanthology.org/C94-1073
A Corpus-Based Learning Technique for Building A Self-Extensible Parser
Human intervention and/or training corpora tagged with various kinds of information were often assumed in many natural language acquisition models. This assumption is a major source of inconsistencies, errors, and inefficiency in learning. In this paper, we explore the extent to which a parser may extend itself without relying on extra input from the outside world. A learning technique called SEP is proposed and attached to the parser. The input to SEP is raw sentences, while the output is the knowledge that is missing in the parser. Since parsers and raw sentences are commonly available and no human intervention is needed in learning, SEP could make fully automatic large-scale acquisition more feasible.
false
[]
[]
null
null
null
Acknowledgement This research is supported in part by NSC (National Science Council of R.O.C.) under the grant NSC83-0408-E-007-008.
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ikehara-etal-1996-statistical
https://aclanthology.org/C96-1097
A Statistical Method for Extracting Uninterrupted and Interrupted Collocations from Very Large Corpora
In order to extract rigid expressions with a high frequency of use, a new algorithm that can efficiently extract both uninterrupted and interrupted collocations from very large corpora has been proposed. The statistical method recently proposed for calculating N-grams of arbitrary N can be applied to the extraction of uninterrupted collocations. But this method posed the problem that such large volumes of fractional and unnecessary expressions are extracted that it was impossible to extract interrupted collocations by combining the results. To solve this problem, this paper proposes a new algorithm that restrains the extraction of unnecessary substrings. This is followed by the proposal of a method that enables the extraction of interrupted collocations. The new methods are applied to Japanese newspaper articles involving 8.92 million characters. In the case of uninterrupted collocations with a string length of 2 or more characters and a frequency of appearance of 2 or more times, there were 4.4 million types of expressions (total frequency of 31.2 million times) extracted by the N-gram method. In contrast, the new method has reduced this to 0.97 million types (total frequency of 2.6 million times), revealing a substantial reduction in fractional and unnecessary expressions. In the case of interrupted collocational substring extraction, combining the substrings with a frequency of 10 times or more extracted by the first method, 6.5 thousand types of pairs of substrings with a total frequency of 21.8 thousand were extracted.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ren-etal-2020-simulspeech
https://aclanthology.org/2020.acl-main.350
SimulSpeech: End-to-End Simultaneous Speech to Text Translation
In this work, we develop SimulSpeech, an end-to-end simultaneous speech to text translation system which translates speech in the source language to text in the target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, and 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of the data distribution to help the optimization of SimulSpeech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay. Simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018), which translates source-language speech into target-language text concurrently, is of great importance to the real-time understanding of spoken lectures or conversations and is now widely used in many scenarios including live video streaming and international conferences. However, it is widely considered one of the challenging tasks in the machine translation domain because simultaneous speech to text translation has to understand the speech and trade off translation accuracy against delay. Conventional approaches to simultaneous speech to text translation (Fügen et al., 2007; Oda et al., 2014; Dalvi et al., 2018) divide the translation process into two stages: simultaneous automatic speech recognition (ASR) (Rao et al., 2017) and simultaneous neural machine translation (NMT) (Gu et al., 2016), which cannot be optimized jointly, resulting in inferior accuracy, and also incur more translation delay due to the two stages.
false
[]
[]
null
null
null
This work was supported in part by the National Key R&D Program of China (Grant No.2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No.61836002), National Natural Science Foundation of China (Grant No.U1611461), and National Natural Science Foundation of China (Grant No.61751209). This work was also partially funded by Microsoft Research Asia.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tam-etal-2019-optimal
https://aclanthology.org/P19-1592
Optimal Transport-based Alignment of Learned Character Representations for String Similarity
String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE, a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optimal transport) and scores the alignment with a convolutional neural network. We evaluate STANCE's ability to detect whether two strings can refer to the same entity, a task we term alias detection. We construct five new alias detection datasets (and make them publicly available). We show that STANCE (or one of its variants) outperforms both state-of-the-art and classic, parameter-free similarity models on four of the five datasets. We also demonstrate STANCE's ability to improve downstream tasks by applying it to an instance of cross-document coreference and show that it leads to a 2.8 point improvement in B3 F1 over the previous state-of-the-art approach.
false
[]
[]
null
null
null
We used an XML dump of Wikipedia from 2016-03-05. We restrict the entities and hyperlinked spans to come from non-talk, non-list Wikipedia pages.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mundra-etal-2021-wassa
https://aclanthology.org/2021.wassa-1.12
WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning for Emotion Classification and Empathy Prediction
This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification. The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone. We made extensive use of the ELECTRA model along with advanced deep learning approaches such as multi-task learning. Additionally, we leveraged standard machine learning techniques like ensembling. Our system achieves a Pearson Correlation Coefficient of 0.533 on sub-task I and a macro F1 score of 0.5528 on sub-task II. We ranked 1st in the Emotion Classification sub-task and 3rd in the Empathy Prediction sub-task.
true
[]
[]
Good Health and Well-Being
null
null
null
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
light-1996-morphological
https://aclanthology.org/P96-1004
Morphological Cues for Lexical Semantics
Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.
false
[]
[]
null
null
null
A portion of this work was performed at the University of Rochester Computer Science Department and supported by ONR/ARPA research grant number N00014-92-J-1512.
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
janssen-2021-udwiki
https://aclanthology.org/2021.udw-1.7
UDWiki: guided creation and exploitation of UD treebanks
UDWiki is an online environment designed to make creating new UD treebanks easier. It helps in setting up all the necessary data for a new treebank in a GUI, where the interface takes care of guiding you through all the descriptive files needed, adding new texts to your corpus, and helping in annotating the texts. The system is built on top of the TEITOK corpus environment, using an XML-based version of UD annotation, where dependencies can be combined with various other types of annotations. UDWiki can run all the necessary or helpful scripts (taggers, parsers, validators) via the interface. It also makes treebanks under development directly searchable, and can be used to maintain or search existing UD treebanks.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bod-2007-linguistic
https://aclanthology.org/W07-0601
A Linguistic Investigation into Unsupervised DOP
Unsupervised Data-Oriented Parsing models (U-DOP) represent a class of structure bootstrapping models that have achieved some of the best unsupervised parsing results in the literature. While U-DOP was originally proposed as an engineering approach to language learning (Bod 2005, 2006a), it turns out that the model has a number of properties that may also be of linguistic and cognitive interest. In this paper we will focus on the original U-DOP model proposed in Bod (2005) which computes the most probable tree from among the shortest derivations of sentences. We will show that this U-DOP model can learn both rule-based and exemplar-based aspects of language, ranging from agreement and movement phenomena to discontiguous constructions, provided that productive units of arbitrary size are allowed. We argue that our results suggest a rapprochement between nativism and empiricism.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
indurkhya-2021-using
https://aclanthology.org/2021.ranlp-1.71
Using Collaborative Filtering to Model Argument Selection
This study evaluates whether model-based Collaborative Filtering (CF) algorithms, which have been extensively studied and widely used to build recommender systems, can be used to predict which common nouns a predicate can take as its complement. We find that, when trained on verb-noun co-occurrence data drawn from the Corpus of Contemporary American English (COCA), two popular model-based CF algorithms, Singular Value Decomposition and Non-negative Matrix Factorization, perform well on this task, each achieving an AUROC of at least 0.89 and surpassing several different baselines. We then show that the embedding vectors for verbs and nouns learned by the two CF models can be quantized (via application of k-means clustering) with minimal loss of performance on the prediction task while only using a small number of verb and noun clusters (relative to the number of distinct verbs and nouns). Finally, we evaluate the alignment between the quantized embedding vectors for verbs and the Levin verb classes, finding that the alignment surpassed several randomized baselines. We conclude by discussing how model-based CF algorithms might be applied to learning restrictions on constituent selection between various lexical categories and how these (learned) models could then be used to augment a (rule-based) constituency grammar.
false
[]
[]
null
null
null
Three anonymous reviewers are thanked for critically reading the manuscript and providing helpful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
claeser-etal-2018-multilingual
https://aclanthology.org/W18-3218
Multilingual Named Entity Recognition on Spanish-English Code-switched Tweets using Support Vector Machines
This paper describes our system submission for the ACL 2018 shared task on named entity recognition (NER) in codeswitched Twitter data. Our best result (F1 = 53.65) was obtained using a Support Vector Machine (SVM) with 14 features combined with rule-based postprocessing.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
blackwood-etal-2010-fluency
https://aclanthology.org/C10-1009
Fluency Constraints for Minimum Bayes-Risk Decoding of Statistical Machine Translation Lattices
A novel and robust approach to improving statistical machine translation fluency is developed within a minimum Bayes-risk decoding framework. By segmenting translation lattices according to confidence measures over the maximum likelihood translation hypothesis we are able to focus on regions with potential translation errors. Hypothesis space constraints based on monolingual coverage are applied to the low confidence regions to improve overall translation fluency.
false
[]
[]
null
null
null
We would like to thank Matt Gibson and the human judges who participated in the evaluation. This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022 and the European Union Seventh Framework Programme (FP7-ICT-2009-4) under Grant Agreement No. 247762.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ikehara-etal-1991-toward
https://aclanthology.org/1991.mtsummit-papers.16
Toward an MT System without Pre-Editing: Effects of a New Method in ALT-J/E
Recently, several types of Japanese-to-English MT (machine translation) systems have been developed, but prior to use, such systems have required a pre-editing process of rewriting the original text into Japanese that could be easily translated. For communication of translated information requiring speed in dissemination, application of these systems would necessarily pose problems. To overcome such problems, a Multi-Level Translation Method based on Constructive Process Theory has been proposed. In this paper, the benefits of this method in ALT-J/E will be described. In comparison with the conventional elementary composition method, the Multi-Level Translation Method, emphasizing the importance of the meaning contained in expression structures, has been ascertained to be capable of conducting translation according to meaning and context processing with comparative ease. We are now hopeful of realizing machine translation omitting the process of pre-editing.
false
[]
[]
null
null
null
The authors wish to thank Dr. Masahiro Miyazaki, Mr. Kentarou Ogura and other members of the research group on MT for their valuable contribution to discussions.
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chalaguine-schulz-2017-assessing
https://aclanthology.org/E17-4008
Assessing Convincingness of Arguments in Online Debates with Limited Number of Features
We propose a new method in the field of argument analysis in social media for determining the convincingness of arguments in online debates, following previous research by Habernal and Gurevych (2016). Rather than using argument-specific feature values, we measure feature values relative to the average value in the debate, allowing us to determine argument convincingness with fewer features (between 5 and 35) than normally used for natural language processing tasks. We use a simple feed-forward neural network for this task and achieve an accuracy of 0.77, which is comparable to the accuracy obtained using 64k features and a support vector machine by Habernal and Gurevych.
false
[]
[]
null
null
null
We thank our colleague Oana Cocarascu from Imperial College London who provided insight and expertise that greatly assisted the research, as well as Luka Milic for assistance with the implementation of the neural network.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pezzelle-etal-2018-comparatives
https://aclanthology.org/N18-1039
Comparatives, Quantifiers, Proportions: a Multi-Task Model for the Learning of Quantities from Vision
The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
false
[]
[]
null
null
null
We kindly acknowledge Gemma Boleda and the AMORE team (UPF), Raquel Fernández and the Dialogue Modelling Group (UvA) for the feedback, advice and support. We are also grateful to Aurélie Herbelot, Stephan Lee, Manuela Piazza, Sebastian Ruder, and the anonymous reviewers for their valuable comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154). We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research. This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsieh-etal-2019-robustness
https://aclanthology.org/P19-1147
On the Robustness of Self-Attentive Models
This work examines the robustness of self-attentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.
false
[]
[]
null
null
null
We are grateful for the insightful comments from anonymous reviewers. This work is supported by the Ministry of Science and Technology of Taiwan under grant numbers 107-2917-I-004-001, 108-2634-F-001-005. The author Yu-Lun Hsieh wishes to acknowledge, with thanks, the Taiwan International Graduate Program (TIGP) of Academia Sinica for financial support towards attending this conference. We also acknowledge the support from NSF via IIS1719097, Intel and Google Cloud.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hahn-choi-2019-self
https://aclanthology.org/R19-1050
Self-Knowledge Distillation in Natural Language Processing
Since deep learning became a key player in natural language processing (NLP), many deep learning models have shown remarkable performance in a variety of NLP tasks, and in some cases, they even outperform humans. Such high performance can be explained by the efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representations, knowledge distillation from pretrained deep networks suggests that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method, self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language modeling and neural machine translation. The experimental results show that our proposed method improves performance on the tasks.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vanni-zajac-1996-temple
https://aclanthology.org/X96-1024
The Temple Translator's Workstation Project
The Temple Translator's Workstation is integrated into a Tipster document management architecture, and it allows both translator/analysts and monolingual analysts to use the machine-translation function for assessing the relevance of a translated document or otherwise using its information in the performance of other types of information processing. Translators can also use its output as a rough draft from which to begin the process of producing a translation, following up with specific post-editing functions. Glossary-Based Machine-Translation (GBMT) was first developed at CMU as part of the Pangloss project [Nirenburg 95; Cohen et al., 93; Nirenburg et al., 93; Frederking et al., 93], and a sizeable Spanish-English GBMT system was implemented.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tihelka-matousek-2004-design
http://www.lrec-conf.org/proceedings/lrec2004/pdf/119.pdf
The Design of Czech Language Formal Listening Tests for the Evaluation of TTS Systems
This paper presents an attempt to design listening tests for the Czech synthesis speech evaluation. The design is based on standardized and widely used listening tests for English; therefore, we can benefit from the advantages provided by standards. Bearing the Czech language phenomena in mind, we filled the standard frameworks of several listening tests, especially the MRT (Modified Rhyme Test) and the SUS (Semantically Unpredictable Sentences) test; the Czech National Corpus was used for this purpose. Designed tests were instantly used for real tests in which 88 people took part, a procedure which proved correct. This was the first attempt to design Czech listening tests according to given standard frameworks and it was successful.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-dyer-2009-data
https://aclanthology.org/N09-4001
Data Intensive Text Processing with MapReduce
This half-day tutorial introduces participants to data-intensive text processing with the MapReduce programming model [1], using the open-source Hadoop implementation. The focus will be on scalability and the tradeoffs associated with distributed processing of large datasets. Content will include general discussions about algorithm design, presentation of illustrative algorithms, case studies in HLT applications, as well as practical advice in writing Hadoop programs and running Hadoop clusters. Amazon has generously agreed to provide each participant with $100 in Amazon Web Services (AWS) credits that can be used toward its Elastic Compute Cloud (EC2) "utility computing" service (sufficient for 1000 instance-hours). EC2 allows anyone to rapidly provision Hadoop clusters "on the fly" without upfront hardware investments, and provides a low-cost vehicle for exploring Hadoop.
false
[]
[]
null
null
null
This work is supported by NSF under awards IIS-0705832 and IIS-0836560; the Intramural Research Program of the NIH, National Library of Medicine; DARPA/IPTO Contract No. HR0011-06-2-0001 under the GALE program. Any opinions, findings, conclusions, or recommendations expressed here are the instructors' and do not necessarily reflect those of the sponsors. We are grateful to Amazon for its support of tutorial participants.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vulic-korhonen-2016-role
https://aclanthology.org/P16-1024
On the Role of Seed Lexicons in Learning Bilingual Word Embeddings
A shared bilingual word embedding space (SBWES) is an indispensable resource in a variety of cross-language NLP and IR tasks. A common approach to the SBWES induction is to learn a mapping function between monolingual semantic spaces, where the mapping critically relies on a seed word lexicon used in the learning process. In this work, we analyze the importance and properties of seed lexicons for the SBWES induction across different dimensions (i.e., lexicon source, lexicon size, translation method, translation pair reliability). On the basis of our analysis, we propose a simple but effective hybrid bilingual word embedding (BWE) model. This model (HYBWE) learns the mapping between two monolingual embedding spaces using only highly reliable symmetric translation pairs from a seed document-level embedding space. We perform bilingual lexicon learning (BLL) with 3 language pairs and show that by carefully selecting reliable translation pairs our new HYBWE model outperforms benchmarking BWE learning models, all of which use more expensive bilingual signals. Effectively, we demonstrate that a SBWES may be induced by leveraging only a very weak bilingual signal (document alignments) along with monolingual data.
false
[]
[]
null
null
null
This work is supported by ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to Roi Reichart and the anonymous reviewers for their helpful comments and suggestions.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2021-tdeer
https://aclanthology.org/2021.emnlp-main.635
TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations
Joint extraction of entities and relations from unstructured texts to form factual triples is a fundamental task of constructing a Knowledge Base (KB). A common method is to decode triples by predicting entity pairs to obtain the corresponding relation. However, it is still challenging to handle this task efficiently, especially for the overlapping triple problem. To address such a problem, this paper proposes a novel efficient entities and relations extraction model called TDEER, which stands for Translating Decoding Schema for Joint Extraction of Entities and Relations. Unlike the common approaches, the proposed translating decoding schema regards the relation as a translating operation from subject to objects, i.e., TDEER decodes triples as subject + relation → objects. TDEER can naturally handle the overlapping triple problem, because the translating decoding schema can recognize all possible triples, including overlapping and non-overlapping triples. To enhance model robustness, we introduce negative samples to alleviate error accumulation at different stages. Extensive experiments on public datasets demonstrate that TDEER produces competitive results compared with the state-of-the-art (SOTA) baselines. Furthermore, the computation complexity analysis indicates that TDEER is more efficient than powerful baselines. Especially, the proposed TDEER is 2 times faster than the recent SOTA models. The code is available at https://github.com/4AI/TDEER.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schilder-1999-reference
https://aclanthology.org/W99-0112
Reference Hashed
This paper argues for a novel data structure for the representation of discourse referents. A so-called hashing list is employed to store discourse referents according to their grammatical features. The account proposed combines insights from several theories of discourse comprehension. Segmented Discourse Representation Theory (Asher, 1993) is enriched by the ranking system developed in centering theory (Grosz et al., 1995). In addition, a tree logic is used to represent underspecification within the discourse structure (Schilder, 1998).
false
[]
[]
null
null
null
I would like to thank the two anonymous reviewers for their comments and feedback. Special thanks to Christie Manning for providing me with all her help.
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mosbach-etal-2020-interplay
https://aclanthology.org/2020.blackboxnlp-1.7
On the Interplay Between Fine-tuning and Sentence-Level Probing for Linguistic Knowledge in Pre-Trained Transformers
Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is, however, understood about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models: BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how fine-tuning affects their representations. We find that for some probing tasks fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across different models, fine-tuning and probing tasks. Our analysis reveals that while fine-tuning indeed changes the representations of a pre-trained model and these changes are typically larger for higher layers, only in very few cases does fine-tuning have a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of fine-tuning on probing require a careful interpretation.
false
[]
[]
null
null
null
We thank Badr Abdullah for his comments and suggestions. We would also like to thank the reviewers for their useful comments and feedback, in particular R1. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -project-id 232722074 -SFB 1102.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kundu-choudhury-2014-know
https://aclanthology.org/W14-5127
How to Know the Best Machine Translation System in Advance before Translating a Sentence?
The aim of the paper is to identify, in advance, from a set of multiple machine translation (MT) systems, the system capable of producing the most appropriate translation for a source sentence. The prediction is done based on the analysis of a source sentence before translating it using these MT systems. This selection procedure has been framed as a classification task. A machine learning based approach leveraging features extracted from the analysis of a source sentence is proposed here. The main contribution of the paper is the selection of source-side features. These features help machine learning approaches to discriminate MT systems according to their translation quality, even though these approaches have no knowledge of the working principles of these MT systems. The proposed approach is language independent and has shown promising results when applied to an English-Bangla MT task.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ravi-kozareva-2019-device
https://aclanthology.org/P19-1368
On-device Structured and Context Partitioned Projection Networks
A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones.
false
[]
[]
null
null
null
We would like to thank the organizers of the customer feedback challenge for sharing the data and the anonymous reviewers for their valuable feedback and suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
etcheverry-wonsever-2019-unraveling
https://aclanthology.org/P19-1319
Unraveling Antonym's Word Vectors through a Siamese-like Network
Discriminating antonyms and synonyms is an important NLP task that has the difficulty that both antonyms and synonyms contain similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese-network-inspired approach. The model consists of a two-phase training of the same base network: a pre-training phase according to a siamese model supervised by synonyms and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms in common of a word tend to be synonyms. We show that our approach outperforms distributional and pattern-based approaches, relying on a simple feed-forward network as the base network of the training phases.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
slawik-etal-2015-stripping
https://aclanthology.org/2015.eamt-1.18
Stripping Adjectives: Integration Techniques for Selective Stemming in SMT Systems
In this paper we present an approach to reduce data sparsity problems when translating from morphologically rich languages into less inflected languages by selectively stemming certain word types. We develop and compare three different integration strategies: replacing words with their stemmed form, combined input using alternative lattice paths for the stemmed and surface forms and a novel hidden combination strategy, where we replace the stems in the stemmed phrase table by the observed surface forms in the test data. This allows us to apply advanced models trained on the surface forms of the words. We evaluate our approach by stemming German adjectives in two German→English translation scenarios: a low-resource condition as well as a large-scale state-of-the-art translation system. We are able to improve between 0.2 and 0.4 BLEU points over our baseline and reduce the number of out-of-vocabulary words by up to 16.5%.
false
[]
[]
null
null
null
The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 645452.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
di-eugenio-1992-understanding
https://aclanthology.org/P92-1016
Understanding Natural Language Instructions: The Case of Purpose Clauses
This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or execution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses.
false
[]
[]
null
null
null
For financial support I acknowledge DARPA grant no. N0014-90-J-1863 and ARO grant no. DAALO3-89-C0031PR1. Thanks to Bonnie Webber for support, insights and countless discussions, and to all the members of the AnimNL group, in particular to Mike White. Finally, thanks to the Dipartimento di Informatica, Universita' di Torino, Italy, for making their computing environment available to me, and in particular thanks to Felice Cardone, Luca Console, Leonardo Lesmo, and Vincenzo Lombardo, who helped me through a last minute computer crash.
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
piits-etal-2007-designing
https://aclanthology.org/W07-2459
Designing a Speech Corpus for Estonian Unit Selection Synthesis
The article reports the development of a speech corpus for Estonian text-to-speech synthesis based on unit selection. Introduced are the principles of the corpus as well as the procedure of its creation, from text compilation to corpus analysis and text recording. Also described are the choices made in the process of producing a text of 400 sentences, the relevant lexical and morphological preferences, and the way to the most natural sentence context for the words used.
false
[]
[]
null
null
null
The support from the program Language Technology Support of the Estonian Language has made the present work possible.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
choi-etal-2010-propbank
http://www.lrec-conf.org/proceedings/lrec2010/pdf/73_Paper.pdf
Propbank Frameset Annotation Guidelines Using a Dedicated Editor, Cornerstone
This paper gives guidelines of how to create and update Propbank frameset files using a dedicated editor, Cornerstone. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Thus, for each predicate in Propbank, there exists a corresponding frameset file showing the expected predicate argument structure of each sense related to the predicate. Since most Propbank annotations are based on the predicate argument structure defined in the frameset files, it is important to keep the files consistent, simple to read as well as easy to update. The frameset files are written in XML, which can be difficult to edit when using a simple text editor. Therefore, it is helpful to develop a user-friendly editor such as Cornerstone, specifically customized to create and edit frameset files. Cornerstone runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.
false
[]
[]
null
null
null
We gratefully acknowledge the support of the National Science Foundation Grants CISE-CRI-0551615, Towards a Comprehensive Linguistic Annotation and CISE-CRI 0709167, Collaborative: A Multi-Representational and Multi-Layered Treebank for Hindi/Urdu, and a grant from the Defense Advanced Research Projects Agency (DARPA/IPTO) under the GALE program, DARPA/CMO Contract No. HR0011-06-C-0022, subcontract from BBN, Inc. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ueda-washio-2021-relationship
https://aclanthology.org/2021.acl-srw.6
On the Relationship between Zipf's Law of Abbreviation and Interfering Noise in Emergent Languages
This paper studies whether emergent languages in a signaling game follow Zipf's law of abbreviation (ZLA), especially when the communication ability of agents is limited because of interfering noises. ZLA is a well-known tendency in human languages where the more frequently a word is used, the shorter it will be. Surprisingly, previous work demonstrated that emergent languages do not obey ZLA at all when neural agents play a signaling game. It also reported that a ZLA-like tendency appeared when an explicit penalty on word lengths was added, which can be seen as standing in for external factors in reality such as articulatory effort. We hypothesize, on the other hand, that there might be not only such external factors but also some internal factors related to cognitive abilities. We assume that this could be simulated by modeling the effect of noises on the agents' environment. In our experimental setup, Gaussian noise was added to the hidden states of the LSTM-based speaker and listener, while the channel was subject to discrete random replacement. Our results suggest that noise on a speaker is one of the factors for ZLA or at least causes emergent languages to approach ZLA, while noise on a listener or a channel is not.
false
[]
[]
null
null
null
We would like to thank Professor Yusuke Miyao for supervising our research, Jason Naradowsky for fruitful discussions and proofreading, and the anonymous reviewers for helpful suggestions. The first author would also like to thank his colleagues Taiga Ishii and Hiroaki Mizuno as they have encouraged each other in their senior theses.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
indig-etal-2018-whats
https://aclanthology.org/L18-1091
What's Wrong, Python? -- A Visual Differ and Graph Library for NLP in Python
The correct analysis of the output of a program based on supervised learning is indispensable in order to be able to identify the errors it produced and characterise its error types. This task is fairly difficult without a proper tool, especially if one works with complex data structures such as parse trees or sentence alignments. In this paper, we present a library that allows the user to interactively visualise and compare the output of any program that yields a well-known data format. Our goal is to create a tool granting the user total control of the visualisation, including extensions, but also to have the common primitives and data formats implemented for typical cases. We describe the common features of typical NLP tasks from the viewpoint of visualisation in order to specify the essential primitive functions. We enumerate many popular off-the-shelf NLP visualisation programs to compare with our implementation, which unifies all of the profitable features of the existing programs, adding extendibility as a crucial feature.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brandt-skelbye-dannells-2021-ocr
https://aclanthology.org/2021.ranlp-1.23
OCR Processing of Swedish Historical Newspapers Using Deep Hybrid CNN--LSTM Networks
Deep CNN-LSTM hybrid neural networks have proven to improve the accuracy of Optical Character Recognition (OCR) models for different languages. In this paper we examine to what extent these networks improve the OCR accuracy rates on Swedish historical newspapers. By experimenting with the open source OCR engine Calamari, we are able to show that mixed deep CNN-LSTM hybrid models outperform previous models on the task of character recognition of Swedish historical newspapers spanning 1818-1848. We achieved an average character accuracy rate (CAR) of 97.43% which is a new state-of-the-art result on 19th century Swedish newspaper text. Our data, code and models are released under CC BY licence.
false
[]
[]
null
null
null
This work has been funded by the Swedish Research Council as part of the project Evaluation and refinement of an enhanced OCR-process for mass digitisation (2019-2020; dnr IN18-0940:1). It is also supported by Språkbanken Text and Swe-Clarin, a Swedish consortium in the Common Language Resources and Technology Infrastructure (CLARIN) (dnr 821-2013-2003). The authors would like to thank the RANLP anonymous reviewers for their valuable comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
amin-etal-2022-using
https://aclanthology.org/2022.ltedi-1.5
Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users
Deaf and hard of hearing individuals regularly rely on captioning while watching live TV. Live TV captioning is evaluated by regulatory agencies using various caption evaluation metrics. However, caption evaluation metrics are often not informed by preferences of DHH users or how meaningful the captions are. There is a need to construct caption evaluation metrics that take the relative importance of words in a transcript into account. We conducted correlation analysis between two types of word embeddings and human-annotated word-importance scores in an existing corpus. We found that normalized contextualized word embeddings generated using BERT correlated better with manually annotated importance scores than word2vec-based word embeddings. We make available a pairing of word embeddings and their human-annotated importance scores. We also provide proof-of-concept utility by training word importance models, achieving an F1-score of 0.57 in the 6-class word importance classification task.
true
[]
[]
Reduced Inequalities
null
null
This material is based on work supported by the Department of Health and Human Services under Award No. 90DPCP0002-0100, and by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Health and Human Services or National Science Foundation.
2022
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
arora-etal-2020-supervised
https://aclanthology.org/2020.acl-main.696
Supervised Grapheme-to-Phoneme Conversion of Orthographic Schwas in Hindi and Punjabi
Hindi grapheme-to-phoneme (G2P) conversion is mostly trivial, with one exception: whether a schwa represented in the orthography is pronounced or unpronounced (deleted). Previous work has attempted to predict schwa deletion in a rule-based fashion using prosodic or phonetic analysis. We present the first statistical schwa deletion classifier for Hindi, which relies solely on the orthography as the input and outperforms previous approaches. We trained our model on a newly-compiled pronunciation lexicon extracted from various online dictionaries. Our best Hindi model achieves state of the art performance, and also achieves good performance on a closely related language, Punjabi, without modification.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-chen-2009-simplex
https://aclanthology.org/N09-2006
A Simplex Armijo Downhill Algorithm for Optimizing Statistical Machine Translation Decoding Parameters
We propose a variation of the simplex-downhill algorithm specifically customized for optimizing parameters in a statistical machine translation (SMT) decoder for better end-user automatic evaluation metric scores for translations, such as versions of BLEU, TER and mixtures of them. Traditional simplex-downhill has the advantage of derivative-free computation of objective functions, yet still gives satisfactory searching directions in most scenarios. This is suitable for optimizing translation metrics as they are not differentiable in nature. On the other hand, the Armijo algorithm usually performs line search efficiently given a searching direction. It is a deep hidden fact that an efficient line search method will change the iterations of the simplex, and hence the searching trajectories. We propose to embed the Armijo inexact line search within the simplex-downhill algorithm. We show, in our experiments, that the proposed algorithm improves over the widely-applied Minimum Error Rate training algorithm for optimizing machine translation parameters.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
clarke-lapata-2006-constraint
https://aclanthology.org/P06-2019
Constraint-Based Sentence Compression: An Integer Programming Approach
The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or largescale resources. The proposed approach yields results comparable and in some cases superior to state-of-the-art.
false
[]
[]
null
null
null
Thanks to Jean Carletta, Amit Dubey, Frank Keller, Steve Renals, and Sebastian Riedel for helpful comments and suggestions. Lapata acknowledges the support of EPSRC (grant GR/T04540/01).
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saif-etal-2014-stopwords
http://www.lrec-conf.org/proceedings/lrec2014/pdf/292_Paper.pdf
On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of Twitter
Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations in the level of data sparsity, the size of the classifier's feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method for maintaining a high classification performance while reducing the data sparsity and substantially shrinking the feature space.
false
[]
[]
null
null
null
This work was supported by the EU-FP7 project SENSE4US (grant no. 611242).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sennrich-haddow-2016-linguistic
https://aclanthology.org/W16-2209
Linguistic Input Features Improve Neural Machine Translation
Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.
false
[]
[]
null
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements 645452 (QT21), and 644402 (HimL).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
heylen-etal-2014-termwise
http://www.lrec-conf.org/proceedings/lrec2014/pdf/706_Paper.pdf
TermWise: A CAT-tool with Context-Sensitive Terminological Support.
Increasingly, large bilingual document collections are being made available online, especially in the legal domain. This type of Big Data is a valuable resource that specialized translators exploit to search for informative examples of how domain-specific expressions should be translated. However, general purpose search engines are not optimized to retrieve previous translations that are maximally relevant to a translator. In this paper, we report on the TermWise project, a cooperation of terminologists, corpus linguists and computer scientists, that aims to leverage big online translation data for terminological support to legal translators at the Belgian Federal Ministry of Justice. The project developed dedicated knowledge extraction algorithms and a server-based tool to provide translators with the most relevant previous translations of domain-specific expressions relative to the current translation assignment. The functionality is implemented as an extra database, a Term&Phrase Memory, that is meant to be integrated with existing Computer Assisted Translation tools. In the paper, we give an overview of the system, demo the user interface, present a user-based evaluation by translators, and discuss how the tool is part of the general evolution towards exploiting Big Data in translation.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sabir-etal-2021-reinforcebug
https://aclanthology.org/2021.naacl-main.477
ReinforceBug: A Framework to Generate Adversarial Textual Examples
Adversarial Examples (AEs) generated by perturbing original training examples are useful in improving the robustness of Deep Learning (DL) based models. Most prior works generate AEs that are either unconscionable due to lexical errors or semantically and functionally deviant from original examples. In this paper, we present ReinforceBug, a reinforcement learning framework, that learns a policy that is transferable on unseen datasets and generates utility-preserving and transferable (on other models) AEs. Our experiments show that ReinforceBug is on average 10% more successful as compared to the state-of-the-art attack TextFooler. Moreover, the target models have on average 73.64% confidence in wrong prediction, the generated AEs preserve the functional equivalence and semantic similarity (83.38%) to their original counterparts, and are transferable on other models with an average success rate of 46%.
false
[]
[]
null
null
null
This work was supported with super-computing resources provided by the Phoenix HPC service at the University of Adelaide.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
matero-etal-2019-suicide
https://aclanthology.org/W19-3005
Suicide Risk Assessment with Multi-level Dual-Context Language and BERT
Mental health predictive systems typically model language as if from a single context (e.g. Twitter posts, status updates, or forum posts) and are often limited to a single level of analysis (e.g. either the message-level or user-level). Here, we bring these pieces together to explore the use of open-vocabulary (BERT embeddings, topics) and theoretical features (emotional expression lexica, personality) for the task of suicide risk assessment on support forums (the CLPsych-2019 Shared Task). We used dual context based approaches (modeling content from suicide forums separate from other content), built over both traditional ML models as well as a novel dual RNN architecture with user-factor adaptation. We find that while affect from the suicide context distinguishes those with no-risk from those with "any-risk", personality factors from the non-suicide contexts provide distinction of the levels of risk: low, medium, and high risk. Within the shared task, our dual-context approach (listed as SBU-HLAB in the official results) achieved state-of-the-art performance predicting suicide risk using a combination of suicide-context and non-suicide posts (Task B), achieving an F1 score of 0.50 over hidden test set labels.
true
[]
[]
Good Health and Well-Being
null
null
null
2019
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
leonova-zuters-2021-frustration
https://aclanthology.org/2021.ranlp-1.93
Frustration Level Annotation in Latvian Tweets with Non-Lexical Means of Expression
We present a neural-network-driven model for annotating frustration intensity in customer support tweets, based on representing tweet texts using a bag-of-words encoding after processing with subword segmentation, together with non-lexical features. The model was evaluated on tweets in the English and Latvian languages, focusing on aspects beyond the pure bag-of-words representations used in previous research. The experimental results show that the model can be successfully applied to texts in a non-English language, and that adding non-lexical features to tweet representations significantly improves performance, while subword segmentation has a moderate but positive effect on model accuracy. Our code and training data are publicly available.
false
[]
[]
null
null
null
The research has been supported by the European Regional Development Fund within the joint project of SIA TILDE and University of Latvia "Multilingual Artificial Intelligence Based Human Computer Interaction" No.1.1.1.1/18/A/148.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maillette-de-buy-wenniger-simaan-2013-formal
https://aclanthology.org/W13-0807
A Formal Characterization of Parsing Word Alignments by Synchronous Grammars with Empirical Evidence to the ITG Hypothesis.
Deciding whether a synchronous grammar formalism generates a given word alignment (the alignment coverage problem) depends on finding an adequate instance grammar and then using it to parse the word alignment. But what does it mean to parse a word alignment by a synchronous grammar? This is formally undefined until we define an unambiguous mapping between grammatical derivations and word-level alignments. This paper proposes an initial, formal characterization of alignment coverage as intersecting two partially ordered sets (graphs) of translation equivalence units, one derived by a grammar instance and another defined by the word alignment. As a first sanity check, we report extensive coverage results for ITG on automatic and manual alignments. Even for the ITG formalism, our formal characterization makes explicit many algorithmic choices often left underspecified in earlier work. The training data used by current statistical machine translation (SMT) models consists of source and target sentence pairs aligned together at the word level (word alignments). For the hierarchical and syntactically-enriched SMT models, e.g., (Chiang, 2007; Zollmann and Venugopal, 2006), this training data is used for extracting statistically weighted Synchronous Context-Free Grammars (SCFGs). Formally speaking, a synchronous grammar defines a set of (source-target) sentence pairs derived synchronously by the grammar. Contrary to common belief, however, a synchronous grammar (see e.g., (Chiang, 2005; Satta and Peserico, 2005)) does not accept (or parse) word alignments. This is because a synchronous derivation generates a tree pair with a bijective binary relation (links) between their nonterminal nodes. For deciding whether a given word alignment is generated/accepted by a given synchronous grammar, it is necessary to interpret the synchronous derivations down to the lexical level. However, it is not yet formally defined how to unambiguously interpret the synchronous derivations of a synchronous grammar as word alignments. One major difficulty is that synchronous productions, in their most general form, may contain unaligned terminal sequences. Consider, for instance, the relatively non-complex synchronous production X → α X^(1) β X^(2) γ X^(3), X → σ X^(2) τ X^(1) μ X^(3), where superscript (i) stands for aligned instances of nonterminal X and all Greek symbols stand for arbitrary non-empty terminal sequences. Given a word-aligned sentence pair it is necessary to bind the terminal sequences by alignments consistent with the given word alignment, and then parse the word alignment with the thus enriched grammar rules. This is not complex if we assume that each of the source terminal sequences is contiguously aligned with a target contiguous sequence, but difficult if we assume arbitrary alignments, including many-to-one and non-contiguously aligned chunks.
false
[]
[]
null
null
null
We thank reviewers for their helpful comments, and thank Mark-Jan Nederhof for illuminating discussions on parsing as intersection. This work is supported by The Netherlands Organization for Scientific Research (NWO) under grant nr. 612.066.929.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ciobotaru-dinu-2021-red
https://aclanthology.org/2021.ranlp-1.34
RED: A Novel Dataset for Romanian Emotion Detection from Tweets
In the Romanian language there are some resources for automatic text comprehension, but for Emotion Detection that is not lexicon-based there are none. To cover this gap, we extracted data from Twitter and created the first dataset containing tweets annotated with five types of emotions: joy, fear, sadness, anger and neutral, with the intent of being used for opinion mining and analysis tasks. In this article we present some features of our novel dataset, and create a benchmark to achieve the first supervised machine learning model for automatic Emotion Detection in Romanian short texts. We investigate the performance of four classical machine learning models: Multinomial Naive Bayes, Logistic Regression, Support Vector Classification and Linear Support Vector Classification. We also investigate more modern approaches like fastText, which makes use of subword information. Lastly, we fine-tune the Romanian BERT for text classification and our experiments show that the BERT-based model has the best performance for the task of Emotion Detection from Romanian tweets.
false
[]
[]
null
null
null
We would like to thank Nicu Ciobotaru and Ioana Alexandra Rȃducanu for their help with the annotation process, Ligia Maria Bȃtrînca for proof reading and suggestions, as well as the anonymous reviewers for their time and valuable comments.We acknowledge the support of a grant of the Romanian Ministry of Education and Research, CCCDI-UEFISCDI, project number 411PED/2020, code PN-III-P2-2.1-PED-2019-2271, within PNCDI III.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hattasch-etal-2020-summarization
https://aclanthology.org/2020.lrec-1.827
Summarization Beyond News: The Automatically Acquired Fandom Corpora
Large state-of-the-art corpora for training neural networks to create abstractive summaries are mostly limited to the news genre, as it is expensive to acquire human-written summaries for other types of text at a large scale. In this paper, we present a novel automatic corpus construction approach to tackle this issue as well as three new large open-licensed summarization corpora based on our approach that can be used for training abstractive summarization models. Our constructed corpora contain fictional narratives, descriptive texts, and summaries about movies, television, and book series from different domains. All sources use a creative commons (CC) license, hence we can provide the corpora for download. In addition, we also provide a ready-to-use framework that implements our automatic construction approach to create custom corpora with desired parameters like the length of the target summary and the number of source documents from which to create the summary. The main idea behind our automatic construction approach is to use existing large text collections (e.g., thematic wikis) and automatically classify whether the texts can be used as (query-focused) multi-document summaries and align them with potential source texts. As a final contribution, we show the usefulness of our automatic construction approach by running state-of-the-art summarizers on the corpora and through a manual evaluation with human annotators.
false
[]
[]
null
null
null
This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1. Thanks to Aurel Kilian and Ben Kohr who helped with the implementation of the first prototype and to all human annotators.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
neill-2019-lda
https://aclanthology.org/W19-7505
LDA Topic Modeling for pram\=aṇa Texts: A Case Study in Sanskrit NLP Corpus Building
Sanskrit texts in epistemology, metaphysics, and logic (i.e., pramāṇa texts) remain underrepresented in computational work. To begin to remedy this, a 3.5 million-token digital corpus has been prepared for document- and word-level analysis, and its potential demonstrated through Latent Dirichlet Allocation (LDA) topic modeling. Attention is also given to data consistency issues, with special reference to the SARIT corpus.
false
[]
[]
null
null
null
This research was supported by DFG Project 279803509 "Digitale kritische Edition des Nyāyabhāṣya" and by the Humboldt Chair of Digital Humanities at the University of Leipzig, especially Dr. Thomas Köntges. Special thanks also to conversation partner Yuki Kyogoku.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2020-spanmlt
https://aclanthology.org/2020.acl-main.296
SpanMlt: A Span-based Multi-Task Learning Framework for Pair-wise Aspect and Opinion Terms Extraction
Aspect terms extraction and opinion terms extraction are two key problems of fine-grained Aspect Based Sentiment Analysis (ABSA). The aspect-opinion pairs can provide a global profile about a product or service for consumers and opinion mining systems. However, traditional methods cannot directly output aspect-opinion pairs without given aspect terms or opinion terms. Although some recent co-extraction methods have been proposed to extract both terms jointly, they fail to extract them as pairs. To this end, this paper proposes an end-to-end method to solve the task of Pair-wise Aspect and Opinion Terms Extraction (PAOTE). Furthermore, this paper treats the problem from a perspective of joint term and relation extraction rather than under the sequence tagging formulation performed in most prior works. We propose a multi-task learning framework based on shared spans, where the terms are extracted under the supervision of span boundaries. Meanwhile, the pair-wise relations are jointly identified using the span representations. Extensive experiments show that our model consistently outperforms state-of-the-art methods.
false
[]
[]
null
null
null
This research is supported in part by the National Natural Science Foundation of China under Grant 61702500.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ye-etal-2020-safer
https://aclanthology.org/2020.acl-main.317
SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions
State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverages the statistical properties of the ensemble to provably certify the robustness. Our method is simple and structure-free in that it only requires the black-box queries of the model outputs, and hence can be applied to any pre-trained models (such as BERT) and any types of models (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks. To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.
false
[]
[]
null
null
null
This work is supported in part by NSF CRII 1830161 and NSF CAREER 1846421.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
siegel-1997-learning
https://aclanthology.org/W97-0318
Learning Methods for Combining Linguistic Indicators to Classify Verbs
Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.
false
[]
[]
null
null
null
Kathleen R. McKeown was extremely helpful regarding the formulation of our work and Judith Klavans regarding linguistic techniques. Alexander D. Charfee, Vasileios Hatzivassiloglou, Dragomir Radev and Dekai Wu provided many helpful insights regarding the evaluation and presentation of our results.This research is supported in part by the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation), the Office of Naval Research under contract N00014-95-1-0745 and by the National Science Foundation under contract GER-90-24069.Finally, we would like to thank Andy Singleton for the use of his GPQuick software.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tiedemann-2013-experiences
https://aclanthology.org/W13-5606
Experiences in Building the Let's MT! Portal on Amazon EC2
In this presentation I will discuss the design and implementation of Let's MT!, a collaborative platform for building statistical machine translation systems. The goal of this platform is to make MT technology, that has been developed in academia, accessible for professional translators, freelancers and everyday users without requiring technical skills and deep background knowledge of the approaches used in the backend of the translation engine. The main challenge in this project was the development of a robust environment that can serve a growing community and large numbers of user requests. The key for success is a distributed environment that allows a maximum of scalability and robustness. With this in mind, we developed a modular platform that can be scaled by adding new nodes to the different components of the system. We opted for a cloud-based solution based on Amazon EC2 to create a cost-efficient environment that can dynamically be adjusted to user needs and system load. In the presentation I will explain our design of the distributed resource repository, the SMT training facilities and the actual translation service. I will mention issues of data security and optimization of the training procedures in order to fit our setup and the expected usage of the system.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2021-dexperts
https://aclanthology.org/2021.acl-long.522
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DEXPERTS: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with "expert" LMs and/or "anti-expert" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. We apply DEXPERTS to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DEXPERTS operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.
false
[]
[]
null
null
null
This research is supported in part by NSF (IIS-1714566), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and Allen Institute for AI. We thank OpenAI, specifically Bianca Martin and Miles Brundage, for providing access to GPT-3 through the OpenAI API Academic Access Program. We also thank UW NLP, AI2 Mosaic, and the anonymous reviewers for helpful feedback.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-seneff-2009-review
https://aclanthology.org/D09-1017
Review Sentiment Scoring via a Parse-and-Paraphrase Paradigm
This paper presents a parse-and-paraphrase paradigm to assess the degrees of sentiment for product reviews. Sentiment identification has been well studied; however, most previous work provides binary polarities only (positive and negative), and the polarity of sentiment is simply reversed when a negation is detected. The extraction of lexical features such as unigram/bigram also complicates the sentiment classification task, as linguistic structure such as implicit long-distance dependency is often disregarded. In this paper, we propose an approach to extracting adverb-adjective-noun phrases based on clause structure obtained by parsing sentences into a hierarchical representation. We also propose a robust general solution for modeling the contribution of adverbials and negation to the score for degree of sentiment. In an application involving extracting aspect-based pros and cons from restaurant reviews, we obtained a 45% relative improvement in recall through the use of parsing methods, while also improving precision.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pratt-pacak-1969-automated
https://aclanthology.org/C69-1101
Automated Processing of Medical English
The present interest of the scientific community in automated language processing has been awakened by the enormous capabilities of the high-speed digital computer. It was recognized that the computer which has the capacity to handle symbols effectively can also treat words as symbols and language as a string of symbols. Automated language processing, as exemplified by current research, had its origin in machine translation. The first attempt to use the computer for automatic language processing took place in 1954. It is known as the "IBM-Georgetown Experiment" in machine translation from Russian into English. (1, 2) The experiment revealed the following facts: a. the digital computer can be used for automated language processing, but b. much deeper knowledge about the structure and semantics of language will be required for the determination and semantic interpretation of sentence structure. The field of automated language processing is quite broad; it includes machine translation, automatic information retrieval (if based on language data), production of computer generated abstracts, indexes and catalogs, development of artificial languages, question answering systems, automatic speech analysis and synthesis, and others.
true
[]
[]
Good Health and Well-Being
null
null
null
1969
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
christensen-etal-2014-hierarchical
https://aclanthology.org/P14-1085
Hierarchical Summarization: Scaling Up Multi-Document Summarization
Multi-document summarization (MDS) systems have been designed for short, unstructured summaries of 10-15 documents, and are inadequate for larger document collections. We propose a new approach to scaling up summarization called hierarchical summarization, and present the first implemented system, SUMMA. SUMMA produces a hierarchy of relatively short summaries, in which the top level provides a general overview and users can navigate the hierarchy to drill down for more details on topics of interest. SUMMA optimizes for coherence as well as coverage of salient information. In an Amazon Mechanical Turk evaluation, users preferred SUMMA ten times as often as flat MDS and three times as often as timelines.
false
[]
[]
null
null
null
We thank Amitabha Bagchi, Niranjan Balasubramanian, Danish Contractor, Oren Etzioni, Tony Fader, Carlos Guestrin, Prachi Jain, Lucy Vanderwende, Luke Zettlemoyer, and the anonymous reviewers for their helpful suggestions and feedback. We thank Hui Lin and Jeff Bilmes for providing us with their code. This research was supported in part by ARO contract W911NF-13-1-0246, DARPA Air Force Research Laboratory (AFRL) contract FA8750-13-2-0019, UW-IITD subcontract RP02815, and the Yahoo! Faculty Research and Engagement Award. This paper is also supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via AFRL contract number FA8650-10-C-7058. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
johnson-1984-discovery
https://aclanthology.org/P84-1070
A Discovery Procedure for Certain Phonological Rules
Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.
false
[]
[]
null
null
null
null
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
adams-etal-2020-induced
https://aclanthology.org/2020.sigmorphon-1.25
Induced Inflection-Set Keyword Search in Speech
We investigate the problem of searching for a lexeme-set in speech by searching for its inflectional variants. Experimental results indicate how lexeme-set search performance changes with the number of hypothesized inflections, while ablation experiments highlight the relative importance of different components in the lexeme-set search pipeline and the value of using curated inflectional paradigms. We provide a recipe and evaluation set for the community to use as an extrinsic measure of the performance of inflection generation approaches.
false
[]
[]
null
null
null
We would like to thank all reviewers for their constructive feedback.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aggarwal-etal-2020-sukhan
https://aclanthology.org/2020.icon-main.29
SUKHAN: Corpus of Hindi Shayaris annotated with Sentiment Polarity Information
Shayari is a form of poetry mainly popular in the Indian subcontinent, in which the poet expresses his emotions and feelings in a very poetic manner. It is one of the best ways to express our thoughts and opinions. Therefore, it is of prime importance to have an annotated corpus of Hindi shayaris for the task of sentiment analysis. In this paper, we introduce SUKHAN, a dataset consisting of Hindi shayaris along with sentiment polarity labels. To the best of our knowledge, this is the first corpus of Hindi shayaris annotated with sentiment polarity information. This corpus contains a total of 733 Hindi shayaris of various genres. Also, this dataset is of utmost value as all the annotation is done manually by five annotators and this makes it a very rich dataset for training purposes. This annotated corpus is also used to build baseline sentiment classification models using machine learning techniques.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ramanand-etal-2010-wishful
https://aclanthology.org/W10-0207
Wishful Thinking - Finding suggestions and 'buy' wishes from product reviews
This paper describes methods aimed at solving the novel problem of automatically discovering 'wishes' from (English) documents such as reviews or customer surveys. These wishes are sentences in which authors make suggestions (especially for improvements) about a product or service or show intentions to purchase a product or service. Such 'wishes' are of great use to product managers and sales personnel, and supplement the area of sentiment analysis by providing insights into the minds of consumers. We describe rules that can help detect these 'wishes' from text. We evaluate these methods on texts from the electronic and banking industries.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
channarukul-etal-2000-enriching
https://aclanthology.org/W00-1422
Enriching partially-specified representations for text realization using an attribute grammar
We present a new approach to enriching underspecified representations of content to be realized as text. Our approach uses an attribute grammar to propagate missing information where needed in a tree that represents the text to be realized. This declaratively-specified grammar mediates between application-produced output and the input to a generation system and, as a consequence, can easily augment an existing generation system. End applications that use this approach can produce high quality text without a fine-grained specification of the text to be realized, thereby reducing the burden to the application. Additionally, representations used by the generator are compact, because values that can be constructed from the constraints encoded by the grammar will be propagated where necessary. This approach is more flexible than defaulting or making a statistically good choice because it can deal with long-distance dependencies (such as gaps and reflexive pronouns). Our approach differs from other approaches that use attribute grammars in that we use the grammar to enrich the representations of the content to be realized, rather than to generate the text itself. We illustrate the approach with examples from our template-based text realizer, YAG.
false
[]
[]
null
null
null
The authors are indebted to John T. Boyland for his helpful comments and suggestions.
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2021-adaptive
https://aclanthology.org/2021.emnlp-main.198
Adaptive Bridge between Training and Inference for Dialogue Generation
Although exposure bias has been widely studied in some NLP tasks, it faces its unique challenges in dialogue response generation, the representative one-to-various generation scenario. In real human dialogue, there are many appropriate responses for the same context, not only with different expressions, but also with different topics. Therefore, due to the much bigger gap between various ground-truth responses and the generated synthetic response, exposure bias is more challenging in the dialogue generation task. What's more, since MLE encourages the model to learn only the common words among different ground-truth responses while ignoring the interesting and specific parts, exposure bias may further lead to the common response generation problem, producing responses such as "I don't know" and "HaHa". In this paper, we propose a novel adaptive switching mechanism, which learns to automatically transit between ground-truth learning and generated learning with regard to the word-level matching score, such as the cosine similarity. Experimental results on both the Chinese STC dataset and the English Reddit dataset show that our adaptive method achieves a significant improvement in terms of metric-based evaluation and human evaluation, as compared with the state-of-the-art exposure bias approaches. Further analysis on an NMT task also shows that our model can achieve a significant improvement.
false
[]
[]
null
null
null
This work is supported by the Beijing Academy of Artificial Intelligence (BAAI), and the National Natural Science Foundation of China (NSFC) (No.61773362).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
davoodi-kosseim-2016-contribution
https://aclanthology.org/W16-3620
On the Contribution of Discourse Structure on Text Complexity Assessment
This paper investigates the influence of discourse features on text complexity assessment. To do so, we created two data sets based on the Penn Discourse Treebank and the Simple English Wikipedia corpora and compared the influence of coherence, cohesion, surface, lexical and syntactic features to assess text complexity. Results show that with both data sets coherence features are more correlated to text complexity than the other types of features. In addition, feature selection revealed that with both data sets the top most discriminating feature is a coherence feature.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for their feedback on the paper. This work was financially supported by NSERC.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
buck-vlachos-2021-trajectory
https://aclanthology.org/2021.adaptnlp-1.15
Trajectory-Based Meta-Learning for Out-Of-Vocabulary Word Embedding Learning
Word embedding learning methods require a large number of occurrences of a word to accurately learn its embedding. However, out-of-vocabulary (OOV) words which do not appear in the training corpus emerge frequently in the smaller downstream data. Recent work formulated OOV embedding learning as a few-shot regression problem and demonstrated that meta-learning can improve the results obtained. However, the algorithm used, model-agnostic meta-learning (MAML), is known to be unstable and to perform worse when a large number of gradient steps are used for parameter updates. In this work, we propose the use of Leap, a meta-learning algorithm which leverages the entire trajectory of the learning process instead of just the beginning and the end points, and thus ameliorates these two issues. In our experiments on a benchmark OOV embedding learning dataset and in an extrinsic evaluation, Leap performs comparably or better than MAML. We go on to examine which contexts are most beneficial to learn an OOV embedding from, and propose that the choice of contexts may matter more than the meta-learning employed.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kondratyuk-2019-cross
https://aclanthology.org/W19-4203
Cross-Lingual Lemmatization and Morphology Tagging with Two-Stage Multilingual BERT Fine-Tuning
We present our CHARLES-SAARLAND system for the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology, in task 2, Morphological Analysis and Lemmatization in Context. We leverage the multilingual BERT model and apply several fine-tuning strategies introduced by UDify demonstrating exceptional evaluation performance on morpho-syntactic tasks. Our results show that fine-tuning multilingual BERT on the concatenation of all available treebanks allows the model to learn cross-lingual information that is able to boost lemmatization and morphology tagging accuracy over fine-tuning it purely monolingually. Unlike UDify, however, we show that when paired with additional character-level and word-level LSTM layers, a second stage of fine-tuning on each treebank individually can improve evaluation even further. Out of all submissions for this shared task, our system achieves the highest average accuracy and f1 score in morphology tagging and places second in average lemmatization accuracy.
false
[]
[]
null
null
null
Daniel Kondratyuk has been supported by the Erasmus Mundus program in Language & Communication Technologies (LCT).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gardner-etal-2020-determining
https://aclanthology.org/2020.wnut-1.4
Determining Question-Answer Plausibility in Crowdsourced Datasets Using Multi-Task Learning
Datasets extracted from social networks and online forums are often prone to the pitfalls of natural language, namely the presence of unstructured and noisy data. In this work, we seek to enable the collection of high-quality question-answer datasets from social media by proposing a novel task for automated quality analysis and data cleaning: question-answer (QA) plausibility. Given a machine or user-generated question and a crowd-sourced response from a social media user, we determine if the question and response are valid; if so, we identify the answer within the free-form response. We design BERT-based models to perform the QA plausibility task, and we evaluate the ability of our models to generate a clean, usable question-answer dataset. Our highest-performing approach consists of a single-task model which determines the plausibility of the question, followed by a multi-task model which evaluates the plausibility of the response as well as extracts answers (Question Plausibility AUROC=0.75, Response Plausibility AUROC=0.78, Answer Extraction F1=0.665).
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-lapata-2014-chinese
https://aclanthology.org/D14-1074
Chinese Poetry Generation with Recurrent Neural Networks
We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection ("what to say") and surface realization ("how to say") by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.
false
[]
[]
null
null
null
We would like to thank Eva Halser for valuable discussions on the machine translation baseline. We are grateful to the 30 Chinese poetry experts for participating in our rating study. Thanks to Gujing Lu, Chu Liu, and Yibo Wang for their help with translating the poems in Table 6 and Table 1.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
takehisa-2017-remarks
https://aclanthology.org/Y17-1028
Remarks on Denominal -Ed Adjectives
This paper discusses denominal adjectives derived by affixation of -ed in English in light of recent advances in linguistic theory and makes the following three claims. First, unlike recent proposals arguing against their denominal status, the paper defends the widely held view that these adjectives are derived from nominals and goes on to argue that the nominal bases involved are structurally reduced: nP. Second, the paper argues that the suffix -ed in denominal adjectives shows no contextual allomorphy, which is a natural consequence that follows from the workings of the mechanism of exponent insertion in Distributed Morphology (Halle and Marantz, 1993). Third, the meaning associated with denominal -ed adjectives stems from the suffix's denotation requiring a relation, which effectively restricts base nominals to relational nouns, derived or underived. It is also argued that the suffix is crucially different from possessive determiners in English (e.g., 's) in that, while the former imposes type shifting on non-relational nouns, the latter undergo type shifting to accommodate them.
false
[]
[]
null
null
null
I am grateful to an anonymous reviewer for providing invaluable comments on an earlier version of this paper. The usual disclaimers apply.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yan-etal-2021-adatag
https://aclanthology.org/2021.acl-long.362
AdaTag: Multi-Attribute Value Extraction from Product Profiles with Adaptive Decoding
Automatic extraction of product attribute values is an important enabling technology in e-Commerce platforms. This task is usually modeled using sequence labeling architectures, with several extensions to handle multi-attribute extraction. One line of previous work constructs attribute-specific models, through separate decoders or entirely separate models. However, this approach constrains knowledge sharing across different attributes. Other contributions use a single multi-attribute model, with different techniques to embed attribute information. But sharing the entire network parameters across all attributes can limit the model's capacity to capture attribute-specific characteristics. In this paper we present AdaTag, which uses adaptive decoding to handle extraction. We parameterize the decoder with pretrained attribute embeddings, through a hypernetwork and a Mixture-of-Experts (MoE) module. This allows for separate, but semantically correlated, decoders to be generated on the fly for different attributes. This approach facilitates knowledge sharing, while maintaining the specificity of each attribute. Our experiments on a real-world e-Commerce dataset show marked improvements over previous methods.
false
[]
[]
null
null
null
This work has been supported in part by NSF SMA 18-29268. We would like to thank Jun Ma, Chenwei Zhang, Colin Lockard, Pascual Martínez-Gómez, Binxuan Huang from Amazon, and all the collaborators in USC INK research lab, for their constructive feedback on the work. We would also like to thank the anonymous reviewers for their valuable comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-kageura-2020-multilingualization
https://aclanthology.org/2020.lrec-1.512
Multilingualization of Medical Terminology: Semantic and Structural Embedding Approaches
The multilingualization of terminology is an essential step in the translation pipeline, to ensure the correct transfer of domain-specific concepts. Many institutions and language service providers construct and maintain multilingual terminologies, which constitute important assets. However, the curation of such multilingual resources requires significant human effort; though automatic multilingual term extraction methods have been proposed so far, they are of limited success as term translation cannot be satisfied by simply conveying meaning, but requires the terminologists and domain experts' knowledge to fit the term within the existing terminology. Here we propose a method to encode the structural properties of terms by aligning their embeddings using graph convolutional networks trained from separate languages. The results show that the structural information can augment the standard bilingual lexicon induction methods, and that taking into account the structural nature of terminologies allows our method to produce better results.
true
[]
[]
Good Health and Well-Being
null
null
null
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ge-etal-2013-event
https://aclanthology.org/D13-1001
Event-Based Time Label Propagation for Automatic Dating of News Articles
Since many applications such as timeline summaries and temporal IR involving temporal analysis rely on document timestamps, the task of automatic dating of documents has become increasingly important. Instead of using feature-based methods as conventional models, our method attempts to date documents at a year level by exploiting relative temporal relations between documents and events, which are very effective for dating documents. Based on this intuition, we propose an event-based time label propagation model called confidence boosting, in which time label information can be propagated between documents and events on a bipartite graph. The experiments show that our event-based propagation model can predict document timestamps with high accuracy, and the model combined with a MaxEnt classifier outperforms the state-of-the-art method for this task, especially when the size of the training set is small.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their valuable suggestions. This paper is supported by NSFC Project 61075067, NSFC Project 61273318 and National Key Technology R&D Program (No: 2011BAH10B04-03).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
murveit-etal-1991-speech
https://aclanthology.org/H91-1015
Speech Recognition in SRI's Resource Management and ATIS Systems
This paper describes improvements to DECIPHER, the speech recognition component in SRI's Air Travel Information Systems (ATIS) and Resource Management systems. DECIPHER is a speaker-independent continuous speech recognition system based on hidden Markov model (HMM) technology. We show significant performance improvements in DECIPHER due to (1) the addition of tied-mixture HMM modeling, (2) rejection of out-of-vocabulary speech and background noise while continuing to recognize speech, (3) adapting to the current speaker, and (4) the implementation of N-gram statistical grammars with DECIPHER. Finally we describe our performance in the February 1991 DARPA Resource Management evaluation (4.8 percent word error) and in the February 1991 DARPA-ATIS speech and SLS evaluations (95 sentences correct, 15 wrong of 140). We show that, for the ATIS evaluation, a well-conceived system integration can be relatively robust to speech recognition errors and to linguistic variability and errors.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meile-1961-problems
https://aclanthology.org/1961.earlymt-1.21
On problems of address in an automatic dictionary of French
In most printed dictionaries, the address of each article, that is, of each set of information pertaining to that particular entry, is simply the word itself. It has to be so in a book for common use: for the general reader's sake, the word must be entered in its complete form. In the case of long words, part only of the letters contained in the word would be enough to provide an adequate address, that is, to achieve an alphabetical classification. As a matter of fact, the last letters of a long word (say a word of more than ten letters) do not play any part whatsoever as classificators. The first four or five letters are very often sufficient; subsequent letters provide an over-definition which, from the point of view of address only, remains useless.
false
[]
[]
null
null
null
null
1961
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bosch-etal-2006-towards
http://www.lrec-conf.org/proceedings/lrec2006/pdf/597_pdf.pdf
Towards machine-readable lexicons for South African Bantu languages
Lexical information for South African Bantu languages is not readily available in the form of machine-readable lexicons. At present the availability of lexical information is restricted to a variety of paper dictionaries. These dictionaries display considerable diversity in the organisation and representation of data. In order to proceed towards the development of reusable and suitably standardised machine-readable lexicons for these languages, a data model for lexical entries becomes a prerequisite. In this study the general purpose model as developed by Bell and Bird (2000) is used as a point of departure. Firstly, the extent to which the Bell and Bird (2000) data model may be applied to and modified for the above-mentioned languages is investigated. Initial investigations indicate that modification of this data model is necessary to make provision for the specific requirements of lexical entries in these languages. Secondly, a data model in the form of an XML DTD for the languages in question, based on our findings regarding (Bell & Bird, 2000) and (Weber, 2002) is presented. Included in this model are additional particular requirements for complete and appropriate representation of linguistic information as identified in the study of available paper dictionaries.
false
[]
[]
null
null
null
This material is based upon work supported by the National Research Foundation under grant number 2053403. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Research Foundation.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-haghighi-2011-ordering
https://aclanthology.org/P11-1111
Ordering Prenominal Modifiers with a Reranking Approach
In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google ngram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline.
false
[]
[]
null
null
null
Many thanks to Margaret Mitchell, Regina Barzilay, Xiao Chen, and members of the CSAIL NLP group for their help and suggestions.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-etal-2012-joint
https://aclanthology.org/D12-1114
Joint Learning for Coreference Resolution with Markov Logic
Pairwise coreference resolution models must merge pairwise coreference decisions to generate final outputs. Traditional merging methods adopt different strategies such as the best-first method and enforcing the transitivity constraint, but most of these methods are used independently of the pairwise learning methods as an isolated inference procedure at the end. We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic. Experimental results show that our joint learning system outperforms independent learning systems. Our system gives a better performance than all the learning-based systems from the CoNLL-2011 shared task on the same dataset. Compared with the best system from CoNLL-2011, which employs a rule-based method, our system shows competitive performance.
false
[]
[]
null
null
null
Part of the work was done when the first author was a visiting student in the Singapore Management University. And this work was partially supported by the National High Technology Research and Development Program of China(863 Program) (No.2012AA011101), the National Natural Science Foundation of China (No.91024009, No.60973053, No.90920011), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20090001110047).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
velardi-etal-2012-new
http://www.lrec-conf.org/proceedings/lrec2012/pdf/295_Paper.pdf
A New Method for Evaluating Automatically Learned Terminological Taxonomies
Evaluating a taxonomy learned automatically against an existing gold standard is a very complex problem, because differences stem from the number, label, depth and ordering of the taxonomy nodes. In this paper we propose casting the problem as one of comparing two hierarchical clusters. To this end we defined a variation of the Fowlkes and Mallows measure (Fowlkes and Mallows, 1983). Our method assigns a similarity value B_i(l,r) to the learned (l) and reference (r) taxonomy for each cut i of the corresponding anonymised hierarchies, starting from the topmost nodes down to the leaf concepts. For each cut i, the two hierarchies can be seen as two clusterings C_i^l, C_i^r of the leaf concepts. We assign a prize to early similarity values, i.e. when concepts are clustered in a similar way down to the lowest taxonomy levels (close to the leaf nodes). We apply our method to the evaluation of the taxonomy learning methods put forward by Navigli et al. (2011) and Kozareva and Hovy (2010).
false
[]
[]
null
null
null
Roberto Navigli and Stefano Faralli gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cettolo-etal-2015-iwslt
https://aclanthology.org/2015.iwslt-evaluation.1
The IWSLT 2015 Evaluation Campaign
null
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
saers-wu-2013-unsupervised-learning
https://aclanthology.org/2013.iwslt-papers.15
Unsupervised learning of bilingual categories in inversion transduction grammar induction
We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese-English translation task.
false
[]
[]
null
null
null
This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under BOLT contract no. HR0011-12-C-0016, and GALE contract nos. HR0011-06-C-0022 and HR0011-06-C-0023; by the European Union under the FP7 grant agreement no. 287658; and by the Hong Kong Research Grants Council (RGC) research grants GRF620811, GRF621008, and GRF612806. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, the EU, or RGC.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
arumae-liu-2019-guiding
https://aclanthology.org/N19-1264
Guiding Extractive Summarization with Question-Answering Rewards
Highlighting while reading is a natural behavior for people to track salient content of a document. It would be desirable to teach an extractive summarizer to do the same. However, a major obstacle to the development of a supervised summarizer is the lack of ground-truth. Manual annotation of extraction units is cost-prohibitive, whereas acquiring labels by automatically aligning human abstracts and source documents can yield inferior results. In this paper we describe a novel framework to guide a supervised, extractive summarization system with question-answering rewards. We argue that quality summaries should serve as a document surrogate to answer important questions, and such question-answer pairs can be conveniently obtained from human abstracts. The system learns to promote summaries that are informative, fluent, and perform competitively on question-answering. Our results compare favorably with those reported by strong summarization baselines as evaluated by automatic metrics and human assessors.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mann-1981-two
https://aclanthology.org/P81-1012
Two Discourse Generators
The task of discourse generation is to produce multisentential text in natural language which (when heard or read) produces effects (informing, motivating, etc.) and impressions (conciseness, correctness, ease of reading, etc.) which are appropriate to a need or goal held by the creator of the text. Because even little children can produce multisentential text, the task of discourse generation appears deceptively easy. It is actually extremely complex, in part because it usually involves many different kinds of knowledge. The skilled writer must know the subject matter, the beliefs of the reader and his own reasons for writing. He must also know the syntax, semantics, inferential patterns, text structures and words of the language. It would be complex enough if these were all independent bodies of knowledge, independently employed. Unfortunately, they are all interdependent in intricate ways. The use of each must be coordinated with all of the others.
false
[]
[]
null
null
null
null
1981
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
khosla-rose-2020-using
https://aclanthology.org/2020.codi-1.3
Using Type Information to Improve Entity Coreference Resolution
Coreference resolution (CR) is an essential part of discourse analysis. Most recently, neural approaches have been proposed to improve over SOTA models from earlier paradigms. So far none of the published neural models leverage external semantic knowledge such as type information. This paper offers the first such model and evaluation, demonstrating modest gains in accuracy by introducing either gold standard or predicted types. In the proposed approach, type information serves both to (1) improve mention representation and (2) create a soft type consistency check between coreference candidate mentions. Our evaluation covers two different grain sizes of types over four different benchmark corpora.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their insightful comments. We are also grateful to the members of the TELEDIA group at LTI, CMU for the invaluable feedback. This work was funded in part by Dow Chemical, and Microsoft.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ye-etal-2016-interactive
https://aclanthology.org/C16-1169
Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix
Interactive-predictive machine translation (IPMT) is a translation mode which combines machine translation technology and human behaviours. In the IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix at the character level, and the advantage of the prefix is not fully exploited. Focusing on this problem, this paper mines the deep constraints of the prefix at the syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms a state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.
false
[]
[]
null
null
null
This work is supported by the National Natural Science Foundation of China (No. 61402299). We would like to thank the anonymous reviewers for their insightful and constructive comments. We also want to thank Yapeng Zhang for help in the preparation of experimental systems in this paper.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2016-extending
https://aclanthology.org/W16-0602
Extending Phrase-Based Translation with Dependencies by Using Graphs
In this paper, we propose a graph-based translation model which takes advantage of discontinuous phrases. The model segments a graph which combines bigram and dependency relations into subgraphs and produces translations by combining translations of these subgraphs. Experiments on Chinese-English and German-English tasks show that our system is significantly better than the phrase-based model. By explicitly modeling the graph segmentation, our system gains further improvement.
false
[]
[]
null
null
null
This research has received funding from the People Programme (
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
duma-menzel-2017-sef
https://aclanthology.org/S17-2024
SEF@UHH at SemEval-2017 Task 1: Unsupervised Knowledge-Free Semantic Textual Similarity via Paragraph Vector
This paper describes our unsupervised knowledge-free approach to the SemEval-2017 Task 1 Competition. The proposed method makes use of Paragraph Vector for assessing the semantic similarity between pairs of sentences. We experimented with various dimensions of the vector and three state-of-the-art similarity metrics. Given a cross-lingual task, we trained models corresponding to its two languages and combined the models by averaging the similarity scores. The results of our submitted runs are above the median scores for five out of seven test sets by means of Pearson Correlation. Moreover, one of our system runs performed best on the Spanish-English-WMT test set ranking first out of 53 runs submitted in total by all participants.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
denero-etal-2006-generative
https://aclanthology.org/W06-3105
Why Generative Phrase Models Underperform Surface Heuristics
We investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. We first propose a simple generative, phrase-based model and verify that its estimates are inferior to those given by surface statistics. The performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with EM. In particular, while word level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. Alternate segmentations rather than alternate alignments compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final BLEU score. We also show that interpolation of the two methods can result in a modest increase in BLEU score.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
atwell-drakos-1987-pattern
https://aclanthology.org/E87-1010
Pattern Recognition Applied to the Acquisition of a Grammatical Classification System From Unrestricted English Text
Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a Corpus of 'raw' English text. A discovery procedure is simpler for a simpler
false
[]
[]
null
null
null
null
1987
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
varanasi-etal-2020-copybert
https://aclanthology.org/2020.nlp4convai-1.3
CopyBERT: A Unified Approach to Question Generation with Self-Attention
Contextualized word embeddings provide better initialization for neural networks that deal with various natural language understanding (NLU) tasks including question answering (QA) and, more recently, question generation (QG). Apart from providing meaningful word representations, pre-trained transformer models such as BERT also provide self-attentions which encode syntactic information that can be probed for dependency parsing and POS-tagging. In this paper, we show that the information from self-attentions of BERT is useful for language modeling of questions conditioned on paragraph and answer phrases. To control the attention span, we use a semi-diagonal mask and utilize a shared model for encoding and decoding, unlike sequence-to-sequence. We further employ a copy mechanism over self-attentions to achieve state-of-the-art results for question generation on the SQuAD dataset.
false
[]
[]
null
null
null
The authors would like to thank the anonymous reviewers for helpful feedback. The work was partially funded by the German Federal Ministry of Education and Research (BMBF) through the project DEEPLEE (01IW17001).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ager-etal-2018-modelling
https://aclanthology.org/K18-1051
Modelling Salient Features as Directions in Fine-Tuned Semantic Spaces
In this paper we consider semantic spaces consisting of objects from some particular domain (e.g. IMDB movie reviews). Various authors have observed that such semantic spaces often model salient features (e.g. how scary a movie is) as directions. These feature directions allow us to rank objects according to how much they have the corresponding feature, and can thus play an important role in interpretable classifiers, recommendation systems, or entity-oriented search engines, among others. Methods for learning semantic spaces, however, are mostly aimed at modelling similarity. In this paper, we argue that there is an inherent trade-off between capturing similarity and faithfully modelling features as directions. Following this observation, we propose a simple method to fine-tune existing semantic spaces, with the aim of improving the quality of their feature directions. Crucially, our method is fully unsupervised, requiring only a bag-of-words representation of the objects as input.
false
[]
[]
null
null
null
This work has been supported by ERC Starting Grant 637277.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dirkson-2019-knowledge
https://aclanthology.org/P19-2009
Knowledge Discovery and Hypothesis Generation from Online Patient Forums: A Research Proposal
The unprompted patient experiences shared on patient forums contain a wealth of unexploited knowledge. Mining this knowledge and cross-linking it with biomedical literature could expose novel insights, which could subsequently provide hypotheses for further clinical research. As of yet, automated methods for open knowledge discovery on patient forum text are lacking. Thus, in this research proposal, we outline future research into methods for mining, aggregating and cross-linking patient knowledge from online forums. Additionally, we aim to address how one could measure the credibility of this extracted knowledge.
true
[]
[]
Good Health and Well-Being
null
null
null
2019
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
symonds-etal-2011-modelling
https://aclanthology.org/Y11-1033
Modelling Word Meaning using Efficient Tensor Representations
Models of word meaning, built from a corpus of text, have demonstrated success in emulating human performance on a number of cognitive tasks. Many of these models use geometric representations of words to store semantic associations between words. Often word order information is not captured in these models. The lack of structural information used by these models has been raised as a weakness when performing cognitive tasks. This paper presents an efficient tensor based approach to modelling word meaning that builds on recent attempts to encode word order information, while providing flexible methods for extracting task specific semantic information.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-2004-computational
https://aclanthology.org/N04-2004
A Computational Framework for Non-Lexicalist Semantics
Under a lexicalist approach to semantics, a verb completely encodes its syntactic and semantic structures, along with the relevant syntax-to-semantics mapping; polysemy is typically attributed to the existence of different lexical entries. A lexicon organized in this fashion contains much redundant information and is unable to capture cross-categorial morphological derivations. The solution is to spread the "semantic load" of lexical entries to other morphemes not typically taken to bear semantic content. This approach follows current trends in linguistic theory, and more perspicuously accounts for alternations in argument structure. I demonstrate how such a framework can be computationally realized with a feature-based, agenda-driven chart parser for the Minimalist Program.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
makrai-etal-2013-applicative
https://aclanthology.org/W13-3207
Applicative structure in vector space models
We introduce a new 50-dimensional embedding obtained by spectral clustering of a graph describing the conceptual structure of the lexicon. We use the embedding directly to investigate sets of antonymic pairs, and indirectly to argue that function application in CVSMs requires not just vectors but two transformations (corresponding to subject and object) as well.
false
[]
[]
null
null
null
Makrai did the work on antonym set testing, Nemeskey built the embedding, Kornai advised. We would like to thank Zsófia Tardos (BUTE) and the anonymous reviewers for useful comments. Work supported by OTKA grant #82333.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
libovicky-etal-2020-expand
https://aclanthology.org/2020.ngt-1.18
Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task
We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% weighted F1 absolute.
true
[]
[]
Quality Education
null
null
null
2020
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
varadi-2000-lexical
http://www.lrec-conf.org/proceedings/lrec2000/pdf/122.pdf
Lexical and Translation Equivalence in Parallel Corpora
In the present paper we intend to investigate to what extent use of parallel corpora can help to eliminate some of the difficulties noted with bilingual dictionaries. The particular issues addressed are the bidirectionality of translation equivalence, the coverage of multiword units, and the amount of implicit knowledge presupposed on the part of the user in interpreting the data. Three lexical items belonging to different word classes were chosen for analysis: the noun head, the verb give and the preposition with. George Orwell's novel 1984 was used as source material, which is available in English-Hungarian sentence aligned form. It is argued that the analysis of translation equivalents displayed in sets of concordances with aligned sentences in the target language holds important implications for bilingual lexicography and automatic word alignment methodology.
false
[]
[]
null
null
null
The research reported in the paper was supported by Országos Tudományos Kutatási Alapprogramok (grant number T026091).
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
choi-etal-1999-english
https://aclanthology.org/1999.mtsummit-1.64
English-to-Korean Web translator : ``FromTo/Web-EK''
The previous English-to-Korean MT systems developed in Korea dealt only with written text as the translation object. Most of them enumerated the following list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions, 2) reduction of too many POS or structural ambiguities, 3) robust processing for long sentences and parsing failures, and 4) selecting the correct word correspondence among several alternatives. These problems can be considered important factors that influence the translation quality of a machine translation system. This paper describes not only solutions to these problems of the previous English-to-Korean machine translation systems but also the management of HTML tags between two structurally different languages, English and Korean. Through these solutions we successfully translate English web documents into Korean ones in the English-to-Korean web translator "FromTo/Web-EK", which has been under development since 1997.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tanase-etal-2020-upb
https://aclanthology.org/2020.semeval-1.296
UPB at SemEval-2020 Task 12: Multilingual Offensive Language Detection on Social Media by Fine-tuning a Variety of BERT-based Models
Offensive language detection is one of the most challenging problems in the natural language processing field, imposed by the rising presence of this phenomenon in online social media. This paper describes our Transformer-based solutions for identifying offensive language on Twitter in five languages (i.e., English, Arabic, Danish, Greek, and Turkish), which were employed in Subtask A of the OffensEval 2020 shared task. Several neural architectures (i.e., BERT, mBERT, RoBERTa, XLM-RoBERTa, and ALBERT), pre-trained using both single-language and multilingual corpora, were fine-tuned and compared using multiple combinations of datasets. Finally, the highest-scoring models were used for our submissions in the competition, which ranked our
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
ahlberg-enache-2012-combining
http://www.lrec-conf.org/proceedings/lrec2012/pdf/360_Paper.pdf
Combining Language Resources Into A Grammar-Driven Swedish Parser
This paper describes work on a rule-based, open-source parser for Swedish. The central component is a wide-coverage grammar implemented in the GF formalism (Grammatical Framework), a dependently typed grammar formalism based on Martin-Löf type theory. GF has strong support for multilinguality and has so far been used successfully for controlled languages (Angelov and Ranta, 2009), and recent experiments have shown that it is also possible to use the framework for parsing unrestricted language. In addition to GF, we use two other main resources: the Swedish treebank Talbanken and the electronic lexicon SALDO. By combining the grammar with a lexicon extracted from SALDO we obtain a parser accepting all sentences described by the given rules. We develop and test this on examples from Talbanken. The resulting parser gives a full syntactic analysis of the input sentences. It will be highly reusable, freely available, and as GF provides libraries for compiling grammars to a number of programming languages, chosen parts of the grammar may be used in various NLP applications.
false
[]
[]
null
null
null
The work has been funded by Center of Language Technology. We would also like to give special thanks to Aarne Ranta, Elisabet Engdahl, Krasimir Angelov, Olga Caprotti, Lars Borin and John Camilleri for their help and support.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sinopalnikova-smrz-2006-intelligent
http://www.lrec-conf.org/proceedings/lrec2006/pdf/275_pdf.pdf
Intelligent Dictionary Interfaces: Usability Evaluation of Access-Supporting Enhancements
The present paper describes psycholinguistic experiments aimed at exploring the way people behave while accessing electronic dictionaries. In our work we focused on the access by meaning that, in comparison with the access by form, is currently less studied and very seldom implemented in modern dictionary interfaces. Thus, the goal of our experiments was to explore dictionary users' requirements and to study what services an intelligent dictionary interface should be able to supply to help solving access by meaning problems. We tested several access-supporting enhancements of electronic dictionaries based on various language resources (corpora, wordnets, word association norms and explanatory dictionaries). Experiments were carried out with native speakers of three European languages-English, Czech and Russian. Results for monolingual and bilingual cases are presented.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
boldrini-etal-2010-emotiblog
https://aclanthology.org/W10-1801
EmotiBlog: A Finer-Grained and More Precise Learning of Subjectivity Expression Models
The exponential growth of subjective information in the framework of the Web 2.0 has led to the need to create Natural Language Processing tools able to analyse and process such data for multiple practical applications. They require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog, a fine-grained annotation scheme for subjectivity. We show the manner in which it is built and demonstrate the benefits it brings to the systems using it for training, through the experiments we carried out on opinion mining and emotion detection. We employ corpora of different textual genres: a set of annotated reported speech extracted from news articles, the set of news titles annotated with polarity and emotion from SemEval 2007 (Task 14), and ISEAR, a corpus of real-life self-expressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high-quality training data for systems dealing with both opinion mining and emotion detection.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tufis-etal-2020-collection
https://aclanthology.org/2020.lrec-1.337
Collection and Annotation of the Romanian Legal Corpus
We present the Romanian legislative corpus which is a valuable linguistic asset for the development of machine translation systems, especially for under-resourced languages. The knowledge that can be extracted from this resource is necessary for a deeper understanding of how law terminology is used and how it can be made more consistent. At this moment, the corpus contains more than 144k documents representing the legislative body of Romania. This corpus is processed and annotated at different levels: linguistically (tokenized, lemmatized and POS-tagged), dependency parsed, chunked, named entities identified and labeled with IATE terms and EUROVOC descriptors. Each annotated document has a CoNLL-U Plus format consisting of 14 columns; in addition to the standard 10-column format, four other types of annotations were added. Moreover, the repository will be periodically updated as new legislative texts are published. These will be automatically collected and transmitted to the processing and annotation pipeline. The access to the corpus is provided through ELRC infrastructure.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research was supported by the EC grant no. INEA/CEF/ICT/A2017/1565710 for the Action no. 2017-EU-IA-0136 entitled "Multilingual Resources for CEF.AT in the legal domain" (MARCELL).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false