Schema (field: type, stats):

ID: stringlengths (11–54)
url: stringlengths (33–64)
title: stringlengths (11–184)
abstract: stringlengths (17–3.87k)
label_nlp4sg: bool (2 classes)
task: sequence
method: sequence
goal1: stringclasses (9 values)
goal2: stringclasses (9 values)
goal3: stringclasses (1 value)
acknowledgments: stringlengths (28–1.28k)
year: stringlengths (4–4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
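The schema above maps onto one record per paper. As a minimal sketch of how such records might be modeled and filtered, the following uses only the Python standard library; the `PaperRecord` class and its `sdgs` dict field are illustrative conveniences (not part of the dataset itself), and the two sample rows are copied from records in this dump.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PaperRecord:
    """One row of the dataset, using field names from the schema above."""
    ID: str
    url: str
    title: str
    label_nlp4sg: bool
    goal1: Optional[str] = None
    sdgs: dict = field(default_factory=dict)  # sdg1..sdg17 flags (True entries only here)

# Two sample rows copied from the records in this dump.
rows = [
    PaperRecord(
        ID="bartolini-etal-2004-semantic",
        url="http://www.lrec-conf.org/proceedings/lrec2004/pdf/709.pdf",
        title="Semantic Mark-up of Italian Legal Texts Through NLP-based Techniques",
        label_nlp4sg=True,
        goal1="Peace, Justice and Strong Institutions",
        sdgs={"sdg16": True},
    ),
    PaperRecord(
        ID="mehri-eskenazi-2020-usr",
        url="https://aclanthology.org/2020.acl-main.64",
        title="USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation",
        label_nlp4sg=False,
    ),
]

# Select the NLP-for-social-good subset via the label_nlp4sg flag.
sg_papers = [r for r in rows if r.label_nlp4sg]
print([r.ID for r in sg_papers])  # -> ['bartolini-etal-2004-semantic']
```

The same filter works unchanged on the full dataset once each row is parsed into a `PaperRecord`.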
mehri-eskenazi-2020-usr
https://aclanthology.org/2020.acl-main.64
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language generation metrics have been shown to be ineffective for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains unsupervised models to measure several desirable qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48, system-level: 1.0). USR additionally produces interpretable measures for several desirable properties of dialog.
false
[]
[]
null
null
null
We thank the following individuals for their help with annotation: Evgeniia Razumovskaia, Felix Labelle, Mckenna Brown and Yulan Feng.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
partanen-rueter-2019-survey
https://aclanthology.org/W19-8009
Survey of Uralic Universal Dependencies development
This paper attempts to evaluate some of the systematic differences in Uralic Universal Dependencies treebanks from a perspective that would help to introduce reasonable improvements in treebank annotation consistency within this language family. The study finds that the coverage of Uralic languages in the project is already relatively high, and the majority of typically Uralic features are already present and can be discussed on the basis of existing treebanks. Some of the idiosyncrasies found in individual treebanks stem from language-internal grammar traditions, and could be a target for harmonization in later phases.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shinnou-sasaki-2008-spectral
http://www.lrec-conf.org/proceedings/lrec2008/pdf/62_paper.pdf
Spectral Clustering for a Large Data Set by Reducing the Similarity Matrix Size
Spectral clustering is a powerful clustering method for document data sets. However, it requires solving an eigenvalue problem for a matrix derived from the similarity matrix of the data set, so it is impractical for large data sets. To overcome this problem, we propose a method to reduce the size of the similarity matrix. First, using k-means, we obtain a clustering of the given data set. From each cluster, we pick some data points near the cluster center and merge them into a single point; we call such a merged set a "committee." All points outside the committees remain individual data points. We then construct the similarity matrix over these points. The resulting similarity matrix is reduced enough that spectral clustering can be performed on it.
false
[]
[]
null
null
null
This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research on Priority Areas "Japanese Corpus", 19011001, 2007.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bartolini-etal-2004-semantic
http://www.lrec-conf.org/proceedings/lrec2004/pdf/709.pdf
Semantic Mark-up of Italian Legal Texts Through NLP-based Techniques
In this paper we illustrate an approach to information extraction from legal texts using SALEM. SALEM is an NLP architecture for semantic annotation and indexing of Italian legislative texts, developed by ILC in close collaboration with ITTIG-CNR, Florence. Results of SALEM performance on a test sample of about 500 Italian law paragraphs are provided.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
huo-etal-2019-graph
https://aclanthology.org/D19-5319
Graph Enhanced Cross-Domain Text-to-SQL Generation
Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations. Existing deep learning approaches for semantic parsing have shown promise on a variety of benchmark data sets, particularly on text-to-SQL parsing. However, most text-to-SQL parsers do not generalize to unseen data sets in different domains. In this paper, we propose a new cross-domain learning scheme to perform text-to-SQL translation and demonstrate its use on Spider, a large-scale cross-domain text-to-SQL data set. We improve upon a state-of-the-art Spider model, SyntaxSQLNet, by constructing a graph of column names for all databases and using graph neural networks to compute their embeddings. The resulting embeddings offer better cross-domain representations and SQL queries, as evidenced by substantial improvement on the Spider data set compared to SyntaxSQLNet.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
petiwala-etal-2012-textbook
https://aclanthology.org/W12-5806
Textbook Construction from Lecture Transcripts
null
true
[]
[]
Quality Education
null
null
null
2012
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
huang-etal-1997-segmentation
https://aclanthology.org/O97-4003
Segmentation Standard for Chinese Natural Language Processing
This paper proposes a segmentation standard for Chinese natural language processing. The standard is proposed to achieve linguistic felicity, computational feasibility, and data uniformity. Linguistic felicity is maintained by a definition of segmentation unit that is equivalent to the theoretical definition of word, as well as a set of segmentation principles that are equivalent to a functional definition of a word. Computational feasibility is ensured by the fact that the above functional definitions are procedural in nature and can be converted to segmentation algorithms as well as by the implementable heuristic guidelines which deal with specific linguistic categories. Data uniformity is achieved by stratification of the standard itself and by defining a standard lexicon as part of the standard.
false
[]
[]
null
null
null
Research reported in this paper is partially supported by the Standardization Bureau of Taiwan, ROC. The authors are indebted to the following taskforce committee members for their invaluable contribution to the project: Claire H.H. Chang, One-Soon Her, Shuan-fan Huang, James H.Y. Tai, Charles T.C Tang, Jyun-shen Chang, Hsin-hsi Chen, Hsi-jiann Lee, Jhing-fa Wang, Chao-Huang Chang, Chiu-tang Chen, Una Y.L. Hsu, Jyn-jie Kuo, Hui-chun Ma, and Lin-Mei Wei. We would like to thank the three CLCLP reviewers for their constructive comments. We are also indebted to our colleagues at CKIP, Academia Sinica for their unfailing support as well as helpful suggestions. Any remaining errors are, of course, ours.
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xia-etal-2022-structured
https://aclanthology.org/2022.acl-long.107
Structured Pruning Learns Compact and Accurate Models
The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but hardly achieve speedups as large as distillation. Distillation methods, however, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method, CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches the distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, which controls the pruning decision of each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from unpruned to pruned models during optimization. Our experiments on the GLUE and SQuAD datasets show that CoFi yields models with over 10× speedups with a small accuracy drop, showing its effectiveness and efficiency compared to previous pruning and distillation approaches.
false
[]
[]
null
null
null
The authors thank Tao Lei from Google Research, Ameet Deshpande, Dan Friedman, Sadhika Malladi from Princeton University and the anonymous reviewers for their valuable feedback on our paper. This research is supported by a Hisashi and Masae Kobayashi *67 Fellowship and a Google Research Scholar Award.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-lan-2015-ecnu
https://aclanthology.org/S15-2125
ECNU: Extracting Effective Features from Multiple Sequential Sentences for Target-dependent Sentiment Analysis in Reviews
This paper describes our systems submitted to the target-dependent sentiment polarity classification subtask of the aspect-based sentiment analysis (ABSA) task (i.e., Task 12) in SemEval 2015. To address this problem, we extracted several effective features from three sequential sentences, including sentiment lexicon, linguistic and domain-specific features. We then employed these features to construct classifiers using a supervised classification algorithm. In the laptop domain, our systems ranked 2nd out of 6 constrained submissions and 2nd out of 7 unconstrained submissions. In the restaurant domain, the rankings are 5th out of 6 and 2nd out of 8, respectively.
false
[]
[]
null
null
null
This research is supported by grants from Science and Technology Commission of Shanghai Municipality under research grant no. (14DZ2260800 and 15ZR1410700) and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cheng-etal-2020-dynamically
https://aclanthology.org/2020.findings-emnlp.121
Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning
Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). This pair-wise treatment has two crucial limitations: 1) two TLINKs involving a common mention do not share information; and 2) existing models train independent classifiers for each TLINK category (E2E, E2T and E2D), which prevents them from using the whole data. This paper presents an event-centric model that manages dynamic event representations across multiple TLINKs. Our model handles the three TLINK categories with multi-task learning to leverage the full size of the data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cohn-blunsom-2009-bayesian
https://aclanthology.org/D09-1037
A Bayesian Model of Syntax-Directed Tree to String Grammar Induction
Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step.
false
[]
[]
null
null
null
The authors acknowledge the support of the EPSRC (grants GR/T04557/01 and EP/D074959/1). This work has made use of the resources provided by the Edinburgh Compute and Data Facility (ECDF). The ECDF is partially supported by the eDIKT initiative.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stathopoulos-etal-2018-variable
https://aclanthology.org/N18-1028
Variable Typing: Assigning Meaning to Variables in Mathematical Text
Information about the meaning of mathematical variables in text is useful in NLP/IR tasks such as symbol disambiguation, topic modeling and mathematical information retrieval (MIR). We introduce variable typing, the task of assigning one mathematical type (multi-word technical terms referring to mathematical concepts) to each variable in a sentence of mathematical text. As part of this work, we also introduce a new annotated data set composed of 33,524 data points extracted from scientific documents published on arXiv. Our intrinsic evaluation demonstrates that our data set is sufficient to successfully train and evaluate current classifiers from three different model architectures. The best performing model is evaluated on an extrinsic task: MIR, by producing a typed formula index. Our results show that the best performing MIR models make use of our typed index, compared to a formula index only containing raw symbols, thereby demonstrating the usefulness of variable typing. (Example sentence of mathematical text: "Let P be a parabolic subgroup of GL(n) with Levi decomposition P = MN, where N is the unipotent radical.")
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
anthony-patrick-2004-dependency
https://aclanthology.org/W04-0815
Dependency based logical form transformations
This paper describes a system developed for the transformation of English sentences into a first-order logical form representation. The methodology is centered on the use of a dependency-grammar-based parser. We demonstrate the suitability of applying a dependency-parser-based solution to the given task and in turn explain some of the limitations and challenges involved when using such an approach. The efficiencies and deficiencies of our approach are discussed, as well as considerations for further enhancements.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
terrell-mutlu-2012-regression
https://aclanthology.org/W12-1639
A Regression-based Approach to Modeling Addressee Backchannels
During conversations, addressees produce conversational acts-verbal and nonverbal backchannels-that facilitate turn-taking, acknowledge speakership, and communicate common ground without disrupting the speaker's speech. These acts play a key role in achieving fluent conversations. Therefore, gaining a deeper understanding of how these acts interact with speaker behaviors in shaping conversations might offer key insights into the design of technologies such as computer-mediated communication systems and embodied conversational agents. In this paper, we explore how a regression-based approach might offer such insights into modeling predictive relationships between speaker behaviors and addressee backchannels in a storytelling scenario. Our results reveal speaker eye contact as a significant predictor of verbal, nonverbal, and bimodal backchannels and utterance boundaries as predictors of nonverbal and bimodal backchannels.
false
[]
[]
null
null
null
We would like to thank Faisal Khan for his help in data collection and processing. This work was supported by National Science Foundation award 1149970.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2020-reconstructing
https://aclanthology.org/2020.aacl-main.81
Reconstructing Event Regions for Event Extraction via Graph Attention Networks
Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the document-level context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task.
false
[]
[]
null
null
null
This work is supported by the National Key R&D Program of China (No. 2018YFB1005100), the National Natural Science Foundation of China (No. 61922085, No. U1936207, No. 61806201) and the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDBS-SSW-JSC006). This work is also supported by the CCF-Tencent Open Research Fund, the Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and an independent research project of the National Laboratory of Pattern Recognition.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhu-etal-2013-improved
https://aclanthology.org/P13-1019
Improved Bayesian Logistic Supervised Topic Models with Data Augmentation
Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions. We address these issues by: 1) introducing a regularization constant to better balance the two parts based on an optimization formulation of Bayesian inference; and 2) developing a simple Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm has analytical forms of each conditional distribution without making any restricting assumptions and can be easily parallelized. Empirical results demonstrate significant improvements on prediction performance and time efficiency.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
diab-etal-2004-automatic
https://aclanthology.org/N04-4038
Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks
To date, there are no fully automated systems addressing the community's need for fundamental language processing tools for Arabic text. In this paper, we present a Support Vector Machine (SVM) based approach to automatically tokenize (segmenting off clitics), part-of-speech (POS) tag and annotate base phrases (BPs) in Arabic text. We adapt highly accurate tools that have been developed for English text and apply them to Arabic text. Using standard evaluation metrics, we report that the SVM-TOK tokenizer achieves an F(β=1) score of 99.12, the SVM-POS tagger achieves an accuracy of 95.49%, and the SVM-BP chunker yields an F(β=1) score of 92.08.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
anastasopoulos-etal-2020-tico
https://aclanthology.org/2020.nlpcovid19-2.5
TICO-19: the Translation Initiative for COvid-19
The COVID-19 pandemic is the worst pandemic to strike the world in over a century. Crucial to stemming the tide of the SARS-CoV-2 virus is communicating to vulnerable populations the means by which they can protect themselves. To this end, the collaborators forming the Translation Initiative for COvid-19 (TICO-19) have made test and development data available to AI and MT researchers in 35 different languages in order to foster the development of tools and resources for improving access to information about COVID-19 in these languages. In addition to 9 high-resourced, "pivot" languages, the team is targeting 26 lesser-resourced languages, in particular languages of Africa, South Asia and Southeast Asia, whose populations may be the most vulnerable to the spread of the virus. The same data is translated into all of the languages represented, meaning that testing or development can be done for any pairing of languages in the set. Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank the people who made this effort possible: Tanya Badeka, Jen Wang, William Wong, Rebekkah Hogan, Cynthia Gao, Rachael Brunckhorst, Ian Hill, Bob Jung, Jason Smith, Susan Kim Chan, Romina Stella, Keith Stevens. We also extend our gratitude to the many translators and the quality reviewers whose hard work are represented in our benchmarks and in our translation memories. Some of the languages were very difficult to source, and the burden in these cases often fell to a very small number of translators. We thank you for the many hours you spent translating and, in many cases, re-translating content.
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
giannakopoulos-etal-2017-multiling
https://aclanthology.org/W17-1001
MultiLing 2017 Overview
In this brief report we present an overview of the MultiLing 2017 effort and workshop, as implemented within EACL 2017. MultiLing is a community-driven initiative that pushes the state-of-the-art in Automatic Summarization by providing data sets and fostering further research and development of summarization systems. This year the scope of the workshop was widened, bringing together researchers that work on summarization across sources, languages and genres. We summarize the main tasks planned and implemented this year, also providing insights on next steps.
false
[]
[]
null
null
null
This work was supported by the project MediaGist, EU's FP7 People Programme (Marie Curie Actions), no. 630786.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kireyev-2009-semantic
https://aclanthology.org/N09-1060
Semantic-based Estimation of Term Informativeness
The idea that some words carry more semantic content than others, has led to the notion of term specificity, or informativeness. Computational estimation of this quantity is important for various applications such as information retrieval. We propose a new method of computing term specificity, based on modeling the rate of learning of word meaning in Latent Semantic Analysis (LSA). We analyze the performance of this method both qualitatively and quantitatively and demonstrate that it shows excellent performance compared to existing methods on a broad range of tests. We also demonstrate how it can be used to improve existing applications in information retrieval and summarization.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
semmar-laib-2017-building
https://doi.org/10.26615/978-954-452-049-6_085
Building Multiword Expressions Bilingual Lexicons for Domain Adaptation of an Example-Based Machine Translation System
null
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
inoue-etal-2022-learning
https://aclanthology.org/2022.findings-acl.81
Learning and Evaluating Character Representations in Novels
We address the problem of learning fixed-length vector representations of characters in novels. Recent advances in word embeddings have proven successful in learning entity representations from short texts, but fall short on longer documents because they do not capture full book-level information. To overcome the weakness of such text-based embeddings, we propose two novel methods for representing characters: (i) graph neural network-based embeddings from a full corpus-based character network; and (ii) low-dimensional embeddings constructed from the occurrence pattern of characters in each novel. We test the quality of these character embeddings using a new benchmark suite to evaluate character representations, encompassing 12 different tasks. We show that our representation techniques combined with text-based embeddings lead to the best character representations, outperforming text-based embeddings in four tasks. Our dataset is made publicly available to stimulate additional work in this area.
false
[]
[]
null
null
null
We would like to thank anonymous reviewers for valuable and insightful feedback.
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
netisopakul-chattupan-2015-thai
https://aclanthology.org/Y15-1022
Thai Stock News Sentiment Classification using Wordpair Features
Thai stock brokers issue daily stock news for their customers. One broker labels these news items with plus, minus and zero signs to indicate the type of recommendation. This paper proposes to classify Thai stock news by extracting important text from the news in the form of "wordpairs". Three wordpair sets, manual wordpair extraction (ME), manual wordpair addition (MA), and automated wordpair combination (AC), are constructed and compared on precision, recall and F-measure. Using this broker's news as a training set and unseen stock news from other brokers as a testing set, the experiment shows that all three sets give similar results on the training set, but the second and third sets give better classification results on stock news from unseen brokers.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-etal-2017-text
https://aclanthology.org/P17-3009
Text-based Speaker Identification on Multiparty Dialogues Using Multi-document Convolutional Neural Networks
We propose a convolutional neural network model for text-based speaker identification on multiparty dialogues extracted from the TV show, Friends. While most previous works on this task rely heavily on acoustic features, our approach attempts to identify speakers in dialogues using their speech patterns as captured by transcriptions to the TV show. It has been shown that different individual speakers exhibit distinct idiolectal styles. Several convolutional neural network models are developed to discriminate between differing speech patterns. Our results confirm the promise of text-based approaches, with the best performing model showing an accuracy improvement of over 6% upon the baseline CNN model.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
deng-nakamura-2005-investigating
https://aclanthology.org/I05-2025
Investigating the Features that Affect Cue Usage of Non-native Speakers of English
At present, the population of non-native speakers of English is twice that of native speakers. It is necessary to explore text generation strategies for non-native users; however, little has been done in this field. This study investigates the features that affect the placement (where to place a cue) of "because" for non-native speakers. A machine learning program, C4.5, was applied to induce classification models of the placement.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jain-gandhi-2022-comprehensive
https://aclanthology.org/2022.findings-acl.270
Comprehensive Multi-Modal Interactions for Referring Image Segmentation
We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to the natural language description. Addressing RIS efficiently requires considering the interactions happening across visual and linguistic modalities and the interactions within each modality. Existing methods are limited because they either compute different forms of interactions sequentially (leading to error propagation) or ignore intra-modal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach's performance on four benchmark datasets, showing considerable performance gains over the existing state-of-the-art (SOTA) methods.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
faraj-etal-2021-sarcasmdet
https://aclanthology.org/2021.wanlp-1.44
SarcasmDet at Sarcasm Detection Task 2021 in Arabic using AraBERT Pretrained Model
This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (sub-task 1, Sarcasm Detection). The goal of the sub-task is to identify whether a tweet is sarcastic or not. Our solution has been developed using an ensemble technique with the AraBERT pre-trained model. This paper describes the architecture of the submitted solution in the shared task. It also details the experiments and the hyperparameter tuning that led to this outperforming result. Besides, the paper discusses and analyzes the results by comparing, in a table, all the models that we trained or tested to build a robust model. Our model is ranked fifth out of 27 teams with an F1-score of 0.5989 on the sarcastic class. It is worth mentioning that our model achieved the highest accuracy score, 0.7830, in this competition.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chang-etal-2014-semantic-frame
https://aclanthology.org/Y14-1011
Semantic Frame-based Statistical Approach for Topic Detection
We propose a statistical frame-based approach (FBA) for natural language processing, and demonstrate its advantage over traditional machine learning methods by using topic detection as a case study. FBA perceives and identifies semantic knowledge in a more general manner by collecting important linguistic patterns within documents through a unique flexible matching scheme that allows word insertion, deletion and substitution (IDS) to capture linguistic structures within the text. In addition, FBA can also overcome major issues of the rule-based approach by reducing human effort through its highly automated pattern generation and summarization. Using the Yahoo! Chinese news corpus containing about 140,000 news articles, we provide a comprehensive performance evaluation that demonstrates the effectiveness of FBA in detecting the topic of a document by exploiting the semantic association and the context within the text. Moreover, it outperforms common topic models like Naïve Bayes, Vector Space Model, and LDA-SVM.
false
[]
[]
null
null
null
This study is conducted under the NSC 102-3114-Y-307-026 "A Research on Social Influence and Decision Support Analytics" of the Institute for Information Industry which is subsidized by the National Science Council.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ling-etal-2015-design
https://aclanthology.org/Q15-1023
Design Challenges for Entity Linking
Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-the-art systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.
false
[]
[]
null
null
null
Acknowledgements The authors thank Luke Zettlemoyer, Tony Fader, Kenton Lee, Mark Yatskar for constructive suggestions on an early draft and all members of the LoudLab group and the LIL group for helpful discussions. We also thank the action editor and the anonymous reviewers for valuable comments. This work is supported in part by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-13-2-0019, an ONR grant N00014-12-1-0211, a WRF / TJ Cable Professorship, a gift from Google, an ARO grant number W911NF-13-1-0246, and by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of DARPA, AFRL, or the US government.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhong-etal-2020-element
https://aclanthology.org/2020.emnlp-main.540
An Element-aware Multi-representation Model for Law Article Prediction
Existing works have proved that using law articles as external knowledge can improve the performance of the Legal Judgment Prediction. However, they do not fully use law article information and most of the current work is only for single label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which can make full use of law article information and can be used for multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles. It generates multiple representations of a fact for classification. Every label has a law-aware fact representation to encode more information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism between multiple representations. Compared with baseline models like TopJudge, this model improves the accuracy of 5.84%, the macro F1 of 6.42%, and the micro F1 of 4.28%.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank all reviewers for the valuable comments. This work is supported by the National Natural Science Foundation of China (No. 61472191 and No. 61772278).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
goldwater-etal-2008-words
https://aclanthology.org/P08-1044
Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase ASR Error Rates
Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers.
false
[]
[]
null
null
null
This work was supported by the Edinburgh-Stanford LINK and ONR MURI award N000140510388. We thank Andreas Stolcke for providing the ASR output, language model, and forced alignments used here, and Raghunandan Kumaran and Katrin Kirchhoff for earlier datasets and additional help.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mass-etal-2022-conversational
https://aclanthology.org/2022.dialdoc-1.7
Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval
We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup. User-asks (queries) and system-answers, as well as system-asks (clarification questions) and user response, in order to clarify her information needs. We focus on the task of selecting the next clarification question, given the conversation context. Our method leverages passage retrieval from a background content to fine-tune two deep-learning models for ranking candidate clarification questions. We evaluated our method on two different use-cases. The first is an open domain conversational search in a large web collection. The second is a task-oriented customer-support setup. We show that our method performs well on both use-cases.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cinkova-etal-2016-graded
https://aclanthology.org/L16-1137
Graded and Word-Sense-Disambiguation Decisions in Corpus Pattern Analysis: a Pilot Study
We present a pilot analysis of a new linguistic resource, VPS-GradeUp (available at http://hdl.handle.net/11234/1-1585). The resource contains 11,400 graded human decisions on usage patterns of 29 English lexical verbs, randomly selected from the Pattern Dictionary of English Verbs (Hanks, 2000-2014). The selection was random and based on their frequency and the number of senses their lemmas have in PDEV. This data set has been created to observe the interannotator agreement on PDEV patterns produced using the Corpus Pattern Analysis (Hanks, 2013). Apart from the graded decisions, the data set also contains traditional Word-Sense-Disambiguation (WSD) labels. We analyze the associations between the graded annotation and WSD annotation. The results of the respective annotations do not correlate with the size of the usage pattern inventory for the respective verbs lemmas, which makes the data set worth further linguistic analysis.
false
[]
[]
null
null
null
This work has been using language resources developed and/or stored and/or distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth, and Sports of the Czech Republic (project LM2015071). For most implementation we used R (R Core Team, 2015).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
steele-specia-2018-vis
https://aclanthology.org/N18-5015
Vis-Eval Metric Viewer: A Visualisation Tool for Inspecting and Evaluating Metric Scores of Machine Translation Output
Machine Translation systems are usually evaluated and compared using automated evaluation metrics such as BLEU and METEOR to score the generated translations against human translations. However, the interaction with the output from the metrics is relatively limited and results are commonly a single score along with a few additional statistics. Whilst this may be enough for system comparison it does not provide much useful feedback or a means for inspecting translations and their respective scores. Vis-Eval Metric Viewer (VEMV) is a tool designed to provide visualisation of multiple evaluation scores so they can be easily interpreted by a user. VEMV takes in the source, reference, and hypothesis files as parameters, and scores the hypotheses using several popular evaluation metrics simultaneously. Scores are produced at both the sentence and dataset level and results are written locally to a series of HTML files that can be viewed on a web browser. The individual scored sentences can easily be inspected using powerful search and selection functions and results can be visualised with graphical representations of the scores and distributions.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
diwersy-2014-varitext
https://aclanthology.org/W14-5306
The Varitext platform and the Corpus des vari\'et\'es nationales du fran\ccais (CoVaNa-FR) as resources for the study of French from a pluricentric perspective
This paper reports on the francophone corpus archive Corpus des variétés nationales du français (CoVaNa-FR) and the lexico-statistical platform Varitext. It outlines the design and data format of the samples as well as presenting various usage scenarios related to the applications featured by the platform's toolbox.
false
[]
[]
null
null
null
The author wishes to thank the reviewers for their valuable comments which helped to clarify the main points of the paper.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
eschenbach-etal-1989-remarks
https://aclanthology.org/E89-1022
Remarks on Plural Anaphora
The interpretation of plural anaphora often requires the construction of complex reference objects (RefOs) out of RefOs which were formerly introduced not by plural terms but by a number of singular terms only. Often, several complex RefOs can be constructed, but only one of them is the preferred referent for the plural anaphor in question. As a means of explanation for preferred and non-preferred interpretations of plural anaphora, the concept of a Common Association Basis (CAB) for the potential atomic parts of a complex object is introduced in the following. CABs pose conceptual constraints on the formation of complex RefOs in general. We argue that in cases where a suitable CAB for the atomic RefOs introduced in the text exists, the corresponding complex RefO is constructed as early as in the course of processing the antecedent sentence and put into the focus domain of the discourse model. Thus, the search for a referent for a plural anaphor is constrained to a limited domain of RefOs according to the general principles of focus theory in NLP. Further principles of interpretation are suggested which guide the resolution of plural anaphora in cases where more than one suitable complex RefO is in focus. * The research on this paper was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant Ha 1237/2-1. GAP is the acronym for "Gruppierungs- und Abgrenzungsprozesse beim Aufbau sprachlich angeregter mentaler Modelle" (Processes of grouping and separation in the construction of mental models from texts), a research project carried out in the DFG-program "Kognitive Linguistik".
false
[]
[]
null
null
null
We thank Ewald Lang, Geoff Simmons (who also corrected our English) and Andrea Schopp for stimulating discussions and three anonymous referees from ACL for their comments on an earlier version of this paper.
1989
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sjobergh-araki-2008-multi
http://www.lrec-conf.org/proceedings/lrec2008/pdf/133_paper.pdf
A Multi-Lingual Dictionary of Dirty Words
We present a multilingual dictionary of dirty words. We have collected about 3,200 dirty words in several languages and built a database of these. The language with the most words in the database is English, though there are several hundred dirty words in for instance Japanese too. Words are classified into their general meaning, such as what part of the human anatomy they refer to. Words can also be assigned a nuance label to indicate if it is a cute word used when speaking to children, a very rude word, a clinical word etc. The database is available online and will hopefully be enlarged over time. It has already been used in research on for instance automatic joke generation and emotion detection.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This work was done as part of a project funded by the Japanese Society for the Promotion of Science (JSPS). We would like to thank some of the anonymous reviewers for interesting suggestions for extending our work. We would also like to thank the volunteers who have contributed dirty words to the dictionary, especially Svetoslav Dankov who also helped out with various practical things.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
fernandez-etal-2007-referring
https://aclanthology.org/2007.sigdial-1.25
Referring under Restricted Interactivity Conditions
We report results on how the collaborative process of referring in task-oriented dialogue is affected by the restrictive interactivity of a turn-taking policy commonly used in dialogue systems, namely push-to-talk. Our findings show that the restriction did not have a negative effect. Instead, the stricter control imposed at the interaction level favoured longer, more effective referring expressions, and induced a stricter and more structured performance at the level of the task.
false
[]
[]
null
null
null
Acknowledgements. This work was supported by the EU Marie Curie Programme (first author) and the DFG Emmy Noether Programme (last author). Thanks to the anonymous reviewers for their helpful comments.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
suzuki-2004-phrase
http://www.lrec-conf.org/proceedings/lrec2004/pdf/272.pdf
Phrase-Based Dependency Evaluation of a Japanese Parser
Extraction of predicate-argument structure is an important task that requires evaluation for many applications, yet annotated resources of predicate-argument structure are currently scarce, especially for languages other than English. This paper presents an evaluation of a Japanese parser based on dependency relations as proposed by Lin (1995, 1998), but using phrase dependency instead of word dependency. Phrase-based dependency analysis has been the preferred form of Japanese syntactic analysis, yet the use of annotated resources in this format has so far been limited to training and evaluation of dependency analyzers. We will show that (1) evaluation based on phrase-dependency is particularly well-suited for Japanese, even for an evaluation of phrase-structure grammar, and that (2) in spite of shortcomings, the proposed evaluation method has the advantage of utilizing currently available surface-based annotations in a way that is relevant to predicate-argument structure.
false
[]
[]
null
null
null
I would like to thank Mari Brunson for producing the KCstyle annotation for various data sets for our experiments.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abu-jbara-etal-2011-towards
https://aclanthology.org/P11-2043
Towards Style Transformation from Written-Style to Audio-Style
In this paper, we address the problem of optimizing the style of textual content to make it more suitable to being listened to by a user as opposed to being read. We study the differences between the written style and the audio style by consulting the linguistics and journalism literatures. Guided by this study, we suggest a number of linguistic features to distinguish between the two styles. We show the correctness of our features and the impact of style transformation on the user experience through statistical analysis, a style classification task, and a user study.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sperber-etal-2019-attention
https://aclanthology.org/Q19-1020
Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multitask-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.
false
[]
[]
null
null
null
We thank Adam Lopez, Stefan Constantin, and the anonymous reviewers for their helpful comments. The work leading to these results has received funding from the European Union under grant agreement no. 825460.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
erbs-etal-2013-hierarchy
https://aclanthology.org/R13-1033
Hierarchy Identification for Automatically Generating Table-of-Contents
A table-of-contents (TOC) provides a quick reference to a document's content and structure. We present the first study on identifying the hierarchical structure for automatically generating a TOC using only textual features instead of structural hints e.g. from HTML-tags. We create two new datasets to evaluate our approaches for hierarchy identification. We find that our algorithm performs on a level that is sufficient for a fully automated system. For documents without given segment titles, we extend our work by automatically generating segment titles. We make the datasets and our experimental framework publicly available in order to foster future research in TOC generation.
false
[]
[]
null
null
null
This work has been supported by the Volkswagen Foundation as part of the
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-wu-2016-multi
https://aclanthology.org/C16-1185
Multi-level Gated Recurrent Neural Network for dialog act classification
In this paper we focus on the problem of dialog act (DA) labelling. This problem has recently attracted a lot of attention as it is an important sub-part of an automatic dialog model, which is currently in great demand. Traditional methods tend to see this problem as a sequence labelling task and deal with it by applying classifiers with rich features. Most of the current neural network models still omit the sequential information in the conversation. Henceforth, we apply a novel multi-level gated recurrent neural network (GRNN) with non-textual information to predict the DA tag. Our model not only utilizes textual information, but also makes use of non-textual and contextual information. In comparison, our model has shown significant improvement over previous works on the Switchboard Dialog Act (SWDA) data by over 6%.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ravi-knight-2009-learning
https://aclanthology.org/N09-1005
Learning Phoneme Mappings for Transliteration without Parallel Data
We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.
false
[]
[]
null
null
null
This research was supported by the Defense Advanced Research Projects Agency under SRI International's prime Contract Number NBCHD040058.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tokunaga-etal-2011-discriminative
https://aclanthology.org/W11-3502
Discriminative Method for Japanese Kana-Kanji Input Method
The most popular type of input method in Japan is kana-kanji conversion, conversion from a string of kana to a mixed kanji-kana string. However there is no study using discriminative methods like structured SVMs for kana-kanji conversion. One of the reasons is that learning a discriminative model from a large data set is often intractable. However, due to progress of recent researches, large scale learning of discriminative models become feasible in these days. In the present paper, we investigate whether discriminative methods such as structured SVMs can improve the accuracy of kana-kanji conversion. To the best of our knowledge, this is the first study comparing a generative model and a discriminative model for kana-kanji conversion. An experiment revealed that a discriminative method can improve the performance by approximately 3%.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
andreas-klein-2015-log
https://aclanthology.org/N15-1027
When and why are log-linear models self-normalizing?
Several techniques have recently been proposed for training "self-normalized" discriminative models. These attempt to find parameter settings for which unnormalized model scores approximate the true label probability. However, the theoretical properties of such techniques (and of self-normalization generally) have not been investigated. This paper examines the conditions under which we can expect self-normalization to work. We characterize a general class of distributions that admit self-normalization, and prove generalization bounds for procedures that minimize empirical normalizer variance. Motivated by these results, we describe a novel variant of an established procedure for training self-normalized models. The new procedure avoids computing normalizers for most training examples, and decreases training time by as much as factor of ten while preserving model quality.
false
[]
[]
null
null
null
The authors would like to thank Peter Bartlett, Robert Nishihara and Maxim Rabinovich for useful discussions. This work was partially supported by BBN under DARPA contract HR0011-12-C-0014. The first author is supported by a National Science Foundation Graduate Fellowship.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ws-2002-coling
https://aclanthology.org/W02-1100
COLING-02: SEMANET: Building and Using Semantic Networks
null
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vestre-1991-algorithm
https://aclanthology.org/E91-1044
An Algorithm for Generating Non-Redundant Quantifier Scopings
This paper describes an algorithm for generating quantifier scopings. The algorithm is designed to generate only logically non-redundant scopings and to partially order the scopings with a given default scoping first. Removing logical redundancy is not only interesting per se, but also drastically reduces the processing time. The input and output formats are described through a few access and construction functions. Thus, the algorithm is interesting for a modular linguistic theory, which is flexible with respect to syntactic and semantic framework.
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
khan-etal-2013-generative
https://aclanthology.org/W13-5409
Generative Lexicon Theory and Linguistic Linked Open Data
In this paper we look at how Generative Lexicon theory can assist in providing a more thorough definition of word senses as links between items in a RDF-based lexicon and concepts in an ontology. We focus on the definition of lexical sense in lemon and show its limitations before defining a new model based on lemon and which we term lemonGL. This new model is an initial attempt at providing a way of structuring lexico-ontological resources as linked data in such a way as to allow a rich representation of word meaning (following the GL theory) while at the same time (attempting to) remain faithful to the separation between the lexicon and the ontology as recommended by the lemon model.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
samardzic-etal-2015-automatic
https://aclanthology.org/W15-3710
Automatic interlinear glossing as two-level sequence classification
Interlinear glossing is a type of annotation of morphosyntactic categories and crosslinguistic lexical correspondences that allows linguists to analyse sentences in languages that they do not necessarily speak. Automatising this annotation is necessary in order to provide glossed corpora big enough to be used for quantitative studies. In this paper, we present experiments on the automatic glossing of Chintang. We decompose the task of glossing into steps suitable for statistical processing. We first perform grammatical glossing as standard supervised part-of-speech tagging. We then add lexical glosses from a stand-off dictionary applying context disambiguation in a similar way to word lemmatisation. We obtain the highest accuracy score of 96% for grammatical and 94% for lexical glossing.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
belz-etal-2010-finding
https://aclanthology.org/W10-4237
Finding Common Ground: Towards a Surface Realisation Shared Task
In many areas of NLP reuse of utility tools such as parsers and POS taggers is now common, but this is still rare in NLG. The subfield of surface realisation has perhaps come closest, but at present we still lack a basis on which different surface realisers could be compared, chiefly because of the wide variety of different input representations used by different realisers. This paper outlines an idea for a shared task in surface realisation, where inputs are provided in a common-ground representation formalism which participants map to the types of input required by their system. These inputs are derived from existing annotated corpora developed for language analysis (parsing etc.). Outputs (realisations) are evaluated by automatic comparison against the human-authored text in the corpora as well as by human assessors.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
satta-1992-recognition
https://aclanthology.org/P92-1012
Recognition of Linear Context-Free Rewriting Systems
The class of linear context-free rewriting systems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for linear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for example, tree adjoining languages and the subclass of linear context-free rewriting languages that generalizes the former class; such a gap is attributed to "crossing configurations". A few other interesting consequences of the main result are discussed, that concern the recognition problem for linear context-free rewriting languages.
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kiso-etal-2011-hits
https://aclanthology.org/P11-2006
HITS-based Seed Selection and Stop List Construction for Bootstrapping
In bootstrapping (seed set expansion), selecting good seeds and creating stop lists are two effective ways to reduce semantic drift, but these methods generally need human supervision. In this paper, we propose a graphbased approach to helping editors choose effective seeds and stop list instances, applicable to Pantel and Pennacchiotti's Espresso bootstrapping algorithm. The idea is to select seeds and create a stop list using the rankings of instances and patterns computed by Kleinberg's HITS algorithm. Experimental results on a variation of the lexical sample task show the effectiveness of our method.
false
[]
[]
null
null
null
We thank Masayuki Asahara and Kazuo Hara for helpful discussions and the anonymous reviewers for valuable comments. MS was partially supported by Kakenhi Grant-in-Aid for Scientific Research C 21500141.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
olaussen-2011-evaluating
https://aclanthology.org/W11-4653
Evaluating the speech quality of the Norwegian synthetic voice Brage
This document describes the method, results and conclusions from my master's thesis in Nordic studies. My aim was to assess the speech quality of the Norwegian Filibuster text-to-speech system with the synthetic voice Brage. The assessment was carried out with a survey and an intelligibility test at phoneme, word and sentence level. The evaluation criteria used in the study were intelligibility, naturalness, likeability, acceptance and suitability.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dligach-etal-2017-neural
https://aclanthology.org/E17-2118
Neural Temporal Relation Extraction
We experiment with neural architectures for temporal relation extraction and establish a new state-of-the-art for several scenarios. We find that neural models with only tokens as input outperform state-of-the-art hand-engineered feature-based models, that convolutional neural networks outperform LSTM models, and that encoding relation arguments with XML tags outperforms a traditional position-based encoding.
false
[]
[]
null
null
null
This work was partially funded by the US National Institutes of Health (U24CA184407; R01 LM 10090; R01GM114355). The Titan X GPU used for this research was donated by the NVIDIA Corporation.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meshgi-etal-2022-uncertainty
https://aclanthology.org/2022.wassa-1.8
Uncertainty Regularized Multi-Task Learning
By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most interesting ways for parallel learning from different tasks and domains. However, fine-tuning on one task may compromise the performance of other tasks or restrict the generalization of the shared learned features. To address this issue, we propose to use task uncertainty to gauge the effect of the shared feature changes on other tasks and prevent the model from overfitting or over-generalizing. We conducted an experiment on 16 text classification tasks, and findings showed that the proposed method consistently improves the performance of the baseline, facilitates the knowledge transfer of learned features to unseen data, and provides explicit control over the generalization of the shared model.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gaspari-2006-added-value
https://aclanthology.org/2006.amta-users.3
The Added Value of Free Online MT Services
This paper reports on an experiment investigating how effective free online machine translation (MT) is in helping Internet users to access the contents of websites written only in languages they do not know. This study explores the extent to which using Internet-based MT tools affects the confidence of web-surfers in the reliability of the information they find on websites available only in languages unfamiliar to them. The results of a case study for the language pair Italian-English involving 101 participants show that the chances of identifying correctly basic information (i.e. understanding the nature of websites and finding contact telephone numbers from their web-pages) are consistently enhanced to varying degrees (up to nearly 20%) by translating online content into a familiar language. In addition, confidence ratings given by users to the reliability and accuracy of the information they find are significantly higher (with increases between 5 and 11%) when they translate websites into their preferred language with free online MT services.
true
[]
[]
Decent Work and Economic Growth
null
null
The author wishes to thank his colleagues at the Universities of Manchester, Salford and Liverpool Hope in the United Kingdom for their assistance in distributing the questionnaires to their students. Special thanks also to all the students who volunteered to fill in the questionnaire on which this study was based.
2006
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
mathur-etal-2018-offend
https://aclanthology.org/W18-5118
Did you offend me? Classification of Offensive Tweets in Hinglish Language
The use of code-switched languages (e.g., Hinglish, which is derived by blending Hindi with English) is becoming increasingly popular on Twitter due to their ease of communication in native languages. However, spelling variations and the absence of grammar rules introduce ambiguity and make it difficult to understand the text automatically. This paper presents the Multi-Input Multi-Channel Transfer Learning based model (MIMCT) to detect offensive (hate speech or abusive) Hinglish tweets from the proposed Hinglish Offensive Tweet (HOT) dataset using transfer learning coupled with multiple feature inputs. Specifically, it takes multiple primary word embeddings along with secondary extracted features as inputs to train a multi-channel CNN-LSTM architecture that has been pre-trained on English tweets through transfer learning. The proposed MIMCT model outperforms the baseline supervised classification models and transfer learning based CNN and LSTM models to establish itself as the state of the art in the unexplored domain of Hinglish offensive text classification.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
wall-1960-system
https://aclanthology.org/1960.earlymt-nsmt.62
System Design of a Computer for Russian-English Translation
Session 11: EQUIPMENT. This paper presents the general specifications for a digital data-processing system which would be desirable for machine translation according to the experience of the group at the University of Washington. First the problem of lexicon storage will be considered.
false
[]
[]
null
null
null
null
1960
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shen-etal-2003-effective
https://aclanthology.org/W03-1307
Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain
In this paper, we explore how to adapt a general Hidden Markov Model-based named entity recognizer effectively to the biomedical domain. We integrate various features, including simple deterministic features, morphological features, POS features and semantic trigger features, to capture various evidences especially for biomedical named entities, and evaluate their contributions. We also present a simple algorithm to solve the abbreviation problem and a rule-based method to deal with the cascaded phenomena in the biomedical domain. Our experiments on GENIA V3.0 and GENIA V1.1 achieve 66.1 and 62.5 F-measure respectively, outperforming the previous best published results by 8.1 F-measure when using the same training and testing data.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank Mr. Tan Soon Heng for his support of biomedical knowledge.
2003
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wei-jia-2021-statistical
https://aclanthology.org/2021.acl-long.533
The statistical advantage of automatic NLG metrics at the system level
Estimating the expected output quality of generation systems is central to NLG. This paper qualifies the notion that automatic metrics are not as good as humans in estimating system-level quality. Statistically, humans are unbiased, high variance estimators, while metrics are biased, low variance estimators. We compare these estimators by their error in pairwise prediction (which generation system is better?) using the bootstrap. Measuring this error is complicated: predictions are evaluated against noisy, human predicted labels instead of the ground truth, and metric predictions fluctuate based on the test sets they were calculated on. By applying a bias-variance-noise decomposition, we adjust this error to a noise-free, infinite test set setting. Our analysis compares the adjusted error of metrics to humans and a derived, perfect segment-level annotator, both of which are unbiased estimators dependent on the number of judgments collected. In MT, we identify two settings where metrics outperform humans due to a statistical advantage in variance: when the number of human judgments used is small, and when the quality difference between compared systems is small.
false
[]
[]
null
null
null
Discussions with Nitika Mathur, Markus Freitag, and Thibault Sellam led to several insights. Nelson Liu and Tianyi Zhang provided feedback on our first draft, and anonymous reviewers provided feedback on the submitted draft. Nanyun Peng advised the first author, and on this work. Alex Fabbri provided a scored version of the SummEval dataset. We thank all who have made our work possible.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
goldwater-etal-2000-building
https://aclanthology.org/W00-0312
Building a Robust Dialogue System with Limited Data
We describe robustness techniques used in the CommandTalk system at the recognition level, the parsing level, and the dialogue level, and how these were influenced by the lack of domain data. We used interviews with subject matter experts (SMEs) to develop a single grammar for recognition, understanding, and generation, thus eliminating the need for a robust parser. We broadened the coverage of the recognition grammar by allowing word insertions and deletions, and we implemented clarification and correction subdialogues to increase robustness at the dialogue level. We discuss the applicability of these techniques to other domains.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
novikova-etal-2018-rankme
https://aclanthology.org/N18-2012
RankME: Reliable Human Ratings for Natural Language Generation
Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems.
false
[]
[]
null
null
null
This research received funding from the EPSRC projects DILiGENt (EP/M005429/1) and MaDrI-gAL (EP/N017536/1). The Titan Xp used for this research was donated by the NVIDIA Corporation.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kiyono-etal-2018-reducing
https://aclanthology.org/Y18-1034
Reducing Odd Generation from Neural Headline Generation
The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment. (This work is a product of a collaborative research program of Tohoku University and NTT Communication Science Laboratories; code for reproducing the experiments is available at https://github.com/butsugiri/UAM.) Unfortunately, as often discussed in the community, EncDec sometimes generates sentences with repeating phrases or completely irrelevant phrases, and the reason for their generation cannot be interpreted intuitively. Moreover, EncDec also sometimes generates sentences that lack important phrases. We refer to these observations as the odd generation problem (odd-gen) in EncDec. Typical examples of odd-gen actually generated by a typical EncDec include: (1) repeating phrases (gold: "duran duran group fashionable again"; EncDec: "duran duran duran duran"); (2) lack of important phrases (gold: "graf says goodbye to tennis due to injuries"; EncDec: "graf retires"); (3) irrelevant phrases (gold: "u.s. troops take first position in serb-held bosnia"; EncDec: "precede sarajevo"). This paper tackles reducing the odd-gen in the task of abstractive summarization. In the machine translation literature, coverage (Tu et al., 2016; Mi et al., 2016) and reconstruction (Tu et al., 2017) are promising extensions of EncDec to address the odd-gen.
These models take advantage of the fact that machine translation is the loss-less generation (lossless-gen) task, where the semantic information of source-and target-side sequence is equivalent. However, as discussed in previous studies, abstractive summarization is a lossy-compression generation (lossy-gen) task. Here, the task is to delete certain semantic information from the source to generate target-side sequence.
false
[]
[]
null
null
null
We are grateful to anonymous reviewers for their insightful comments. We thank Sosuke Kobayashi for providing helpful comments. We also thank Qingyu Zhou for providing a dataset and information for a fair comparison.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bhargava-penn-2021-proof
https://aclanthology.org/2021.iwpt-1.2
Proof Net Structure for Neural Lambek Categorial Parsing
In this paper, we present the first statistical parser for Lambek categorial grammar (LCG), a grammatical formalism for which the graphical proof method known as proof nets is applicable. Our parser incorporates proof net structure and constraints into a system based on selfattention networks via novel model elements. Our experiments on an English LCG corpus show that incorporating term graph structure is helpful to the model, improving both parsing accuracy and coverage. Moreover, we derive novel loss functions by expressing proof net constraints as differentiable functions of our model output, enabling us to train our parser without ground-truth derivations.
false
[]
[]
null
null
null
We thank Elizabeth Patitsas as well as our anonymous reviewers for their feedback. This research was enabled in part by support provided by NSERC, SHARCNET, and Compute Canada.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2010-paraphrasing
https://aclanthology.org/C10-1148
Paraphrasing with Search Engine Query Logs
This paper proposes a method that extracts paraphrases from search engine query logs. The method first extracts paraphrase query-title pairs based on an assumption that a search query and its corresponding clicked document titles may mean the same thing. It then extracts paraphrase query-query and title-title pairs from the query-title paraphrases with a pivot approach. Paraphrases extracted in each step are validated with a binary classifier. We evaluate the method using a query log from Baidu 1 , a Chinese search engine. Experimental results show that the proposed method is effective, which extracts more than 3.5 million pairs of paraphrases with a precision of over 70%. The results also show that the extracted paraphrases can be used to generate high-quality paraphrase patterns.
false
[]
[]
null
null
null
We would like to thank Wanxiang Che, Hua Wu, and the anonymous reviewers for their useful comments on this paper.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
al-boni-etal-2015-model
https://aclanthology.org/P15-2126
Model Adaptation for Personalized Opinion Analysis
Humans are idiosyncratic and variable: towards the same topic, they might hold different opinions or express the same opinion in various ways. It is hence important to model opinions at the level of individual users; however it is impractical to estimate independent sentiment classification models for each user with limited data. In this paper, we adopt a modelbased transfer learning solution-using linear transformations over the parameters of a generic model-for personalized opinion analysis. Extensive experimental results on a large collection of Amazon reviews confirm our method significantly outperformed a user-independent generic opinion model as well as several state-ofthe-art transfer learning algorithms.
false
[]
[]
null
null
null
This research was funded in part by grant W911NF-10-2-0051 from the United States Army Research Laboratory. Also, Hongning Wang is partially supported by the Yahoo Academic Career Enhancement Award.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zaenen-2016-modality
https://aclanthology.org/2016.lilt-14.1
Modality: logic, semantics, annotation and machine learning
Up to rather recently Natural Language Processing has not given much attention to modality. As long as the main task was to determine what a text was about (Information Retrieval) or who the participants in an eventuality were (Information Extraction), this neglect was understandable. With the focus moving to questions of natural language understanding and inferencing as well as to sentiment and opinion analysis, it becomes necessary to distinguish between actual and envisioned eventualities and to draw conclusions about the attitude of the writer or speaker towards the eventualities referred to. This means, i.a., to be able to distinguish 'John went to Paris' and 'John wanted to go to Paris'. To do this one has to calculate the effect of different linguistic operators on the eventuality predication. Modality has different shades of meaning that are subtle, and often difficult to distinguish, being able to express hypothetical situations (he could/may come in), desired or undesired (permitted or non-permitted) situations (he can/may come in/enter), or (physical) abilities: he can enter. The study of modality often focusses on the semantics and pragmatics of the modal auxiliaries because of their notorious ambiguity, but modality can also be expressed through other means than auxiliaries, such as adverbial modification and non-auxiliary verbs such as want or believe. In fact, the same modality can be expressed by different linguistic means, e.g. 'Maybe he is already home' or 'He may already be home'.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moschitti-2010-kernel
https://aclanthology.org/C10-5001
Kernel Engineering for Fast and Easy Design of Natural Language Applications
null
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ichikawa-etal-2005-ebonsai
https://aclanthology.org/I05-2019
eBonsai: An Integrated Environment for Annotating Treebanks
Syntactically annotated corpora (treebanks) play an important role in recent statistical natural language processing. However, building a large treebank is labor intensive and time consuming work. To remedy this problem, there have been many attempts to develop software tools for annotating treebanks. This paper presents an integrated environment for annotating a treebank, called eBonsai. eBonsai helps annotators to choose a correct syntactic structure of a sentence from outputs of a parser, allowing the annotators to retrieve similar sentences in the treebank for referring to their structures.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-etal-2010-computational
https://aclanthology.org/Y10-1050
A Computational Treatment of Korean Serial Verb Constructions
The so-called serial verb construction (SVC) is a complex predicate structure consisting of two or more verbal heads but denoting one single event. This paper first discusses the grammatical properties of Korean SVCs and provides a lexicalist, construction-based analysis couched upon a typed-feature structure grammar. We also show the results of implementing the grammar in the LKB (Linguistics Knowledge Building) system couched upon the existing KRG (Korean Resource Grammar), which has been developed since 2003. The implementation results provide us with a feasible direction for expanding the analysis to cover a wider range of relevant data.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tao-etal-2019-one
https://aclanthology.org/P19-1001
One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues
Currently, researchers have paid great attention to retrieval-based dialogues in open domain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-the-art methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC Nos. 61672058 and 61876196).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
osborne-baldridge-2004-ensemble
https://aclanthology.org/N04-1012
Ensemble-based Active Learning for Parse Selection
Supervised estimation methods are widely seen as being superior to semi- and fully unsupervised methods. However, supervised methods crucially rely upon training sets that need to be manually annotated. This can be very expensive, especially when skilled annotators are required. Active learning (AL) promises to help reduce this annotation cost. Within the complex domain of HPSG parse selection, we show that ideas from ensemble learning can help further reduce the cost of annotation. Our main results show that at times, an ensemble model trained with randomly sampled examples can outperform a single model trained using AL. However, converting the single-model AL method into an ensemble-based AL method shows that even this much stronger baseline model can be improved upon. Our best results show a substantial reduction in annotation cost compared with single-model random sampling.
false
[]
[]
null
null
null
We would like to thank Markus Becker, Steve Clark, and the anonymous reviewers for their comments. Jeremiah Crim developed some of the feature extraction code and conglomerate features, and Alex Lascarides made suggestions for the semantic features. This work was supported by Edinburgh-Stanford Link R36763, ROSIE project.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ribeiro-etal-2018-semantically
https://aclanthology.org/P18-1079
Semantically Equivalent Adversarial Rules for Debugging NLP models
Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs)-semantic-preserving perturbations that induce changes in the model's predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs)-simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual question answering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.
false
[]
[]
null
null
null
We are grateful to Dan Weld, Robert L. Logan IV, and to the anonymous reviewers for their feedback. This work was supported in part by ONR award #N00014-13-1-0023, in part by NSF award #IIS-1756023, and in part by funding from FICO. The views expressed are of the authors and do not reflect the policy or opinion of the funding agencies.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2020-robust
https://aclanthology.org/2020.coling-main.248
Robust Machine Reading Comprehension by Learning Soft labels
Neural models, which are typically trained on hard labels, have achieved great success on the task of machine reading comprehension (MRC). We argue that hard labels limit the model capability on generalization due to the label sparseness problem. In this paper, we propose a robust training method for MRC models to address this problem. Our method consists of three strategies: 1) label smoothing, 2) word overlapping, 3) distribution prediction. All of them help to train models on soft labels. We validate our approach on the representative architecture ALBERT. Experimental results show that our method can greatly boost the baseline with a 1% improvement on average, and achieve state-of-the-art performance on NewsQA and QUOREF. [Figure 1: An example of multiple answers in extractive reading comprehension. Paragraph: "One of the first Norman mercenaries to serve as a Byzantine general was Hervé in the 1050s. By then however, there were already Norman mercenaries serving as far away as Trebizond and Georgia..." Question: "When did Hervé serve as a Byzantine general?" Answer 1: "1050s"; Answer 2: "in the 1050s".] († Work done while the first author was an intern at Tencent.)
false
[]
[]
null
null
null
We thank anonymous reviewers for their insightful comments. This work is sponsored in part by the National Key Research and Development Program of China (2018YFC0830700) and the National Natural Science Foundation of China (61806075). And the work of M. Yang was also supported in part by the project granted by Zhizhesihai(Beijing) Technology Co., Ltd.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-huang-2013-minibatch
https://aclanthology.org/N13-1038
Minibatch and Parallelization for Online Large Margin Structured Learning
Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically very small (∼3) even with many (10+) processors. We instead present a much simpler architecture based on "mini-batches", which is trivially parallelizable. We show that, unlike previous methods, minibatch learning (in serial mode) actually improves the converged accuracy for both perceptron and MIRA learning, and when combined with simple parallelization, minibatch leads to very significant speedups (up to 9x on 12 processors) on stateof-the-art parsing and tagging systems.
false
[]
[]
null
null
null
We thank Ryan McDonald, Yoav Goldberg, and Hal Daumé, III for helpful discussions, and the anonymous reviewers for suggestions. This work was partially supported by DARPA FA8750-13-2-0041 "Deep Exploration and Filtering of Text" (DEFT) Program and by Queens College for equipment.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vasiljevs-etal-2012-creation
http://www.lrec-conf.org/proceedings/lrec2012/pdf/744_Paper.pdf
Creation of an Open Shared Language Resource Repository in the Nordic and Baltic Countries
The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. This paper also touches upon intellectual property rights for the sharing of language resources.
false
[]
[]
null
null
null
The META-NORD project has received funding from the European Commission through the ICT PSP Programme, grant agreement no 270899.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ws-2000-anlp-naacl
https://aclanthology.org/W00-0500
ANLP-NAACL 2000 Workshop: Embedded Machine Translation Systems
null
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
han-sun-2014-semantic
https://aclanthology.org/P14-2117
Semantic Consistency: A Local Subspace Based Method for Distant Supervised Relation Extraction
One fundamental problem of distant supervision is the noisy training corpus problem. In this paper, we propose a new distant supervision method, called Semantic Consistency, which can identify reliable instances from noisy instances by inspecting whether an instance is located in a semantically consistent region. Specifically, we propose a semantic consistency model, which first models the local subspace around an instance as a sparse linear combination of training instances, then estimate the semantic consistency by exploiting the characteristics of the local subspace. Experimental results verified the effectiveness of our method.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vela-etal-2019-improving
https://aclanthology.org/W19-6702
Improving CAT Tools in the Translation Workflow: New Approaches and Evaluation
This paper describes strategies to improve an existing web-based computeraided translation (CAT) tool entitled CATaLog Online. CATaLog Online provides a post-editing environment with simple yet helpful project management tools. It offers translation suggestions from translation memories (TM), machine translation (MT), and automatic post-editing (APE) and records detailed logs of post-editing activities. To test the new approaches proposed in this paper, we carried out a user study on an English-German translation task using CATaLog Online. User feedback revealed that the users preferred using CATaLog Online over existing CAT tools in some respects, especially by selecting the output of the MT system and taking advantage of the color scheme for TM suggestions.
false
[]
[]
null
null
null
We would like to thank the participants of this user study for their valuable contribution. We further thank the MT Summit anonymous reviewers for their insightful feedback.This research was funded in part by the Ger- /2007-2013) under REA grant agreement no 317471. We are also thankful to Pangeanic, Valencia, Spain for kindly providing us with professional translators for these experiments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lopez-etal-2016-encoding
https://aclanthology.org/L16-1177
Encoding Adjective Scales for Fine-grained Resources
We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives "correct" (correct), "sympa" (nice), "bon" (good) and "excellent" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abdelali-etal-2021-qadi
https://aclanthology.org/2021.wanlp-1.1
QADI: Arabic Dialect Identification in the Wild
Proper dialect identification is important for a variety of Arabic NLP applications. In this paper, we present a method for rapidly constructing a tweet dataset containing a wide range of country-level Arabic dialects-covering 18 different countries in the Middle East and North Africa region. Our method relies on applying multiple filters to identify users who belong to different countries based on their account descriptions and to eliminate tweets that either write mainly in Modern Standard Arabic or mostly use vulgar language. The resultant dataset contains 540k tweets from 2,525 users who are evenly distributed across 18 Arab countries. Using intrinsic evaluation, we show that the labels of a set of randomly selected tweets are 91.5% accurate. For extrinsic evaluation, we are able to build effective country-level dialect identification on tweets with a macro-averaged F1-score of 60.6% across 18 classes.
false
[]
[]
null
null
null
https://en.wikipedia.org/wiki/Egyptian_Arabic
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
minnema-herbelot-2019-brain
https://aclanthology.org/P19-2021
From Brain Space to Distributional Space: The Perilous Journeys of fMRI Decoding
Recent work in cognitive neuroscience has introduced models for predicting distributional word meaning representations from brain imaging data. Such models have great potential, but the quality of their predictions has not yet been thoroughly evaluated from a computational linguistics point of view. Due to the limited size of available brain imaging datasets, standard quality metrics (e.g. similarity judgments and analogies) cannot be used. Instead, we investigate the use of several alternative measures for evaluating the predicted distributional space against a corpus-derived distributional space. We show that a state-of-the-art decoder, while performing impressively on metrics that are commonly used in cognitive neuroscience, performs unexpectedly poorly on our metrics. To address this, we propose strategies for improving the model's performance. Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.
true
[]
[]
Good Health and Well-Being
null
null
The first author of this paper (GM) was enrolled in the European Master Program in Language and Communication Technologies (LCT) while writing the paper, and was supported by the European Union Erasmus Mundus program.
2019
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2019-capturing
https://aclanthology.org/D19-1544
Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks
Semantic role labeling (SRL) involves extracting propositions (i.e. predicates and their typed arguments) from natural language sentences. State-of-the-art SRL models rely on powerful encoders (e.g., LSTMs) and do not model non-local interaction between arguments. We propose a new approach to modeling these interactions while maintaining efficient inference. Specifically, we use Capsule Networks (Sabour et al., 2017): each proposition is encoded as a tuple of capsules, one capsule per argument type (i.e. role). These tuples serve as embeddings of entire propositions. In every network layer, the capsules interact with each other and with representations of words in the sentence. Each iteration results in updated proposition embeddings and updated predictions about the SRL structure. Our model substantially outperforms the non-refinement baseline model on all 7 CoNLL-2009 languages and achieves state-of-the-art results on 5 languages (including English) for dependency SRL. We analyze the types of mistakes corrected by the refinement procedure. For example, each role is typically (but not always) filled with at most one argument. Whereas enforcing this approximate constraint is not useful with the modern SRL system, the iterative procedure corrects the mistakes by capturing this intuition in a flexible and context-sensitive way.
false
[]
[]
null
null
null
We thank Diego Marcheggiani, Jonathan Mallinson and Philip Williams for constructive feedback and suggestions, as well as anonymous reviewers for their comments. The project was supported by the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
robin-favero-2000-content
https://aclanthology.org/W00-1417
Content aggregation in natural language hypertext summarization of OLAP and Data Mining Discoveries
We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alfonseca-manandhar-2002-proposal
http://www.lrec-conf.org/proceedings/lrec2002/pdf/38.pdf
Proposal for Evaluating Ontology Refinement Methods
Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is still high. There are a few automatic and semi-automatic methods for extending ontologies with domain-specific information, but they use different training and test data, and different evaluation metrics. The work described in this paper is an attempt to build a benchmark corpus that can be used for comparing these systems. We provide standard evaluation metrics as well as two different annotated corpora: one in which every unknown word has been labelled with the places where it should be added onto the ontology, and the other in which only the high-frequency unknown terms have been annotated.
false
[]
[]
null
null
null
This work does not attempt to evaluate learning of nontaxonomic relations (e.g. meronymy, holonymy, telic, etc.), but we believe that similar evaluation metrics could be used (Maedche and Staab, 2000) . Further work can be done on this topic.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chiang-su-1996-statistical
https://aclanthology.org/W96-0110
Statistical Models for Deep-structure Disambiguation
In this paper, an integrated score function is proposed to resolve the ambiguity of deep-structure, which includes the cases of constituents and the senses of words. With the integrated score function, different knowledge sources, including part-of-speech, syntax and semantics, are integrated in a uniform formulation. Based on this formulation, different models for case identification and word-sense disambiguation are derived. In the baseline system, the values of parameters are estimated by using the maximum likelihood estimation method. The accuracy rates of 56.3% for parse tree, 77.5% for case and 86.2% for word sense are obtained when the baseline system is tested on a corpus of 800 sentences. Afterwards, to reduce the estimation error caused by the maximum likelihood estimation, the Good-Turing's smoothing method is applied. In addition, a robust discriminative learning algorithm is also derived to minimize the testing set error rate. By applying these algorithms, the accuracy rates of 77% for parse tree, 88.9% for case, and 88.6% for sense are obtained. Compared with the baseline system, 17.4% error reduction rate for sense discrimination, 50.7% for case identification, and 47.4% for parsing accuracy are obtained. These results clearly demonstrate the superiority of the proposed models for deep-structure disambiguation.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tang-etal-2011-clgvsm
https://aclanthology.org/I11-1065
CLGVSM: Adapting Generalized Vector Space Model to Cross-lingual Document Clustering
Cross-lingual document clustering (CLDC) is the task to automatically organize a large collection of cross-lingual documents into groups considering content or topic. Different from the traditional hard matching strategy, this paper extends traditional generalized vector space model (GVSM) to handle cross-lingual cases, referred to as CLGVSM, by incorporating cross-lingual word similarity measures. With this model, we further compare different word similarity measures in cross-lingual document clustering. To select cross-lingual features effectively, we also propose a soft-matching based feature selection method in CLGVSM. Experimental results on benchmarking data set show that (1) the proposed CLGVSM is very effective for cross-document clustering, outperforming the two strong baselines vector space model (VSM) and latent semantic analysis (LSA) significantly; and (2) the new feature selection method can further improve CLGVSM.
false
[]
[]
null
null
null
This work is partially supported by NSFC (60703051) and MOST (2009DFA12970). We thank the reviewers for the valuable comments.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-etal-2020-text
https://aclanthology.org/2020.coling-main.542
Text Classification by Contrastive Learning and Cross-lingual Data Augmentation for Alzheimer's Disease Detection
Data scarcity is always a constraint on analyzing speech transcriptions for automatic Alzheimer's disease (AD) detection, especially when the subjects are non-English speakers. To deal with this issue, this paper first proposes a contrastive learning method to obtain effective representations for text classification based on monolingual embeddings of BERT. Furthermore, a cross-lingual data augmentation method is designed by building autoencoders to learn the text representations shared by both languages. Experiments on a Mandarin AD corpus show that the contrastive learning method can achieve better detection accuracy than conventional CNN-based and BERT-based methods. Our cross-lingual data augmentation method also outperforms other compared methods when using another English AD corpus for augmentation. Finally, a best detection accuracy of 81.6% is obtained by our proposed methods on the Mandarin AD corpus.
true
[]
[]
Good Health and Well-Being
null
null
null
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schabes-etal-1988-parsing
https://aclanthology.org/C88-2121
Parsing Strategies with `Lexicalized' Grammars: Application to Tree Adjoining Grammars
In this paper we present a general parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from recent linguistic work in TAGs (Abeille 1988).
false
[]
[]
null
null
null
null
1988
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
loukachevitch-etal-2018-comparing
https://aclanthology.org/2018.gwc-1.5
Comparing Two Thesaurus Representations for Russian
In the paper we presented a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. At the first step, the basic structure of wordnets was reproduced: synsets' hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonyms). At the second stage, we added causation, entailment and domain relations between synsets. Also derivation relations were established for single words and the component structure for phrases included in RuWordNet. The described procedure of transformation highlights the specific features of each type of thesaurus representations.
false
[]
[]
null
null
null
This work is partially supported by Russian Scientific Foundation, according to the research project No. 16-18-020.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bhandwaldar-zadrozny-2018-uncc
https://aclanthology.org/W18-5308
UNCC QA: Biomedical Question Answering system
In this paper, we detail our submission to the BioASQ competition's Biomedical Semantic Question and Answering task. Our system uses extractive summarization techniques to generate answers and has scored highest ROUGE-2 and ROUGE-SU4 in all test batch sets. Our contributions are a named-entity based method for answering factoid and list questions, and extractive summarization techniques for building paragraph-sized summaries based on lexical chains. Our system got the highest ROUGE-2 and ROUGE-SU4 scores for ideal-type answers in all test batch sets. We also discuss the limitations of the described system, such as the lack of evaluation on other criteria (e.g. manual). Also, for factoid- and list-type questions our system got low accuracy (which suggests that our algorithm needs to improve in the ranking of entities).
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank the referees for their comments and suggestions. All the remaining faults are ours.
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chatzichrisafis-etal-2006-evaluating
https://aclanthology.org/W06-3702
Evaluating Task Performance for a Unidirectional Controlled Language Medical Speech Translation System
We present a task-level evaluation of the French to English version of MedSLT, a medium-vocabulary unidirectional controlled language medical speech translation system designed for doctor-patient diagnosis interviews. Our main goal was to establish task performance levels of novice users and compare them to expert users. Tests were carried out on eight medical students with no previous exposure to the system, with each student using the system for a total of three sessions. By the end of the third session, all the students were able to use the system confidently, with an average task completion time of about 4 minutes.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank Agnes Lisowska, Alia Rahal, and Nancy Underwood for being impartial judges over our system's results. This work was funded by the Swiss National Science Foundation.
2006
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kunath-weinberger-2010-wisdom
https://aclanthology.org/W10-0726
The Wisdom of the Crowd's Ear: Speech Accent Rating and Annotation with Amazon Mechanical Turk
Human listeners can almost instantaneously judge whether or not another speaker is part of their speech community. The basis of this judgment is the speaker's accent. Even though humans judge speech accents with ease, it has been tremendously difficult to automatically evaluate and rate accents in any consistent manner. This paper describes an experiment using the Amazon Mechanical Turk to develop an automatic speech accent rating dataset.
false
[]
[]
null
null
null
The authors would like to thank Amazon.com and the workshop organizers for providing MTurk credits to perform this research.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-etal-2021-rumor
https://aclanthology.org/2021.emnlp-main.786
Rumor Detection on Twitter with Claim-Guided Hierarchical Graph Attention Networks
Rumors are rampant in the era of social media. Conversation structures provide valuable clues to differentiate between real and fake claims. However, existing rumor detection methods are either limited to the strict relation of user responses or oversimplify the conversation structure. In this study, to substantially reinforce the interaction of user opinions while alleviating the negative impact imposed by irrelevant posts, we first represent the conversation thread as an undirected interaction graph. We then present a Claim-guided Hierarchical Graph Attention Network for rumor classification, which enhances the representation learning for responsive posts considering the entire social contexts and attends over the posts that can semantically infer the target claim. Extensive experiments on three Twitter datasets demonstrate that our rumor detection method achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank all anonymous reviewers for their helpful comments and suggestions. This work was partially supported by the Foundation of Guizhou Provincial Key Laboratory of Public Big Data (No.2019BDKFJJ002). Jing Ma was supported by HKBU direct grant (Ref. AIS 21-22/02).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
al-negheimish-etal-2021-numerical
https://aclanthology.org/2021.emnlp-main.759
Numerical reasoning in machine reading comprehension tasks: are we there yet?
Numerical reasoning based machine reading comprehension is a task that involves reading comprehension along with using arithmetic operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models in the DROP leaderboard, over standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study on some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress towards such tasks.
false
[]
[]
null
null
null
This research has been supported by a PhD scholarship from King Saud University. We thank our anonymous reviewers for their constructive comments and suggestions, and SPIKE research group members for their feedback throughout this work.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ishizaki-kato-1998-exploring
https://aclanthology.org/C98-1092
Exploring the Characteristics of Multi-Party Dialogues
This paper describes novel results on the characteristics of three-party dialogues by quantitatively comparing them with those of two-party. In previous dialogue research, two-party dialogues are mainly focussed because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer supported collaborative work environments and computer assisted instruction systems. In this paper, firstly we describe our data collection method of multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with those of two-party. Then we quantitatively compare these two kinds of dialogues such as the number of characters and turns and patterns of information exchanges. Lastly we show that patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
berovic-etal-2012-croatian
http://www.lrec-conf.org/proceedings/lrec2012/pdf/719_Paper.pdf
Croatian Dependency Treebank: Recent Development and Initial Experiments
We present the current state of development of the Croatian Dependency Treebank-with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics-and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, out of which the 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system-Nivre eager, Nivre standard and stack projective-running on default settings were used in the experiment. The highest performing system, implementing the Nivre eager algorithm, scored (LAS 71.31 UAS 80.93 LA 83.87) within our experiment setup. The results obtained serve as an illustration of treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.
false
[]
[]
null
null
null
Special thanks to our colleagues Tena Gnjatović and Ida Raffaelli from the Department of Linguistics, Faculty of Humanities and Social Sciences, University of Zagreb, for substantial contributions to the process of manual annotation of sentences for HOBS. The results presented here were partially obtained from research within projects ACCURAT (FP7, grant 248347), CESAR (ICT-PSP, grant 271022) funded by EC, and partially from projects 130-1300646-0645 and 130-1300646-1776 funded by the Ministry of Science, Education and Sports of the Republic of Croatia.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
arsenos-siolas-2020-ntuaails
https://aclanthology.org/2020.semeval-1.195
NTUAAILS at SemEval-2020 Task 11: Propaganda Detection and Classification with biLSTMs and ELMo
This paper describes the NTUAAILS submission for SemEval 2020 Task 11 Detection of Propaganda Techniques in News Articles. This task comprises of two different sub-tasks, namely A: Span Identification (SI), B: Technique Classification (TC). The goal for the SI sub-task is to identify specific fragments, in a given plain text, containing at least one propaganda technique. The TC sub-task aims to identify the applied propaganda technique in a given text fragment. A different model was trained for each sub-task. Our best performing system for the SI task consists of pre-trained ELMo word embeddings followed by a residual bidirectional LSTM network. For the TC sub-task, pre-trained word embeddings from GloVe are fed to a bidirectional LSTM neural network. The models achieved rank 28 among 36 teams with F1 score of 0.335 and rank 25 among 31 teams with 0.463 F1 score for SI and TC sub-tasks respectively. Our results indicate that the proposed deep learning models, although relatively simple in architecture and fast to train, achieve satisfactory results in the tasks on hand.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
takeuchi-etal-2004-construction
https://aclanthology.org/W04-1814
Construction of Grammar Based Term Extraction Model for Japanese
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false