Schema (one record per paper; field name, type, and observed value range in this split):
  ID               string, 11–54 chars
  url              string, 33–64 chars
  title            string, 11–184 chars
  abstract         string, 17–3.87k chars
  label_nlp4sg     bool, 2 classes
  task             sequence
  method           sequence
  goal1            string, 9 classes
  goal2            string, 9 classes
  goal3            string, 1 class
  acknowledgments  string, 28–1.28k chars
  year             string, 4 chars
  sdg1–sdg17       bool each; sdg3, sdg4, sdg5, sdg8, sdg9, sdg10, sdg11, sdg13, sdg16 and sdg17 show 2 classes in this split, the remaining flags a single class
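The records below follow this schema, one entry per paper. For working with such a dump programmatically, here is a minimal sketch that parses a JSON-lines export of the split and sanity-checks it against the schema above; the file name nlp4sg_papers.jsonl is an assumption, since the excerpt does not name its source file.

    import json

    SDG_COLS = [f"sdg{i}" for i in range(1, 18)]  # sdg1 .. sdg17

    # Load one JSON record per line ("nlp4sg_papers.jsonl" is hypothetical).
    records = []
    with open("nlp4sg_papers.jsonl", encoding="utf-8") as f:
        for line in f:
            records.append(json.loads(line))

    # Spot-check the schema properties listed above.
    for rec in records:
        assert isinstance(rec["label_nlp4sg"], bool)
        assert len(rec["year"]) == 4  # year is stored as a 4-character string
        assert all(isinstance(rec[c], bool) for c in SDG_COLS)

    print(len(records), "records;",
          sum(r["label_nlp4sg"] for r in records), "labeled as NLP4SG")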
shapiro-1978-path-based
https://aclanthology.org/T78-1031
Path-Based and Node-Based Inference in Semantic Networks
Two styles of performing inference in semantic networks are presented and compared. Path-based inference allows an arc or a path of arcs between two given nodes to be inferred from the existence of another specified path between the same two nodes. Path-based inference rules may be written using a binary relational calculus notation. Node-based inference allows a structure of nodes to be inferred from the existence of an instance of a pattern of node structures. Node-based inference rules can be constructed in a semantic network using a variant of a predicate calculus notation. Path-based inference is more efficient, while node-based inference is more general. A method is described of combining the two styles in a single system in order to take advantage of the strengths of each. Applications of path-based inference rules to the representation of the extensional equivalence of intensional concepts, and to the explication of inheritance in hierarchies are sketched.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 1978 | sdg1–sdg17: all false
shnarch-etal-2020-unsupervised
https://aclanthology.org/2020.findings-emnlp.243
Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains
Approaching new data can be quite daunting; you do not know how your categories of interest are realized in it, commonly there is no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach to reveal complex rules which cluster the unexplored corpus by its prominent categories (or facets). These rules are human-readable, thus providing an important ingredient which has become in short supply lately: explainability. Each rule provides an explanation for the commonality of all the texts it clusters together. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2020 | sdg1–sdg17: all false
yu-etal-2018-syntaxsqlnet
https://aclanthology.org/D18-1193
SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task
Most existing studies in text-to-SQL tasks do not require generating complex SQL queries with multiple clauses or sub-queries, and generalizing to new, unseen databases. In this paper we propose SyntaxSQLNet, a syntax tree network to address the complex and cross-domain text-to-SQL generation task. SyntaxSQLNet employs a SQL-specific syntax tree-based decoder with SQL generation path history and table-aware column attention encoders. We evaluate SyntaxSQLNet on a new large-scale text-to-SQL corpus containing databases with multiple tables and complex SQL queries containing multiple SQL clauses and nested queries. We use a database split setting where databases in the test set are unseen during training. Experimental results show that SyntaxSQLNet can handle a significantly greater number of complex SQL examples than prior work, outperforming the previous state-of-the-art model by 9.5% in exact matching accuracy. To our knowledge, we are the first to study this complex text-to-SQL task. Our task and models with the latest updates are available at https://yale-lily.github.io/seq2sql/spider.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We thank Graham Neubig, Tianze Shi, and three anonymous reviewers for their helpful feedback and discussion on this work.
year: 2018 | sdg1–sdg17: all false
shen-etal-2017-conditional
https://aclanthology.org/P17-2080
A Conditional Variational Framework for Dialog Generation
Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experimental results testify to the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61602451 and 61672445, and JSPS KAKENHI Grant Numbers 15H02754 and 16K12546.
year: 2017 | sdg1–sdg17: all false
tursun-cakici-2017-noisy
https://aclanthology.org/W17-4412
Noisy Uyghur Text Normalization
Uyghur is the second largest and most actively used social media language in China. However, a non-negligible part of Uyghur text appearing in social media is unsystematically written with the Latin alphabet, and it continues to increase in size. Uyghur text in this format is incomprehensible and ambiguous even to native Uyghur speakers. In addition, Uyghur texts in this form lack the potential for any kind of advancement for the NLP tasks related to the Uyghur language. Restoring and preventing noisy Uyghur text written with unsystematic Latin alphabets will be essential to the protection of Uyghur language and improving the accuracy of Uyghur NLP tasks. To this purpose, in this work we propose and compare the noisy channel model and the neural encoder-decoder model as normalizing methods.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.
year: 2017 | sdg1–sdg17: all false
baldridge-kruijff-2002-coupling
https://aclanthology.org/P02-1041
Coupling CCG and Hybrid Logic Dependency Semantics
Categorial grammar has traditionally used the λ-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We would like to thank Patrick Blackburn, Johan Bos, Nissim Francez, Alex Lascarides, Mark Steedman, Bonnie Webber and the ACL reviewers for helpful comments on earlier versions of this paper. All errors are, of course, our own. Jason Baldridge's work is supported in part by Overseas Research Student Award ORS/98014014. Geert-Jan Kruijff's work is supported by the DFG Sonderforschungsbereich 378 Resource-Sensitive Cognitive Processes, Project NEGRA EM6.
year: 2002 | sdg1–sdg17: all false
li-etal-2021-retrieve
https://aclanthology.org/2021.findings-acl.39
Retrieve & Memorize: Dialog Policy Learning with Multi-Action Memory
Dialogue policy learning, a subtask that determines the content of system response generation and then the degree of task completion, is essential for task-oriented dialogue systems. However, the unbalanced distribution of system actions in dialogue datasets often causes difficulty in learning to generate desired actions and responses. In this paper, we propose a retrieve-and-memorize framework to enhance the learning of system actions. Specifically, we first design a neural context-aware retrieval module to retrieve multiple candidate system actions from the training set given a dialogue context. Then, we propose a memory-augmented multi-decoder network to generate the system actions conditioned on the candidate actions, which allows the network to adaptively select key information in the candidate actions and ignore noise. We conduct experiments on the large-scale multi-domain task-oriented dialogue datasets MultiWOZ 2.0 and MultiWOZ 2.1. Experimental results show that our method achieves competitive performance among several state-of-the-art models in the context-to-response generation task.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: The paper was supported by the National Natural Science Foundation of China (No.61906217) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355).
year: 2021 | sdg1–sdg17: all false
wu-etal-2019-open
https://aclanthology.org/D19-1021
Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data
Open relation extraction (OpenRE) aims to extract relational facts from the open-domain corpus. To this end, it discovers relation patterns between named entities and then clusters those semantically equivalent patterns into a united relation cluster. Most OpenRE methods typically confine themselves to unsupervised paradigms, without taking advantage of existing relational facts in knowledge bases (KBs) and their high-quality labeled instances. To address this issue, we propose Relational Siamese Networks (RSNs) to learn similarity metrics of relations from labeled data of pre-defined relations, and then transfer the relational knowledge to identify novel relations in unlabeled data. Experiment results on two real-world datasets show that our framework can achieve significant improvements as compared with other state-of-the-art methods. Our code is available at https://github.com/thunlp/RSN.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61572273, 61661146007). Ruidong Wu is also supported by Tsinghua University Initiative Scientific Research Program.
year: 2019 | sdg1–sdg17: all false
sevgili-etal-2019-improving
https://aclanthology.org/P19-2044
Improving Neural Entity Disambiguation with Graph Embeddings
Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base. Current methods have mostly focused on unstructured text data to learn representations of entities; however, there is structured information in the knowledge base itself that should be useful to disambiguate entities. In this work, we propose a method that uses graph embeddings for integrating structured information from the knowledge base with unstructured information from text-based representations. Our experiments confirm that graph embeddings trained on a graph of hyperlinks between Wikipedia articles improve the performance of a simple feed-forward neural ED model and of a state-of-the-art neural ED system.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We thank the SRW mentor Matt Gardner and anonymous reviewers for their most useful feedback on this work. The work was partially supported by a Deutscher Akademischer Austauschdienst (DAAD) doctoral stipend and the DFG-funded JOIN-T project BI 1544/4.
year: 2019 | sdg1–sdg17: all false
wang-etal-2015-feature
https://aclanthology.org/P15-1110
Feature Optimization for Constituent Parsing via Neural Networks
The performance of discriminative constituent parsing relies crucially on feature engineering, and effective features usually have to be carefully selected through a painful manual process. In this paper, we propose to automatically learn a set of effective features via neural networks. Specifically, we build a feedforward neural network model, which takes as input a few primitive units (words, POS tags and certain contextual tokens) from the local context, induces the feature representation in the hidden layer and makes parsing predictions in the output layer. The network simultaneously learns the feature representation and the prediction model parameters using a back propagation algorithm. By pre-training the model on a large amount of automatically parsed data, and then fine-tuning on the manually annotated Treebank data, our parser achieves the highest F1 score at 86.6% on Chinese Treebank 5.1, and a competitive F1 score at 90.7% on English Treebank. More importantly, our parser generalizes well on cross-domain test sets, where we significantly outperform Berkeley parser by 3.4 points on average for Chinese and 2.5 points for English.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We thank the anonymous reviewers for comments. Haitao Mi is supported by DARPA HR0011-12-C-0015 (BOLT) and Nianwen Xue is supported by DARPA HR0011-11-C-0145 (BOLT). The views and findings in this paper are those of the authors and are not endorsed by DARPA.
year: 2015 | sdg1–sdg17: all false
das-bandyopadhyay-2010-towards
https://aclanthology.org/Y10-1092
Towards the Global SentiWordNet
The discipline in which sentiment/opinion/emotion is identified and classified in human-written text is well known as sentiment analysis. A typical computational approach to sentiment analysis starts with prior polarity lexicons, where entries are tagged with their prior, out-of-context polarity as human beings perceive it using cognitive knowledge. To date, research efforts found in the sentiment analysis literature deal mostly with English texts. In this article, we propose an interactive gaming technology (Dr Sentiment) to create and validate SentiWordNet in 56 languages by involving the Internet population. Dr Sentiment is a fictitious character that interacts with players through a series of questions, finally reveals the behavioral or sentimental status of any player, and stores the lexicon entries as the players polarize them during play. The interactive gaming technology is then compared with multiple automatic linguistic techniques, such as WordNet-based, dictionary-based, corpus-based or generative approaches, for generating SentiWordNet(s) for Indian languages as well as other international languages. A number of automatic, semi-automatic and manual validation and evaluation methodologies have been adopted to measure the coverage and credibility of the developed SentiWordNet(s).
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2010 | sdg1–sdg17: all false
al-saleh-menai-2018-ant
https://aclanthology.org/C18-1062
Ant Colony System for Multi-Document Summarization
This paper proposes an extractive multi-document summarization approach based on an ant colony system to optimize the information coverage of summary sentences. The implemented system was evaluated on both English and Arabic versions of the corpus of the Text Analysis Conference 2011 MultiLing Pilot by using ROUGE metrics. The evaluation results are promising in comparison to those of the participating systems. Indeed, our system achieved the best scores based on several ROUGE metrics.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2018 | sdg1–sdg17: all false
trippel-etal-2014-towards
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1011_Paper.pdf
Towards automatic quality assessment of component metadata
Measuring the quality of metadata is only possible by assessing the quality of the underlying schema and the metadata instance. We propose some factors that are measurable automatically for metadata according to the CMD framework, taking into account the variability of schemas that can be defined in this framework. The factors include, among others, the number of elements, the (re-)use of reusable components, and the number of filled-in elements. The resulting score can serve as an indicator of the overall quality of the CMD instance, used for feedback to metadata providers or to provide an overview of the overall quality of metadata within a repository. The score is independent of specific schemas and generalizable. An overall assessment of harvested metadata is provided in the form of statistical summaries and the distribution, based on a corpus of harvested metadata. The score is implemented in XQuery and can be used in tools, editors and repositories.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2014 | sdg1–sdg17: all false
hori-etal-2004-evaluation
https://aclanthology.org/W04-1014
Evaluation Measures Considering Sentence Concatenation for Automatic Summarization by Sentence or Word Extraction
Automatic summaries of text generated through sentence or word extraction have been evaluated by comparing them with manual summaries generated by humans, using numerical evaluation measures based on precision or accuracy. Although sentence extraction has previously been evaluated based only on the precision of a single sentence, sentence concatenations in the summaries should be evaluated as well. We have evaluated the appropriateness of sentence concatenations in summaries by using evaluation measures used for evaluating word concatenations in summaries through word extraction. We determined that measures considering sentence concatenation reflect human judgment much better than those based only on the precision of a single sentence.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We thank NHK (Japan Broadcasting Corporation) for providing the broadcast news database. We also thank Prof. Sadaoki Furui at Tokyo Institute of Technology for providing the summaries of the broadcast news speech.
year: 2004 | sdg1–sdg17: all false
salehi-etal-2016-determining
https://aclanthology.org/C16-1046
Determining the Multiword Expression Inventory of a Surprise Language
Much previous research on multiword expressions (MWEs) has focused on the token-and typelevel tasks of MWE identification and extraction, respectively. Such studies typically target known prevalent MWE types in a given language. This paper describes the first attempt to learn the MWE inventory of a "surprise" language for which we have no explicit prior knowledge of MWE patterns, certainly no annotated MWE data, and not even a parallel corpus. Our proposed model is trained on a treebank with MWE relations of a source language, and can be applied to the monolingual corpus of the surprise language to identify its MWE construction types.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We wish to thank Long Duong for help with the transfer-based dependency parsing, Jan Snajder for his kind assistance with the Croatian annotation, and Dan Flickinger, Lars Hellan, Ned Letcher and João Silva for valuable advice in the early stages of development of this work. We would also like to thank the anonymous reviewers for their insightful comments and valuable suggestions. NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT Centre of Excellence programme.
year: 2016 | sdg1–sdg17: all false
wu-etal-2003-totalrecall
https://aclanthology.org/O03-3005
TotalRecall: A Bilingual Concordance in National Digital Learning Project - CANDLE
This paper describes a Web-based English-Chinese concordance system, TotalRecall, being developed in the National Digital Learning Project CANDLE to promote translation reuse and encourage authentic and idiomatic use in second language learning. We exploited and structured existing high-quality translations from the bilingual Sinorama Magazine to build the concordance of authentic text and translation. Novel approaches were taken to provide high-precision bilingual alignment on the sentence, phrase and word levels. A browser-based user interface was also developed for ease of access over the Internet. Users can search for a word, phrase or expression in English or Chinese. The Web-based user interface facilitates the recording of user actions to provide data for further research.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We acknowledge the support for this study through grants from National Science Council and Ministry of Education, Taiwan (NSC 90-2411-H-007-033-MC and MOE EX-91-E-FA06-4-4) and a special grant for preparing the Sinorama Corpus for distribution by the Association for Computational Linguistics and Chinese Language Processing.
year: 2003 | sdg1–sdg17: all false
chakrabarty-etal-2021-mermaid
https://aclanthology.org/2021.naacl-main.336
MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding
Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus (Jacobs, 2018) to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model finetuned on our parallel data to generate high quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. Moreover, a task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032, and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. The authors would like to thank the members of PLUS-Lab at the University of California Los Angeles and University of Southern California and the anonymous reviewers for helpful comments.
year: 2021 | sdg1–sdg17: all false
daille-morin-2005-french
https://aclanthology.org/I05-1062
French-English Terminology Extraction from Comparable Corpora
This article presents a method of extracting bilingual lexica composed of single-word terms (SWTs) and multi-word terms (MWTs) from comparable corpora of a technical domain. First, this method extracts MWTs in each language, and then uses statistical methods to align single words and MWTs by exploiting the term contexts. After explaining the difficulties involved in aligning MWTs and specifying our approach, we show the adopted process for bilingual terminology extraction and the resources used in our experiments. Finally, we evaluate our approach and demonstrate its significance, particularly in relation to non-compositional MWT alignment.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We are particularly grateful to Samuel Dufour-Kowalski, who undertook the computer programs. This work has also benefited from his comments.
year: 2005 | sdg1–sdg17: all false
grundkiewicz-etal-2015-human
https://aclanthology.org/D15-1052
Human Evaluation of Grammatical Error Correction Systems
The paper presents the results of the first large-scale human evaluation of automatic grammatical error correction (GEC) systems. Twelve participating systems and the unchanged input of the CoNLL-2014 shared task have been reassessed in a WMT-inspired human evaluation procedure. Methods introduced for the Workshop on Machine Translation evaluation campaigns have been adapted to GEC and extended where necessary. The produced rankings are used to evaluate standard metrics for grammatical error correction in terms of correlation with human judgment.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: Partially funded by the Polish National Science Centre (Grant No. 2014/15/N/ST6/02330). The authors would like to thank the following judges for their hard work on the ranking task: Sam Bennett, Peter Dunne, Stacia Levy, Kenneth Turner, and John Winward.
year: 2015 | sdg1–sdg17: all false
schrodt-2020-keynote
https://aclanthology.org/2020.aespen-1.3
Keynote Abstract: Current Open Questions for Operational Event Data
In this brief keynote, I will address what I see as five major issues in the development of operational event data sets (that is, event data intended for real-time monitoring and forecasting, rather than purely for academic research). First, there are no currently active real-time systems with fully open and transparent pipelines: instead, one or more components are proprietary. Ideally we need several of these, using different approaches (and in particular, comparisons between classical dictionary- and rule-based coders versus newer coders based on machine-learning approaches). Second, the CAMEO event ontology needs to be replaced by a more general system that includes, for example, political codes for electoral competition, legislative debate, and parliamentary coalition formation, as well as a robust set of codes for non-political events such as natural disasters, disease, and economic dislocations. Third, the issue of duplicate stories needs to be addressed (for example, the ICEWS system can generate as many as 150 coded events from a single occurrence on the ground): either reduce these sets of related stories to a single set of events, or at least label clusters of related stories, as is already done in a number of systems (for example, European Media Monitor). Fourth, a systematic analysis needs to be done of the additional information provided by hundreds of highly local sources (which have varying degrees of veracity and independence from states and local elites) as opposed to a relatively small number of international sources: obviously this will vary depending on the specific question being asked, but it has yet to be addressed at all. Finally, and this will overlap with academic work, a number of open benchmarks need to be constructed for the calibration of both coding systems and resulting models: these could be historical but need to include an easily licensed (or open) very large set of texts covering a substantial period of time, probably along the lines of the Linguistic Data Consortium Gigaword sets; if licensed, these need to be accessible to individual researchers and NGOs, not just academic institutions.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2020 | sdg1–sdg17: all false
baquero-arnal-etal-2019-mllp
https://aclanthology.org/W19-5423
The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task
This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. They are based on the Transformer architecture as well as on a novel architecture called 2D alternating RNN. Both systems have been domain adapted through fine-tuning that has been shown to be very effective.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 761758 (X5gon).
year: 2019 | sdg1–sdg17: all false
loukachevitch-dobrov-2004-development
http://www.lrec-conf.org/proceedings/lrec2004/pdf/343.pdf
Development of Bilingual Domain-Specific Ontology for Automatic Conceptual Indexing
In the paper we describe the development, means of evaluation and applications of the Russian-English Sociopolitical Thesaurus, specially developed as a linguistic resource for automatic text processing applications. The Sociopolitical domain is not a domain of social research but a broad domain of social relations including economic, political, military, cultural, sports and other subdomains. The knowledge of this domain is necessary for automatic text processing of such important documents as official documents, legislative acts and newspaper articles.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: Partial support for this work is provided by the Russian Foundation for Basic Research through grant # 03-01-00472.
year: 2004 | sdg1–sdg17: all false
takamichi-saruwatari-2018-cpjd
https://aclanthology.org/L18-1067
CPJD Corpus: Crowdsourced Parallel Speech Corpus of Japanese Dialects
Public parallel corpora of dialects can accelerate related studies such as spoken language processing. Various corpora have been collected using a well-equipped recording environment, such as voice recording in an anechoic room. However, due to geographical and expense issues, it is impossible to use such a perfect recording environment for collecting all existing dialects. To address this problem, we used web-based recording and crowdsourcing platforms to construct a crowdsourced parallel speech corpus of Japanese dialects (CPJD corpus) including parallel text and speech data of 21 Japanese dialects. We recruited native dialect speakers on the crowdsourcing platform, and the hired speakers recorded their dialect speech using their personal computer or smartphone in their homes. This paper shows the results of the data collection and analyzes the audio data in terms of the signal-to-noise ratio and mispronunciations.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: Part of this work was supported by the SECOM Science and Technology Foundation.
year: 2018 | sdg1–sdg17: all false
jiang-zhai-2006-exploiting
https://aclanthology.org/N06-1010
Exploiting Domain Structure for Named Entity Recognition
Named Entity Recognition (NER) is a fundamental task in text mining and natural language understanding. Current approaches to NER (mostly based on supervised learning) perform well on domains similar to the training domain, but they tend to adapt poorly to slightly different domains. We present several strategies for exploiting the domain structure in the training data to learn a more robust named entity recognizer that can perform well on a new domain. First, we propose a simple yet effective way to automatically rank features based on their generalizabilities across domains. We then train a classifier with strong emphasis on the most generalizable features. This emphasis is imposed by putting a rank-based prior on a logistic regression model. We further propose a domain-aware cross validation strategy to help choose an appropriate parameter for the rank-based prior. We evaluated the proposed method with a task of recognizing named entities (genes) in biology text involving three species. The experimental results show that the new domain-aware approach outperforms a state-of-the-art baseline method in adapting to new domains, especially when there is a great difference between the new domain and the training domain.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work was in part supported by the National Science Foundation under award numbers 0425852, 0347933, and 0428472. We would like to thank Bruce Schatz, Xin He, Qiaozhu Mei, Xu Ling, and some other BeeSpace project members for useful discussions. We would like to thank Mark Sammons for his help with FEX. We would also like to thank the anonymous reviewers for their comments.
year: 2006 | sdg1–sdg17: all false
nn-1976-finite-string-volume-13-number-4
https://aclanthology.org/J76-2010
The FINITE STRING, Volume 13, Number 4 (continued)
Each year the federal government contracts for billions of dollars of work to support efforts deemed to be in the national interest. A significant percentage of the contract services are in the form of Research and Development (R&D) or programmatic work which colleges and universities are particularly well-suited to perform. The government commits these funds in either of two ways: grants or contracts. University researchers are generally more familiar with the grant procedure than with the contract procedure. Under a grant program, a given federal agency is authorized to grant funds to non-profit institutions, frequently educational institutions, for the purpose of supporting research in a given general area. A body of general conditions is established by the Congress and refined by the applicable agency to set parameters for the program as a whole. A specific grant for a program can be made so long as it fits within the general standards (the Guidelines) of the program and meets whatever qualitative standards for review have been established.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 1976 | sdg1–sdg17: all false
wang-etal-1999-lexicon
https://aclanthology.org/Y99-1023
The Lexicon in FCIDB : A Friendly Chinese Interface for DBMS
FCIDB (Friendly Chinese Interface for DataBase management systems) can understand users' queries in the Chinese language. It works like a translator that translates Chinese queries into SQL commands. In the translation process, the lexicon of FCIDB plays a key role in both parsing and word segmentation. We designed some questionnaires to collect the frequently occurring words and add them to the public lexicon in FCIDB. FCIDB will produce a private lexicon for every new connected database. This paper will focus on the words included in the public lexicon and in the private lexicon. We also discuss the function, the structure, and the contents of the lexicon in FCIDB.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We carried out an experiment to explore the lexicon. We constructed two different databases and designed questionnaires to collect queries. The results helped us to identify which words we needed in the public and private lexicon. We still need to simplify the word definition process to make it easier for users to add terminology and to move from one database to another. Now, the system can be an interface with ACCESS and Visual dBASE. In the future, we hope to port it to other systems.
year: 1999 | sdg1–sdg17: all false
mclauchlan-2004-thesauruses
https://aclanthology.org/W04-2410
Thesauruses for Prepositional Phrase Attachment
Probabilistic models have been effective in resolving prepositional phrase attachment ambiguity, but sparse data remains a significant problem. We propose a solution based on similarity-based smoothing, where the probability of new PPs is estimated with information from similar examples generated using a thesaurus. Three thesauruses are compared on this task: two existing generic thesauruses and a new specialist PP thesaurus tailored for this problem. We also compare three smoothing techniques for prepositional phrases. We find that the similarity scores provided by the thesaurus tend to weight distant neighbours too highly, and describe a better score based on the rank of a word in the list of similar words. Our smoothing methods are applied to an existing PP attachment model and we obtain significant improvements over the baseline.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: Many thanks to Julie Weeds and Adam Kilgarriff for providing the specialist and WASPS thesauruses, and for useful discussions. Thanks also to the anonymous reviewers for many helpful comments.
year: 2004 | sdg1–sdg17: all false
elkaref-hassan-2021-joint
https://aclanthology.org/2021.smm4h-1.16
A Joint Training Approach to Tweet Classification and Adverse Effect Extraction and Normalization for SMM4H 2021
In this work we describe our submissions to the Social Media Mining for Health (SMM4H) 2021 Shared Task (Magge et al., 2021). We investigated the effectiveness of a joint training approach to Task 1, specifically classification, extraction and normalization of Adverse Drug Effect (ADE) mentions in English tweets. Our approach performed well on the normalization task, achieving an above average f1 score of 24%, but less so on classification and extraction, with f1 scores of 22% and 37% respectively. Our experiments also showed that a larger dataset with more negative results led to stronger results than a smaller more balanced dataset, even when both datasets have the same positive examples. Finally we also submitted a tuned BERT model for Task 6: Classification of Covid-19 tweets containing symptoms, which achieved an above average f1 score of 96%.
label_nlp4sg: true | task: [] | method: [] | goal1: Good Health and Well-Being | goal2: null | goal3: null | acknowledgments: null | year: 2021 | sdg3: true; all other sdg flags false
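The record above is the only one in this excerpt with label_nlp4sg set to true, and its goal1 value ("Good Health and Well-Being") lines up with its single raised SDG flag (sdg3). Filtering on these fields is the natural access pattern for the dump; below is a minimal sketch that reuses the records list and SDG_COLS from the earlier snippet, so the field names are the only thing it assumes about the data.

    # List every NLP4SG-positive record with its goal and raised SDG flags.
    positives = [r for r in records if r["label_nlp4sg"]]
    for r in positives:
        raised = [c for c in SDG_COLS if r[c]]
        print(r["ID"], "|", r["goal1"], "|", raised)
    # For the record above this would print something like:
    #   elkaref-hassan-2021-joint | Good Health and Well-Being | ['sdg3']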
gu-etal-2018-language
https://aclanthology.org/D18-1493
Language Modeling with Sparse Product of Sememe Experts
Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words for language modeling, named Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given textual context. Afterwards, it regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. In this way, SDLM enables language models to work beyond word-level manipulation to fine-grained sememe-level semantics, and offers us more powerful tools to fine-tune language models and improve the interpretability as well as the robustness of language models. Experiments on language modeling and the downstream application of headline generation demonstrate the significant effectiveness of SDLM. Source code and data used in the experiments can be accessed at https://github.com/thunlp/SDLM-pytorch.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work is supported by the 973 Program (No. 2014CB340501), the National Natural Science Foundation of China (NSFC No. 61572273) and the research fund of Tsinghua University-Tencent Joint Laboratory for Internet Innovation Technology. This work is also funded by China Association for Science and Technology (2016QNRC001). Hao Zhu and Jun Yan are supported by Tsinghua University Initiative Scientific Research Program. We thank all members of Tsinghua NLP lab. We also thank anonymous reviewers for their careful reading and their insightful comments.
year: 2018 | sdg1–sdg17: all false
chen-chen-2006-high
https://aclanthology.org/P06-2011
A High-Accurate Chinese-English NE Backward Translation System Combining Both Lexical Information and Web Statistics
Named entity translation is indispensable in cross-language information retrieval nowadays. We propose an approach that combines lexical information, web statistics, and inverse search based on Google to backward-translate a Chinese named entity (NE) into English. Our system achieves a high Top-1 accuracy of 87.6%, which is a relatively good performance among results reported in this area to date.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2006 | sdg1–sdg17: all false
hopkins-may-2013-models
https://aclanthology.org/P13-1139
Models of Translation Competitions
What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2013 | sdg1–sdg17: all false
fischer-laubli-2020-whats
https://aclanthology.org/2020.eamt-1.23
What's the Difference Between Professional Human and Machine Translation? A Blind Multi-language Study on Domain-specific MT
Machine translation (MT) has been shown to produce a number of errors that require human post-editing, but the extent to which professional human translation (HT) contains such errors has not yet been compared to MT. We compile pretranslated documents in which MT and HT are interleaved, and ask professional translators to flag errors and post-edit these documents in a blind evaluation. We find that the post-editing effort for MT segments is only higher in two out of three language pairs, and that the number of segments with wrong terminology, omissions, and typographical problems is similar in HT.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2020 | sdg1–sdg17: all false
nghiem-ananiadou-2018-aplenty
https://aclanthology.org/D18-2019
APLenty: annotation tool for creating high-quality datasets using active and proactive learning
In this paper, we present APLenty, an annotation tool for creating high-quality sequence labeling datasets using active and proactive learning. A major innovation of our tool is the integration of automatic annotation with active learning and proactive learning. This makes the task of creating labeled datasets easier, less time-consuming and requiring less human effort. APLenty is highly flexible and can be adapted to various other tasks.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This research has been carried out with funding from BBSRC BB/P025684/1 and BB/M006891/1. We would like to thank the anonymous reviewers for their helpful comments.
year: 2018 | sdg1–sdg17: all false
naskar-bandyopadhyay-2005-use
https://aclanthology.org/2005.mtsummit-posters.21
Use of Machine Translation in India: Current Status
A survey of the machine translation systems that have been developed in India for translation from English to Indian languages and among Indian languages reveals that the MT systems are used in field testing or are available as web translation services. These systems are also used for teaching machine translation to students and researchers. Most of these systems are in the English-Hindi or Indian language-Indian language domain. The translation domains are mostly government documents/reports and news stories. There are a number of other MT systems that are at various phases of development and have been demonstrated at various forums. Many of these systems cover other Indian languages besides Hindi.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2005 | sdg1–sdg17: all false
wang-etal-2005-web
https://aclanthology.org/I05-1046
Web-Based Unsupervised Learning for Query Formulation in Question Answering
Converting questions to effective queries is crucial to open-domain question answering systems. In this paper, we present a web-based unsupervised learning approach for transforming a given natural-language question to an effective query. The method involves querying a search engine for Web passages that contain the answer to the question, extracting patterns that characterize fine-grained classification for answers, and linking these patterns with n-grams in answer passages. Independent evaluation on a set of questions shows that the proposed approach outperforms a naive keyword-based approach in terms of mean reciprocal rank and human effort.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2005 | sdg1–sdg17: all false
zhang-bansal-2021-finding
https://aclanthology.org/2021.emnlp-main.531
Finding a Balanced Degree of Automation for Summary Evaluation
Human evaluation for summarization tasks is reliable but brings in issues of reproducibility and high costs. Automatic metrics are cheap and reproducible but sometimes poorly correlated with human judgment. In this work, we propose flexible semi-automatic to automatic summary evaluation metrics, following the Pyramid human evaluation method. The semi-automatic Lite^2 Pyramid retains the reusable human-labeled Summary Content Units (SCUs) for reference(s) but replaces the manual work of judging SCUs' presence in system summaries with a natural language inference (NLI) model. The fully automatic Lite^3 Pyramid further substitutes SCUs with automatically extracted Semantic Triplet Units (STUs) via a semantic role labeling (SRL) model. Finally, we propose in-between metrics, Lite^2.x Pyramid, where we use a simple regressor to predict how well the STUs can simulate SCUs and retain SCUs that are more difficult to simulate, which provides a smooth transition and balance between automation and manual evaluation. Compared to 15 existing metrics, we evaluate human-metric correlations on 3 existing meta-evaluation datasets and our newly collected PyrXSum (with 100/10 XSum examples/systems). It shows that Lite^2 Pyramid consistently has the best summary-level correlations; Lite^3 Pyramid works better than or comparably to other automatic metrics; and Lite^2.x Pyramid trades off small correlation drops for larger manual-effort reduction, which can reduce costs for future data collection.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: We thank the reviewers for their helpful comments. We thank Xiang Zhou for useful discussions and thank Steven Chen for proofreading SCUs for PyrXSum. This work was supported by NSF-CAREER Award 1846185.
year: 2021 | sdg1–sdg17: all false
escudero-etal-2000-comparison
https://aclanthology.org/W00-0706
A Comparison between Supervised Learning Algorithms for Word Sense Disambiguation
This paper describes a set of comparative experiments, including cross-corpus evaluation, between five alternative algorithms for supervised Word Sense Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNOW, Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The LazyBoosting algorithm outperforms the other four state-of-the-art algorithms in terms of accuracy and ability to tune to new domains; 2) The domain dependence of WSD systems seems very strong and suggests that some kind of adaptation or tuning is required for cross-corpus application.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2000 | sdg1–sdg17: all false
federmann-lewis-2016-microsoft
https://aclanthology.org/2016.iwslt-1.12
Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German
We describe the Microsoft Speech Language Translation (MSLT) corpus, which was created in order to evaluate end-to-end conversational speech translation quality. The corpus was created from actual conversations over Skype, and we provide details on the recording setup and the different layers of associated text data. The corpus release includes Test and Dev sets with reference transcripts for speech recognition. Additionally, cleaned up transcripts and reference translations are available for evaluation of machine translation quality. The IWSLT 2016 release described here includes the source audio, raw transcripts, cleaned up transcripts, and translations to or from English for both French and German.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null | acknowledgments: null | year: 2016 | sdg1–sdg17: all false
zhang-duh-2020-reproducible
https://aclanthology.org/2020.tacl-1.26
Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems
Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model's architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work is supported in part by an Amazon Research Award and an IARPA MATERIAL grant. We are especially grateful to Michael Denkowski for helpful discussions and feedback throughout the project.
year: 2020 | sdg1–sdg17: all false
angrosh-etal-2014-lexico
https://aclanthology.org/C14-1188
Lexico-syntactic text simplification and compression with typed dependencies
We describe two systems for text simplification using typed dependency structures, one that performs lexical and syntactic simplification, and another that performs sentence compression optimised to satisfy global text constraints such as lexical density, the ratio of difficult words, and text length. We report a substantial evaluation that demonstrates the superiority of our systems, individually and in combination, over the state of the art, and also report a comprehension based evaluation of contemporary automatic text simplification systems with target non-native readers.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This research is supported by an award made by the EPSRC; award reference: EP/J018805/1.
year: 2014 | sdg1–sdg17: all false
meng-rumshisky-2018-triad
https://aclanthology.org/C18-1004
Triad-based Neural Network for Coreference Resolution
We propose a triad-based neural network system that generates affinity scores between entity mentions for coreference resolution. The system simultaneously accepts three mentions as input, taking mutual dependency and logical constraints of all three mentions into account, and thus makes more accurate predictions than the traditional pairwise approach. Depending on system choices, the affinity scores can be further used in clustering or mention ranking. Our experiments show that a standard hierarchical clustering using the scores produces state-of-the-art results with MUC and B³ metrics on the English portion of the CoNLL 2012 Shared Task. The model does not rely on many handcrafted features and is easy to train and use. The triads can also be easily extended to polyads of higher orders. To our knowledge, this is the first neural network system to model mutual dependency of more than two members at mention level.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This project is funded in part by an NSF CAREER award to Anna Rumshisky (IIS-1652742).
year: 2018 | sdg1–sdg17: all false
camargo-de-souza-etal-2013-fbk
https://aclanthology.org/W13-2243
FBK-UEdin Participation to the WMT13 Quality Estimation Shared Task
In this paper we present the approach and system setup of the joint participation of Fondazione Bruno Kessler and University of Edinburgh in the WMT 2013 Quality Estimation shared-task. Our submissions were focused on tasks whose aim was predicting sentence-level Human-mediated Translation Edit Rate and sentence-level post-editing time (Task 1.1 and 1.3, respectively). We designed features that are built on resources such as automatic word alignment, n-best candidate translation lists, back-translations and word posterior probabilities. Our models consistently overcome the baselines for both tasks and performed particularly well for Task 1.3, ranking first among seven participants.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work was partially funded by the European Commission under the project MateCat, Grant 287688. The authors want to thank Philipp Koehn for training two of the models used in Section 2.2.
year: 2013 | sdg1–sdg17: all false
rieser-lemon-2008-automatic
http://www.lrec-conf.org/proceedings/lrec2008/pdf/592_paper.pdf
Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation
The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users' preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.
label_nlp4sg: false | task: [] | method: [] | goal1: null | goal2: null | goal3: null
acknowledgments: This work was partially funded by the International Research Training Group Language Technology and Cognitive Systems, Saarland University, and by EPSRC project number EP/E019501/1. The research leading to these results has also received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 216594 (CLASSIC project: www.classic-project.org).
year: 2008 | sdg1–sdg17: all false
ney-popovic-2004-improving
https://aclanthology.org/C04-1045
Improving Word Alignment Quality using Morpho-syntactic Information
In this paper, we present an approach to include morpho-syntactic dependencies in the training of statistical alignment models. Existing statistical translation systems usually treat different derivations of the same base form as if they were independent of each other. We propose a method which explicitly takes such interdependencies into account during the EM training of the statistical alignment models. The evaluation is done by comparing the obtained Viterbi alignments with a manually annotated reference alignment. Improvements in alignment quality over the best system known to us are reported on the German-English Verbmobil corpus.
false
[]
[]
null
null
null
We assume that the method can be very effective for cases where only small amount of data is available. We also expect further improvements by performing a special modelling for the rare words.We are planning to investigate possibilities of improving the alignment quality for different language pairs using different types of morphosyntactic information, like for example to use word stems and suffixes for morphologicaly rich languages where some parts of the words have to be aligned to the whole English words (e.g. Spanish verbs, Finnish in general, etc.) We are also planning to use the refined alignments for the translation process.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
woodley-etal-2006-natural
https://aclanthology.org/U06-1026
Natural Language Processing and XML Retrieval
XML information retrieval (XML-IR) systems respond to user queries with results more specific than documents. XML-IR queries contain both content and structural requirements traditionally expressed in a formal language. However, an intuitive alternative is natural language queries (NLQs). Here, we discuss three approaches for handling NLQs in an XML-IR system that are comparable to, and even outperform formal language queries.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gulati-2015-extracting
https://aclanthology.org/W15-5921
Extracting Information from Indian First Names
The first name of a person can convey important demographic and cultural information about that person. This paper proposes statistical models for extracting vital information, namely gender, religion and name validity, from Indian first names. The statistical models combine classical features like n-grams and Levenshtein distance with some self-observed features like vowel score and religion belief. Rigorous evaluation of the models has been performed with several machine learning algorithms to compare accuracy, F-measure, Kappa statistic and RMS error. The experiments give promising results which indicate that the proposed models can be directly used in other information extraction systems.
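As an illustration of the kind of character-level features the abstract mentions, here is a hedged sketch: character n-grams, which capture gender-correlated suffixes, feeding a standard classifier. The name list, labels, and the Naive Bayes choice are assumptions for demonstration only.

```python
# Sketch of first-name gender classification from character n-grams.
# The exact feature set and classifier are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

names = ["priya", "rahul", "anita", "vikram", "sunita", "arjun"]
gender = ["F", "M", "F", "M", "F", "M"]

# character n-grams capture suffixes like "-a"/"-ita" that correlate
# with gender in many Indian first names
clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    MultinomialNB(),
)
clf.fit(names, gender)
print(clf.predict(["kavita", "rohan"]))
```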
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
devault-stone-2004-interpreting
https://aclanthology.org/C04-1181
Interpreting Vague Utterances in Context
We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.
false
[]
[]
null
null
null
We thank Kees van Deemter and our anonymous reviewers for valuable comments. This work was supported by NSF grant HLC 0308121.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fell-etal-2020-love
https://aclanthology.org/2020.lrec-1.262
Love Me, Love Me, Say (and Write!) that You Love Me: Enriching the WASABI Song Corpus with Lyrics Annotations
We present the WASABI Song Corpus, a large corpus of songs enriched with metadata extracted from music databases on the Web, and resulting from the processing of song lyrics and from audio analysis. More specifically, given that lyrics encode an important part of the semantics of a song, we focus here on the description of the methods we proposed to extract relevant information from the lyrics, such as their structure segmentation, their topics, the explicitness of the lyrics content, the salient passages of a song and the emotions conveyed. The creation of the resource is still ongoing: so far, the corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above mentioned methods. Such corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing an intelligent browsing, categorization and recommendation of songs. We provide the files of the current version of the WASABI Song Corpus, the models we have built on it as well as updates here: https://github.com/micbuffa/WasabiDataset.
false
[]
[]
null
null
null
This work is partly funded by the French Research National Agency (ANR) under the WASABI project (contract ANR-16-CE23-0017-01) and by the EU Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 690974 (MIREL).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shi-etal-2021-keyword
https://aclanthology.org/2021.ecnlp-1.5
Keyword Augmentation via Generative Methods
Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on a generative seq2seq model and a trie-based search mechanism, which is able to generate high-quality keywords for any products or product lists. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experiment results demonstrate that our method is able to generate more valid keywords which can serve as an efficient addition to advertiser selected keywords.
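A sketch of how a trie can constrain generation so that only approved keyword phrases are emitted, which is the general mechanism the abstract names. The `TrieNode` structure, greedy search, and toy scoring function are assumptions; the paper's seq2seq scorer would replace `score_fn`.

```python
# Trie-based constrained search: at each decoding step the candidate
# next tokens are restricted to children of the current trie node, so
# the generator can only emit phrases from an approved vocabulary.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

def build_trie(phrases):
    root = TrieNode()
    for phrase in phrases:
        node = root
        for tok in phrase.split():
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root

def generate(root, score_fn):
    """Greedy decoding constrained to paths in the trie."""
    node, out = root, []
    while node.children:
        tok = max(node.children, key=lambda t: score_fn(out, t))
        out.append(tok)
        node = node.children[tok]
        if node.is_end:
            break
    return " ".join(out)

trie = build_trie(["running shoes men", "running shorts", "trail shoes"])
print(generate(trie, score_fn=lambda prefix, tok: len(tok)))  # toy scorer
```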
false
[]
[]
null
null
null
We would like to thank to Hongyu Zhu, Weiming Wu, Barry Bai, Hirohisa Fujita for their help to set up the online A/B testing, and all the reviewers for their valuable suggestions.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kanerva-etal-2014-turku
https://aclanthology.org/S14-2121
Turku: Broad-Coverage Semantic Parsing with Rich Features
In this paper we introduce our system capable of producing semantic parses of sentences using three different annotation formats. The system was used to participate in the SemEval-2014 Shared Task on broad-coverage semantic dependency parsing and it was ranked third with an overall F1-score of 80.49%. The system has a pipeline architecture, consisting of three separate supervised classification steps.
false
[]
[]
null
null
null
This work was supported by the Emil Aaltonen Foundation and the Kone Foundation. Computational resources were provided by CSC -IT Center for Science.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
miller-etal-2008-infrastructure
http://www.lrec-conf.org/proceedings/lrec2008/pdf/805_paper.pdf
An Infrastructure, Tools and Methodology for Evaluation of Multicultural Name Matching Systems
This paper describes a Name Matching Evaluation Laboratory that is a joint effort across multiple projects. The lab houses our evaluation infrastructure as well as multiple name matching engines and customized analytical tools. Included is an explanation of the methodology used by the lab to carry out evaluations. This methodology is based on standard information retrieval evaluation, which requires a carefully-constructed test data set. The paper describes how we created that test data set, including the "ground truth" used to score the systems' performance. Descriptions and snapshots of the lab's various tools are provided, as well as information on how the different tools are used throughout the evaluation process. By using this evaluation process, the lab has been able to identify strengths and weaknesses of different name matching engines. These findings have led the lab to an ongoing investigation into various techniques for combining results from multiple name matching engines to achieve optimal results, as well as into research on the more general problem of identity management and resolution.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-der-meer-2010-thousand
https://aclanthology.org/2010.eamt-1.3
Let a Thousand MT Systems Bloom
Looking into the future, I see a thousand MT systems blooming. I see fortune for the translation industry, and new solutions to overcome failed translations. I see a better world due to improved communications among the world's seven billion citizens. And the reason why I am so optimistic is that the process of data effectiveness is joining hands with the trend towards profit of sharing. The first is somewhat hidden from view in academic circles; the other leads a public life in the media and on the internet. One is simply science at work, steadily proving that numbers count and synergies work. The other is part of the ongoing battle between self-interest and the Zeitgeist. And the Zeitgeist is destined to win." In his presentation Jaap van der Meer will share a perspective on translation automation, localization business innovation and industry collaboration.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
carl-etal-2005-reversible
https://aclanthology.org/2005.mtsummit-ebmt.3
Reversible Template-based Shake \& Bake Generation
Corpus-based MT systems that analyse and generalise texts beyond the surface forms of words require generation tools to regenerate the various internal representations into valid target language (TL) sentences. While the generation of word-forms from lemmas is probably the last step in every text generation process at its very bottom end, token-generation cannot be accomplished without structural and morpho-syntactic knowledge of the sentence to be generated. As in many other MT models, this knowledge is composed of a target language model and a bag of information transferred from the source language. In this paper we establish an abstracted, linguistically informed, target language model. We use a tagger, a lemmatiser and a parser to infer a template grammar from the TL corpus. Given a linguistically informed TL model, the aim is to see what need be provided from the transfer module for generation. During computation of the template grammar, we simultaneously build up for each TL sentence the content of the bag such that the sentence can be deterministically reproduced. In this way we control the completeness of the approach and will have an idea of what pieces of information we need to code in the TL bag.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jiang-etal-2016-encoding
https://aclanthology.org/D16-1260
Encoding Temporal Information for Time-Aware Link Prediction
Most existing knowledge base (KB) embedding methods solely learn from time-unknown fact triples but neglect the temporal information in the knowledge base. In this paper, we propose a novel time-aware KB embedding approach taking advantage of the happening time of facts. Specifically, we use temporal order constraints to model transformation between time-sensitive relations and enforce the embeddings to be temporally consistent and more accurate. We empirically evaluate our approach in two tasks of link prediction and triple classification. Experimental results show that our method outperforms other baselines on the two tasks consistently.
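To make the temporal-order idea concrete, here is a hedged PyTorch sketch of a margin loss that pushes an earlier relation's embedding, mapped through an evolution matrix, toward the later relation's embedding. All dimensions, names, and the exact scoring form are illustrative assumptions rather than the paper's published formulation.

```python
# Sketch of a temporal-order constraint: matrix M maps an earlier
# relation's embedding near a later relation's embedding, trained with
# a margin ranking loss over ordered vs. disordered relation pairs.
import torch

dim = 16
r = torch.nn.Embedding(4, dim)          # relation embeddings
M = torch.nn.Parameter(torch.eye(dim))  # temporal evolution matrix

def order_score(r_early, r_late):
    # low score = consistent temporal order (r_early @ M close to r_late)
    return (r_early @ M - r_late).abs().sum(dim=-1)

# relation 0 ("wasBornIn") should precede relation 1 ("diedIn")
pos = order_score(r(torch.tensor([0])), r(torch.tensor([1])))
neg = order_score(r(torch.tensor([1])), r(torch.tensor([0])))
loss = torch.clamp(1.0 + pos - neg, min=0).mean()  # margin ranking loss

loss.backward()
print("loss:", loss.item())
```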
false
[]
[]
null
null
null
This research is supported by National Key Basic Research Program of China (No.2014CB340504) and National Natural Science Foundation of China (No.61375074,61273318). The contact author for this paper is Baobao Chang and Zhifang Sui.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-etal-2017-mit
https://aclanthology.org/S17-2171
MIT at SemEval-2017 Task 10: Relation Extraction with Convolutional Neural Networks
Over 50 million scholarly articles have been published: they constitute a unique repository of knowledge. In particular, one may infer from them relations between scientific concepts. Artificial neural networks have recently been explored for relation extraction. In this work, we continue this line of work and present a system based on a convolutional neural network to extract relations. Our model ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
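A minimal PyTorch sketch of a CNN relation classifier in the spirit described: convolution over token embeddings, max-pooling over time, and a linear layer over relation labels. Hyperparameters and the omission of position embeddings are simplifying assumptions.

```python
# Minimal CNN relation classifier: convolution over token embeddings,
# max-pooling, then a softmax-ready linear layer over relation labels.
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    def __init__(self, vocab=1000, emb=50, filters=64, n_rel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.out = nn.Linear(filters, n_rel)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)   # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # max over time
        return self.out(x)                     # relation logits

model = RelationCNN()
logits = model(torch.randint(0, 1000, (2, 12)))  # batch of 2 sentences
print(logits.shape)  # torch.Size([2, 3])
```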
false
[]
[]
null
null
null
The authors would like to thank the ScienceIE organizers as well as the anonymous reviewers. The project was supported by Philips Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of Philips Research.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meza-ruiz-riedel-2009-jointly
https://aclanthology.org/N09-1018
Jointly Identifying Predicates, Arguments and Senses using Markov Logic
In this paper we present a Markov Logic Network for Semantic Role Labelling that jointly performs predicate identification, frame disambiguation, argument identification and argument classification for all predicates in a sentence. Empirically we find that our approach is competitive: our best model would appear on par with the best entry in the CoNLL 2008 shared task open track, and at the 4th place of the closed track-right behind the systems that use significantly better parsers to generate their input features. Moreover, we observe that by fully capturing the complete SRL pipeline in a single probabilistic model we can achieve significant improvements over more isolated systems, in particular for out-of-domain data. Finally, we show that despite the joint approach, our system is still efficient.
false
[]
[]
null
null
null
The authors are grateful to Mihai Surdeanu for providing the version of the corpus used in this work.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
doukhan-etal-2012-designing
http://www.lrec-conf.org/proceedings/lrec2012/pdf/876_Paper.pdf
Designing French Tale Corpora for Entertaining Text To Speech Synthesis
Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller. A set of 89 children tales in French serves as a basis for this work. The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales, has been recorded, aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.
false
[]
[]
null
null
null
This work has been funded by the French project GV-LEx (ANR-08-CORD-024 http://www.gvlex.com).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tang-etal-2020-syntactic
https://aclanthology.org/2020.findings-emnlp.69
Syntactic and Semantic-driven Learning for Open Information Extraction
One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora. The diversity of open domain corpora and the variety of natural language expressions further exacerbate this problem. In this paper, we propose a syntactic and semantic-driven learning approach, which can learn neural open IE models without any human-labelled data by leveraging syntactic and semantic knowledge as noisier, higher-level supervisions. Specifically, we first employ syntactic patterns as data labelling functions and pretrain a base model using the generated labels. Then we propose a syntactic and semantic-driven reinforcement learning algorithm, which can effectively generalize the base model to open situations with high accuracy. Experimental results show that our approach significantly outperforms the supervised counterparts, and can even achieve competitive performance to the supervised state-of-the-art (SoA) model.
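A toy version of a syntactic data-labelling function such as the "all verbs are labeled as P" rule the paper mentions: given POS-tagged tokens, verbs become predicates and nouns become argument tokens. The tag inventory and rules below are deliberately crude assumptions.

```python
# Sketch of a syntactic data-labelling function for open IE: verbs are
# marked as predicates (P) and nouns as argument tokens (ARG).
def label_open_ie(tagged_tokens):
    """tagged_tokens: list of (word, pos) pairs -> list of labels."""
    labels = []
    for word, pos in tagged_tokens:
        if pos.startswith("VB"):
            labels.append("P")            # verbs become predicates
        elif pos.startswith("NN"):
            labels.append("ARG")          # nouns become argument tokens
        else:
            labels.append("O")
    return labels

sent = [("Parragon", "NNP"), ("operates", "VBZ"),
        ("many", "JJ"), ("markets", "NNS")]
print(label_open_ie(sent))  # ['ARG', 'P', 'O', 'ARG']
```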
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shen-etal-2020-blank
https://aclanthology.org/2020.emnlp-main.420
Blank Language Models
We propose Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications.
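The generation loop can be sketched as follows, with a toy random policy standing in for the trained network: pick a blank, fill it with a word, optionally insert new blanks to its left or right, and stop when none remain. Everything below illustrates the control flow only, not the authors' model.

```python
# Control-flow sketch of blank filling: start from a single blank and
# repeatedly (a) pick a blank, (b) fill it with a word, (c) optionally
# insert new blanks around it, until no blanks remain (or a step cap
# is hit, in which case leftover blanks are dropped).
import random

random.seed(0)
VOCAB = ["the", "cat", "sat", "on", "mat"]

def fill_blanks(max_steps=20):
    seq = ["_"]                                # start from one blank
    for _ in range(max_steps):
        blanks = [i for i, t in enumerate(seq) if t == "_"]
        if not blanks:
            break
        i = random.choice(blanks)              # which blank to expand
        word = random.choice(VOCAB)            # word to place there
        left = ["_"] if random.random() < 0.4 else []
        right = ["_"] if random.random() < 0.4 else []
        seq[i:i + 1] = left + [word] + right   # fill and maybe re-blank
    return " ".join(t for t in seq if t != "_")

print(fill_blanks())
```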
false
[]
[]
null
null
null
We thank all reviewers and the MIT NLP group for their thoughtful feedback.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kim-riloff-2015-stacked
https://aclanthology.org/W15-3807
Stacked Generalization for Medical Concept Extraction from Clinical Notes
The goal of our research is to extract medical concepts from clinical notes containing patient information. Our research explores stacked generalization as a metalearning technique to exploit a diverse set of concept extraction models. First, we create multiple models for concept extraction using a variety of information extraction techniques, including knowledgebased, rule-based, and machine learning models. Next, we train a meta-classifier using stacked generalization with a feature set generated from the outputs of the individual classifiers. The meta-classifier learns to predict concepts based on information about the predictions of the component classifiers. Our results show that the stacked generalization learner performs better than the individual models and achieves state-of-the-art performance on the 2010 i2b2 data set.
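A compact sketch of stacked generalization using scikit-learn's StackingClassifier, which trains the meta-classifier on out-of-fold predictions of the base models. The base estimators here are generic stand-ins for the paper's knowledge-based, rule-based, and machine-learning extractors.

```python
# Stacked generalization: component classifiers' predictions become
# features for a meta-classifier trained on out-of-fold predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),  # the meta-classifier
    cv=5,  # out-of-fold predictions avoid leaking training labels
)
stack.fit(X, y)
print("stacked accuracy:", stack.score(X, y))
```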
true
[]
[]
Good Health and Well-Being
null
null
This research was supported in part by the National Science Foundation under grant IIS-1018314.
2015
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
choi-etal-1994-yanhui
https://aclanthology.org/O94-1002
Yanhui (宴會), a Software Based High Performance Mandarin Text-To-Speech System
null
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tian-etal-2014-um
http://www.lrec-conf.org/proceedings/lrec2014/pdf/774_Paper.pdf
UM-Corpus: A Large English-Chinese Parallel Corpus for Statistical Machine Translation
Parallel corpora are a valuable resource for cross-language information retrieval and data-driven natural language processing systems, especially for Statistical Machine Translation (SMT). However, most existing parallel corpora for Chinese are subject to in-house use, while others are domain specific and limited in size. To a certain degree, this limits SMT research. This paper describes the acquisition of large-scale and high-quality parallel corpora for English and Chinese. The corpora constructed in this paper contain about 15 million English-Chinese (E-C) parallel sentences, and more than 2 million sentences of training data and 5,000 testing sentences are made publicly available. Different from previous work, the corpus is designed to embrace eight different domains. Some of them are further categorized into different topics. The corpus will be released to the research community, and is available at the NLP2CT website.
false
[]
[]
null
null
null
The authors would like to thank all reviewers for the very careful reading and helpful suggestions. The authors are grateful to the Science and Technology Development Fund of Macau and the Research Committee of the University of Macau for the funding support for their research, under the
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fomicheva-etal-2016-cobaltf
https://aclanthology.org/W16-2339
CobaltF: A Fluent Metric for MT Evaluation
The vast majority of Machine Translation (MT) evaluation approaches are based on the idea that the closer the MT output is to a human reference translation, the higher its quality. While translation quality has two important aspects, adequacy and fluency, the existing reference-based metrics are largely focused on the former. In this work we combine our metric UPF-Cobalt, originally presented at the WMT15 Metrics Task, with a number of features intended to capture translation fluency. Experiments show that the integration of fluency-oriented features significantly improves the results, rivalling the best-performing evaluation metrics on the WMT15 data.
false
[]
[]
null
null
null
This work was partially funded by TUNER (TIN2015-65308-C5-5-R) and MINECO/FEDER, UE. Marina Fomicheva was supported by funding from the FI-DGR grant program of the Generalitat de Catalunya. Iria da Cunha was supported by a Ramón y Cajal contract (RYC-2014-16935). Lucia Specia was supported by the QT21 project (H2020 No. 645452).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abzaliev-2019-gap
https://aclanthology.org/W19-3816
On GAP Coreference Resolution Shared Task: Insights from the 3rd Place Solution
This paper presents the 3rd-place-winning solution to the GAP coreference resolution shared task. The approach adopted consists of two key components: fine-tuning the BERT language representation model (Devlin et al., 2018) and the usage of external datasets during the training process. The model uses hidden states from the intermediate BERT layers instead of the last layer. The resulting system almost eliminates the difference in log loss per gender during the cross-validation, while providing high performance.
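A hedged sketch of the key trick, using the Hugging Face transformers API: request all hidden states and pool an intermediate band of layers instead of taking only the last one. The specific layers averaged (4 through 8) are an assumption for illustration.

```python
# Pooling intermediate BERT layers instead of the last one.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)

inputs = tok("Mary saw her reflection.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# hidden_states: tuple of (embeddings + 12 layers), each (1, seq, 768)
middle = torch.stack(out.hidden_states[4:9]).mean(dim=0)  # layers 4-8
print(middle.shape)  # token representations from intermediate layers
```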
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
slawik-etal-2014-kit
https://aclanthology.org/2014.iwslt-evaluation.17
The KIT translation systems for IWSLT 2014
In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English→German, German→English, and English→French, as well as for the optional directions English→Chinese and English→Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural network-based translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German→English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system.
false
[]
[]
null
null
null
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n • 287658.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pedersen-etal-2010-merging
http://www.lrec-conf.org/proceedings/lrec2010/pdf/200_Paper.pdf
Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the Danish Wordnet
In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse's definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chambers-jurafsky-2011-template
https://aclanthology.org/P11-1098
Template-Based Information Extraction without the Templates
Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to hand-created gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.
false
[]
[]
null
null
null
This work was supported by the National Science Foundation IIS-0811974, and this material is also based upon work supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL). Thanks to the Stanford NLP Group and reviewers for helpful suggestions.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2015-learning-domain
https://aclanthology.org/W15-4654
Learning Domain-Independent Dialogue Policies via Ontology Parameterisation
This paper introduces a novel approach to eliminate the domain dependence of dialogue state and action representations, such that dialogue policies trained based on the proposed representation can be transferred across different domains. The experimental results show that the policy optimised in a restaurant search domain using our domain-independent representations can be deployed to a laptop sale domain, achieving a task success rate very close (96.4% relative) to that of the policy optimised on in-domain dialogues.
false
[]
[]
null
null
null
The authors would like to thank David Vandyke, Milica Gašić and Steve Young for providing the BUDS system and the simulator, as well as for their help in setting up the crowdsourcing experiments.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tatsumi-etal-2012-good
https://aclanthology.org/2012.amta-wptp.8
How Good Is Crowd Post-Editing? Its Potential and Limitations
This paper is a partial report of a research effort on evaluating the effect of crowd-sourced post-editing. We first discuss the emerging trend of crowd-sourced post-editing of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to gain more insight into how effective crowd-sourced post-editing is.
false
[]
[]
null
null
null
This project was funded by International Affairs Division at Toyohashi University of Technology, and we would like to give special thanks to all the members of International Affairs Division for their support during the project. We are also thankful to Dr. Anthony Hartley for his support on conducting the experiment.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
habash-etal-2006-challenges
https://aclanthology.org/2006.amta-papers.7
Challenges in Building an Arabic-English GHMT System with SMT Components
The research context of this paper is developing hybrid machine translation (MT) systems that exploit the advantages of linguistic rule-based and statistical MT systems. Arabic, as a morphologically rich language, is especially challenging even without addressing the hybridization question. In this paper, we describe the challenges in building an Arabic-English generation-heavy machine translation (GHMT) system and boosting it with statistical machine translation (SMT) components. We present an extensive evaluation of multiple system variants and report positive results on the advantages of hybridization.
false
[]
[]
null
null
null
This work has been supported, in part, under Army Research
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aralikatte-etal-2021-ellipsis
https://aclanthology.org/2021.eacl-main.68
Ellipsis Resolution as Question Answering: An Evaluation
Most, if not all forms of ellipsis (e.g., 'so does Mary') are similar to reading comprehension questions ('what does Mary do'), in that in order to resolve them, we need to identify an appropriate text span in the preceding discourse. Following this observation, we present an alternative approach for English ellipsis resolution relying on architectures developed for question answering (QA). We present both single-task models, and joint models trained on auxiliary QA and coreference resolution datasets, clearly outperforming the current state of the art for Sluice Ellipsis (from 70.00 to 86.01 F1) and Verb Phrase Ellipsis (from 72.89 to 78.66 F1).
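The reduction can be illustrated with an off-the-shelf extractive QA model: rephrase the ellipsis site as a question and let the model return the antecedent span. The SQuAD-tuned checkpoint below is a generic stand-in, not the authors' jointly trained system.

```python
# Ellipsis resolution cast as extractive QA: the ellipsis site is
# rephrased as a question and the model extracts the antecedent span.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = "John fixed the printer before lunch, and so did Mary."
question = "What did Mary do?"

print(qa(question=question, context=context)["answer"])
```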
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
borg-gatt-2017-morphological
https://aclanthology.org/W17-1304
Morphological Analysis for the Maltese Language: The challenges of a hybrid system
Maltese is a morphologically rich language with a hybrid morphological system which features both concatenative and non-concatenative processes. This paper analyses the impact of this hybridity on the performance of machine learning techniques for morphological labelling and clustering. In particular, we analyse a dataset of morphologically related word clusters to evaluate the difference in results for concatenative and non-concatenative clusters. We also describe research carried out in morphological labelling, with a particular focus on the verb category. Two evaluations were carried out, one using an unseen dataset, and another one using a gold standard dataset which was manually labelled. The gold standard dataset was split into concatenative and non-concatenative to analyse the difference in results between the two morphological systems.
false
[]
[]
null
null
null
The authors acknowledge the insight and expertise of Prof. Ray Fabri. The research work disclosed in this publication is partially funded by the Malta Government Scholarship Scheme grant.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2017-bibi
https://aclanthology.org/W17-5404
BIBI System Description: Building with CNNs and Breaking with Deep Reinforcement Learning
This paper describes our submission to the sentiment analysis sub-task of "Build It, Break It: The Language Edition (BIBI)", on both the builder and breaker sides. As a builder, we use convolutional neural nets, trained on both phrase and sentence data. As a breaker, we use Q-learning to learn minimal change pairs, and apply a token substitution method automatically. We analyse the results to gauge the robustness of NLP systems.
false
[]
[]
null
null
null
We would like to thank the three anonymous reviewers for their helpful feedback and suggestions, and to Meng Fang for assisting with the implementation of the RL system.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nakazawa-2015-promoting
https://aclanthology.org/2015.mtsummit-wpslt.5
Promoting science and technology exchange using machine translation
There are plenty of useful scientific and technical documents which are written in languages other than English and are referenced domestically. Accessing these domestic documents from other countries is very important in order to know what has been accomplished and what is needed next in the science and technology fields. However, we need to surmount the language barrier to directly access these valuable documents. One obvious way to achieve this is to use machine translation systems to translate foreign documents into the users' language. Even after the long history of developing machine translation systems among East Asian languages, there is still no practical system. We have launched a project to develop practical machine translation technology for promoting science and technology exchange. As the starting point, we aim at developing a practical Chinese ↔ Japanese machine translation system. In this talk, I will introduce the background, goals and status of the project. I will also summarize the 2nd Workshop on Asian Translation (WAT2015), where Chinese ↔ Japanese scientific paper translation subtasks have been carried out. Figure 1 shows the number of scientific papers in the world which are written in English. We can presume that the number of papers written in each language has a similar proportion to this graph. You can see that the number of papers from China has been growing rapidly in recent years, which means we have a large number of Chinese papers.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2015
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
summers-sawaf-2010-user
https://aclanthology.org/2010.amta-government.8
User-generated System for Critical Document Triage and Exploitation--Version 2011
null
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2019-multi-hop
https://aclanthology.org/D19-5813
Do Multi-hop Readers Dream of Reasoning Chains?
General Question Answering (QA) systems over texts require the multi-hop reasoning capability, i.e. the ability to reason with information collected from multiple passages to derive the answer. In this paper we conduct a systematic analysis to assess such an ability of various existing models proposed for multi-hop QA tasks. Specifically, our analysis investigates whether providing the full reasoning chain of multiple passages, instead of just one final passage where the answer appears, could improve the performance of the existing QA models. Surprisingly, when using the additional evidence passages, the improvements of all the existing multi-hop reading approaches are rather limited, with the highest error reduction of 5.8% on F1 (corresponding to 1.3% absolute improvement) from the BERT model. To better understand whether the reasoning chains could indeed help find correct answers, we further develop a co-matching-based method that leads to 13.1% error reduction with passage chains when applied to two of our base readers (including BERT). Our results demonstrate the existence of the potential improvement using explicit multi-hop reasoning and the necessity to develop models with better reasoning abilities. Code and data released at https://github.com/helloeve/bert-co-matching.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their very valuable comments and suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
versley-2007-antecedent
https://aclanthology.org/D07-1052
Antecedent Selection Techniques for High-Recall Coreference Resolution
We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.
false
[]
[]
null
null
null
Acknowledgements I am very grateful to Sabine Schulte im Walde, Piklu Gupta and Sandra Kübler for useful criticism of an earlier version, and to Simone Ponzetto and Michael Strube for feedback on a talk related to this paper. The research reported in this paper was supported by the Deutsche Forschungsgemeinschaft (DFG) as part of Collaborative Research Centre (Sonderforschungsbereich) 441 "Linguistic Data Structures".
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2021-probing
https://aclanthology.org/2021.naacl-main.7
Probing Word Translations in the Transformer and Trading Decoder for Encoder Layers
Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches. Previous work focuses on using or probing source linguistic features in the encoder. To date, the way word translation evolves in Transformer layers has not yet been investigated. Naively, one might assume that encoder layers capture source information while decoder layers translate. In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings. More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding. We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy. Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers, while decreasing the number of decoder layers, boosting decoding speed, without loss in translation quality? Our experiments show that this is indeed the case: we can increase speed by up to a factor 2.3 with small gains in translation quality, while an 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De) at a speed-up of 1.4.
false
[]
[]
null
null
null
We thank anonymous reviewers for their insightful comments. Hongfei Xu acknowledges the support of China Scholarship Council ([2018
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2017-n
https://aclanthology.org/W17-5907
N-gram Model for Chinese Grammatical Error Diagnosis
Detection and correction of Chinese grammatical errors have been two of major challenges for Chinese automatic grammatical error diagnosis. This paper presents an N-gram model for automatic detection and correction of Chinese grammatical errors in NLPTEA 2017 task. The experiment results show that the proposed method is good at correction of Chinese grammatical errors.
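A minimal sketch of the detection side: estimate bigram probabilities from a corpus with additive smoothing and flag positions whose bigram probability falls below a threshold. The toy corpus, smoothing constant, and threshold are assumptions.

```python
# N-gram error detection: score each position with a smoothed bigram
# language model and flag low-probability bigrams as suspect.
from collections import Counter

corpus = ["我 喜欢 学习 中文", "我 喜欢 中文", "他 学习 中文"]
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split()
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

def bigram_prob(a, b, alpha=0.1, v=None):
    v = v or len(unigrams)  # vocabulary size for additive smoothing
    return (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v)

def flag_errors(sentence, threshold=0.05):
    toks = ["<s>"] + sentence.split()
    return [(a, b) for a, b in zip(toks, toks[1:])
            if bigram_prob(a, b) < threshold]

print(flag_errors("我 学习 喜欢 中文"))  # unusual order -> low-prob bigrams
```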
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2020-low
https://aclanthology.org/2020.ccl-1.92
Low-Resource Text Classification via Cross-lingual Language Model Fine-tuning
Text classification tends to be difficult when data are inadequate considering the amount of manually labeled text corpora. For low-resource agglutinative languages including Uyghur, Kazakh, and Kyrgyz (UKK languages), in which words are formed by stems concatenated with several suffixes and stems are used as the representation of text content, this feature allows an infinite derivative vocabulary, which leads to high uncertainty of writing forms and huge redundant features. The major challenges of low-resource agglutinative text classification are the lack of labeled data in a target domain and the morphological diversity of derivations in language structures. Fine-tuning a pre-trained language model is an effective solution, as it provides meaningful and easy-to-use feature extractors for downstream text classification tasks. To this end, we propose AgglutiFiT, a low-resource agglutinative language model fine-tuning approach. Specifically, we build a low-noise fine-tuning dataset by morphological analysis and stem extraction, then fine-tune the cross-lingual pre-trained model on this dataset. Moreover, we propose an attention-based fine-tuning strategy that better selects relevant semantic and syntactic information from the pre-trained language model and uses those features on downstream text classification tasks. We evaluate our methods on nine Uyghur, Kazakh, and Kyrgyz classification datasets, where they have significantly better performance compared with several strong baselines.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yeh-etal-2016-grammatical
https://aclanthology.org/W16-4918
Grammatical Error Detection Based on Machine Learning for Mandarin as Second Language Learning
Mandarin is not a simple language for foreigners to learn. Even native speakers have to spend a long time learning it as children. The following issues are the reasons for these learning difficulties. First, Mandarin words are built from hieroglyphic characters, so a character can express a meaning independently, but as part of a word it can take on a different semantics. Second, Mandarin grammar has flexible rules and special usages. The common grammatical errors can therefore be classified into missing, redundant, selection and disorder types. In this paper, we propose a Recurrent Neural Network structure using Long Short-Term Memory (RNN-LSTM) that can detect the error types in foreign learners' writing. The features are based on word vectors and part-of-speech vectors. On the test data we found that our method achieves better recall at the detection level than the others, as high as 0.9755. That is because we allow greater flexibility when detecting errors.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gildea-etal-2006-factoring
https://aclanthology.org/P06-2036
Factoring Synchronous Grammars by Sorting
Synchronous Context-Free Grammars (SCFGs) have been successfully exploited as translation models in machine translation applications. When parsing with an SCFG, computational complexity grows exponentially with the length of the rules, in the worst case. In this paper we examine the problem of factorizing each rule of an input SCFG to a generatively equivalent set of rules, each having the smallest possible length. Our algorithm works in time O(n log n), for each rule of length n. This improves upon previous results and solves an open problem about recognizing permutations that can be factored.
false
[]
[]
null
null
null
Acknowledgments This work was partially supported by NSF ITR IIS-09325646 and NSF ITR IIS-0428020.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
han-etal-2004-subcategorization
https://aclanthology.org/C04-1104
Subcategorization Acquisition and Evaluation for Chinese Verbs
This paper describes the technology and an experiment of subcategorization acquisition for Chinese verbs. The SCF hypotheses are generated by means of linguistic heuristic information and filtered via statistical methods. Evaluation on the acquisition of 20 multi-pattern verbs shows that our experiment achieved precision and recall similar to previous research. Besides, a simple application of the acquired lexicon to a PCFG parser indicates the great potential of subcategorization information in the field of NLP.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
papay-etal-2018-addressing
https://aclanthology.org/W18-1204
Addressing Low-Resource Scenarios with Character-aware Embeddings
Most modern approaches to computing word embeddings assume the availability of text corpora with billions of words. In this paper, we explore a setup where only corpora with millions of words are available, and many words in any new text are out of vocabulary. This setup is both of practical interest (modeling the situation for specific domains and low-resource languages) and of psycholinguistic interest, since it corresponds much more closely to the actual experiences and challenges of human language learning and use. We evaluate skip-gram word embeddings and two types of character-based embeddings on word relatedness prediction. On large corpora, performance of both model types is equal for frequent words, but character awareness already helps for infrequent words. Consistently, on small corpora, the character-based models perform better overall than skip-grams. The concatenation of different embeddings performs best on small corpora and robustly on large corpora.
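One common way to make embeddings character-aware, sketched here under fastText-style assumptions: hash a word's character n-grams into a fixed table and average the vectors, so unseen inflections still receive representations close to their stems'. The table size, n-gram range, and use of Python's built-in hash are illustrative choices (string hashing varies across interpreter runs, which is harmless within one run).

```python
# Character-aware embedding for out-of-vocabulary words: hash the
# word's character n-grams into a small table and average the vectors.
import numpy as np

rng = np.random.default_rng(0)
TABLE = rng.normal(size=(10_000, 50))  # hashed n-gram vectors

def char_ngrams(word, n_min=3, n_max=5):
    w = f"<{word}>"  # boundary markers, fastText-style
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def embed(word):
    grams = char_ngrams(word)
    idx = [hash(g) % TABLE.shape[0] for g in grams]
    return TABLE[idx].mean(axis=0)

# an unseen inflection still gets a vector close to its stem's
a, b = embed("walking"), embed("walked")
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```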
false
[]
[]
null
null
null
Acknowledgments. Partial funding for this study was provided by Deutsche Forschungsgemeinschaft (project PA 1956/4-1).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
georgi-etal-2015-enriching
https://aclanthology.org/W15-3709
Enriching Interlinear Text using Automatically Constructed Annotators
In this paper, we will demonstrate a system that shows great promise for creating Part-of-Speech taggers for languages with little to no curated resources available, and which needs no expert involvement. Interlinear Glossed Text (IGT) is a resource which is available for over 1,000 languages as part of the Online Database of INterlinear text (ODIN) (Lewis and Xia, 2010). Using nothing more than IGT from this database and a classification-based projection approach tailored for IGT, we will show that it is feasible to train reasonably performing annotators of interlinear text using projected annotations for potentially hundreds of the world's languages. Doing so can facilitate automatic enrichment of interlinear resources to aid the field of linguistics.
false
[]
[]
null
null
null
This work is supported by the National Science Foundation Grant BCS-0748919. We would also like to thank Balthasar Bickel and his team for allowing us to use the Chintang data set in our experiments, and our three anonymous reviewers for the helpful feedback.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wahlster-etal-1978-glancing
https://aclanthology.org/J78-3004
Glancing, Referring and Explaining in the Dialogue System HAM-RPM
This paper focuses on three components of the dialogue system HAM-RPM, which converses in natural language about visible scenes. First, it is demonstrated how the system's communicative competence is enhanced by its imitation of human visual-search processes. The approach taken to noun-phrase resolution is then described, and an algorithm for the generation of noun phrases is illustrated with a series of examples. Finally, the system's ability to explain its own reasoning is discussed, with emphasis on the novel aspects of its implementation.
false
[]
[]
null
null
null
null
1978
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
molins-lapalme-2015-jsrealb
https://aclanthology.org/W15-4719
JSrealB: A Bilingual Text Realizer for Web Programming
JSrealB is an English and French text realizer written in JavaScript to ease its integration in web applications. The realization engine is mainly rule-based: table-driven rules are defined for inflection, and algorithmic propagation rules for agreement. It allows its user to build a variety of French and English expressions and sentences from a single specification to produce dynamic output depending on the content of a web page.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
steedman-2013-robust
https://aclanthology.org/U13-1001
Robust Computational Semantics
Practical tasks like question answering and machine translation ultimately require computing meaning representations that support inference. Standard linguistic accounts of meaning are impracticable for such purposes, both because they assume nonmonotonic operations such as quantifier movement, and because they lack a representation for the meaning of content words that supports efficient computation of entailment. I'll discuss practical solutions to some of these problems within a near-context-free grammar formalism for a working wide-coverage parser, in current work with Mike Lewis, and show how these solutions can be usefully applied in NLP tasks.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
coto-solano-etal-2021-towards
https://aclanthology.org/2021.udw-1.2
Towards Universal Dependencies for Bribri
This paper presents a first attempt to apply Universal Dependencies (Nivre et al., 2016; de Marneffe et al., 2021) to Bribri, an Indigenous language from Costa Rica belonging to the Chibchan family. There is limited previous work on Bribri NLP, so we also present a proposal for a dependency parser, as well as a listing of structures that were challenging to parse (e.g. flexible word order, verbal sequences, arguments of intransitive verbs and mismatches between the tense systems of Bribri and UD). We also list some of the challenges in performing NLP with an extremely low-resource Indigenous language, including issues with tokenization, data normalization and the training of tools like POS taggers which are necessary for the parsing. In total we collected 150 sentences (760 words) from publicly available sources like grammar books and corpora. We then used a context-free grammar for the initial parse, and then applied the head-floating algorithm in Xia and Palmer (2001) to automatically generate dependency parses. This work is a first step towards building a UD treebank for Bribri, and we hope to use this tool to improve the documentation of the language and develop language-learning materials and NLP tools like chatbots and question-answering systems.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rhyne-2020-reconciling
https://aclanthology.org/2020.scil-1.51
Reconciling historical data and modern computational models in corpus creation
We live in a time of unprecedented access to linguistic data, from audio recordings to corpora of billions of words. Linguists have used these resources to advance their research and understanding of language. Historical linguistics, despite being the oldest linguistic subfield, has lagged behind in this regard. However, this is due to several unique challenges that face the subfield. Historical data is plagued by two problems: a lack of overall data due to the ravages of time and a lack of model-ready data that have gone through standard NLP processing. Barring the discovery of more texts, the former issue cannot be solved; the latter can, though it is time-consuming and resourceintensive. These problems have only begun to be addressed for well-documented language families like Indo-European, but even within these progress is slow. There have been numerous advances in synchronic models for basic NLP tasks like POS and morphological tagging. However, modern models are not designed to work with historical data: they depend on large volumes of data and pretagged training sets that are not available for the majority of historical languages. Some have found success with methods that are designed to imitate traditional historical approaches, e.g. (Bouchard-Côté et al., 2013; McMahon and McMahon, 2003; Nakleh et al., 2005), but, if we intend to use stateof-the-art computational tools, they are essentially incompatible. This is an important challenge that computational historical linguists must address if they are going to meet the standards set by both modern corpora and historical analyses. This paper approaches the issue by treating historical data in the same way as a low-resource language (Fang
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
surdeanu-etal-2015-two
https://aclanthology.org/N15-3001
Two Practical Rhetorical Structure Theory Parsers
We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent- and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.
false
[]
[]
null
null
null
This work was funded by the DARPA Big Mechanism program under ARO contract W911NF-14-1-0395.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cercas-curry-etal-2021-convabuse
https://aclanthology.org/2021.emnlp-main.587
ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI
We present the first English corpus study on abusive language towards three conversational AI systems gathered 'in the wild': an open-domain social bot, a rule-based chatbot, and a task-based system. To account for the complexity of the task, we take a more 'nuanced' approach where our ConvAI dataset reflects fine-grained notions of abuse, as well as views from multiple expert annotators. We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems. Finally, we report results from benchmarking existing models against this data. Unsurprisingly, we find that there is substantial room for improvement with F1 scores below 90%. Warning: This paper contains examples of language that some people may find offensive or upsetting.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research received funding from the EPSRC project 'Designing Conversational Assistants to Reduce Gender Bias' (EP/T023767/1). The authors would like to thank Juules Bare, Lottie Basil, Susana Demelas, Maina Flintham Hjelde, Lauren Galligan, Lucile Logan, Megan McElhone, Mollie McLean and the reviewers for their helpful comments.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
hakkani-tur-2015-keynote
https://aclanthology.org/W15-4628
Keynote: Graph-based Approaches for Spoken Language Understanding
Following an upsurge in mobile device usage and improvements in speech recognition performance, multiple virtual personal assistant systems have emerged, and have been widely adopted by users. While these assistants proved to be beneficial, their usage has been limited to certain scenarios and domains, with underlying language understanding models that have been finely tuned by their builders.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bak-oh-2019-variational
https://aclanthology.org/D19-1202
Variational Hierarchical User-based Conversation Model
Generating appropriate conversation responses requires careful modeling of the utterances and speakers together. Some recent approaches to response generation model both the utterances and the speakers, but these approaches tend to generate responses that are overly tailored to the speakers. To overcome this limitation, we propose a new model with a stochastic variable designed to capture the speaker information and deliver it to the conversational context. An important part of this model is the network of speakers in which each speaker is connected to one or more conversational partners, and this network is then used to model the speakers better. To test whether our model generates more appropriate conversation responses, we build a new conversation corpus containing approximately 27,000 speakers and 770,000 conversations. With this corpus, we run experiments of generating conversational responses and compare our model with other state-of-the-art models. By automatic evaluation metrics and human evaluation, we show that our model outperforms other models in generating appropriate responses. An additional advantage of our model is that it generates better responses for various new user scenarios, for example when one of the speakers is a known user in our corpus but the partner is a new user. For replicability, we make available all our code and data.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for helpful questions and comments. This work was supported by IITP grant funded by the Korea government (MSIT) (No.2017-0-01779, XAI).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yuan-2006-language
https://aclanthology.org/Y06-1056
Language Model Based on Word Clustering
Category-based statistical language models are an important method for solving the problem of sparse data. But there are two bottlenecks in this model: (1) the problem of word clustering: it is hard to find a suitable clustering method that has good performance without a large amount of computation; (2) class-based methods always lose some prediction ability when adapting to text from different domains. The authors try to solve the above problems in this paper. This paper presents a definition of word similarity by utilizing mutual information. Based on word similarity, this paper gives the definition of word-set similarity. Experiments show that the word clustering algorithm based on similarity is better than the conventional greedy clustering method in speed and performance. At the same time, this paper presents a new method to create the vari-gram model.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dingli-etal-2003-mining
https://aclanthology.org/E03-1011
Mining Web Sites Using Unsupervised Adaptive Information Extraction
Adaptive Information Extraction systems (IES) are currently used by some Semantic Web (SW) annotation tools as support to annotation (Handschuh et al., 2002; Vargas-Vera et al., 2002). They are generally based on fully supervised methodologies requiring fairly intense domain-specific annotation. Unfortunately, selecting representative examples may be difficult and annotations can be incorrect and require time. In this paper we present a methodology that drastically reduces (or even removes) the amount of manual annotation required when annotating consistent sets of pages. A very limited number of user-defined examples are used to bootstrap learning. Simple, high precision (and possibly high recall) IE patterns are induced using such examples; these patterns will then discover more examples, which will in turn discover more patterns, etc.
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
palakurthi-etal-2015-classification
https://aclanthology.org/R15-1065
Classification of Attributes in a Natural Language Query into Different SQL Clauses
Attribute information in a natural language query is one of the key features for converting a natural language query into a Structured Query Language (SQL) query in Natural Language Interface to Database systems. In this paper, we explore the task of classifying the attributes present in a natural language query into different SQL clauses in a SQL query. In particular, we investigate the effectiveness of various features and Conditional Random Fields for this task. Our system uses a statistical classifier trained on manually prepared data. We report our results on three different domains and also show how our system can be used for generating a complete SQL query.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for the valuable feedback on this work. This research was supported in part by the Information Technology Research Academy (ITRA), Government of India under ITRA-Mobile grant ITRA/15(62)/Mobile/VAMD/01. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the ITRA.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ishiwatari-etal-2019-learning
https://aclanthology.org/N19-1350
Learning to Describe Unknown Phrases with Local and Global Contexts
When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation (Ni and Wang, 2017) and definition generation (Noraset et al., 2017; Gadetsky et al., 2018), our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.
false
[]
[]
null
null
null
The authors are grateful to Thanapon Noraset for sharing the details of his implementation of the previous work. We also thank the anonymous reviewers for their careful reading of our paper and insightful comments, and the members of Kitsuregawa-Toyoda-Nemoto-Yoshinaga-Goda laboratory in the University of Tokyo for proofreading the draft. This work was partially supported by Grant-in-Aid for JSPS Fellows (Grant Number 17J06394) and Commissioned Research (201) of the National Institute of Information and Communications Technology of Japan.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-styler-2013-anafora
https://aclanthology.org/N13-3004
Anafora: A Web-based General Purpose Annotation Tool
Anafora is a newly-developed open source web-based text annotation tool built to be lightweight, flexible, easy to use and capable of annotating with a variety of schemas, simple and complex. Anafora allows secure web-based annotation of any plaintext file with both spanned (e.g. named entity or markable) and relation annotations, as well as adjudication for both types of annotation. Anafora offers automatic set assignment and progress-tracking, centralized and humaneditable XML annotation schemas, and filebased storage and organization of data in a human-readable single-file XML format.
false
[]
[]
null
null
null
The development of this annotation tool was supported by award numbers NLM R0110090 (THYME) and 90TR002 (SHARP), as well as DARPA FA8750-09-C-0179 (via BBN) Machine Reading. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NLM/NIH or DARPA. We would also like to especially thank Jinho Choi for his input on the data format, schemas, and UI/UX.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gracia-etal-2014-enabling
http://www.lrec-conf.org/proceedings/lrec2014/pdf/863_Paper.pdf
Enabling Language Resources to Expose Translations as Linked Data on the Web
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries might be of great interest for many NLP-based applications. By using Semantic Web-based techniques, translations can be made available on the Web to be consumed by other (semantic-enabled) resources in a direct manner, not relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and software agents (with a SPARQL endpoint).
false
[]
[]
null
null
null
We are very thankful to AETER and AENOR for making Terminesp data available. We also thank Javier Bezos, from FUNDEU, for his assistance with the data. Some ideas contained in this paper were inspired after fruitful discussions with other members of the W3C Ontology-Lexica community group. This work is supported by the FP7 European
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false