Dataset schema (column, type, details):

ID               string    lengths 11-54
url              string    lengths 33-64
title            string    lengths 11-184
abstract         string    lengths 17-3.87k
label_nlp4sg     bool      2 classes
task             sequence
method           sequence
goal1            string    9 distinct values
goal2            string    9 distinct values
goal3            string    1 distinct value
acknowledgments  string    lengths 28-1.28k
year             string    length 4
sdg1             bool      1 class
sdg2             bool      1 class
sdg3             bool      2 classes
sdg4             bool      2 classes
sdg5             bool      2 classes
sdg6             bool      1 class
sdg7             bool      1 class
sdg8             bool      2 classes
sdg9             bool      2 classes
sdg10            bool      2 classes
sdg11            bool      2 classes
sdg12            bool      1 class
sdg13            bool      2 classes
sdg14            bool      1 class
sdg15            bool      1 class
sdg16            bool      2 classes
sdg17            bool      2 classes
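The schema above maps directly onto a dataframe. Below is a minimal loading-and-filtering sketch in Python; the file name nlp4sg_papers.jsonl is a placeholder for however the records are actually exported and is not specified by this card.

```python
# Minimal sketch: load the records and inspect the columns described above.
# The file name "nlp4sg_papers.jsonl" is an assumed local JSON-lines export.
import pandas as pd

df = pd.read_json("nlp4sg_papers.jsonl", lines=True)

# Boolean NLP4SG label plus the 17 SDG indicator columns from the schema.
sdg_cols = [f"sdg{i}" for i in range(1, 18)]

# Papers flagged as NLP-for-social-good, with the SDGs they touch.
sg = df[df["label_nlp4sg"]]
sdg_counts = sg[sdg_cols].sum().sort_values(ascending=False)
print(sdg_counts.head())

# Example: papers whose primary goal is "Good Health and Well-Being".
health = df[df["goal1"] == "Good Health and Well-Being"]
print(health[["ID", "title", "year"]].head())
```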
zhu-etal-2021-neural
https://aclanthology.org/2021.acl-long.339
Neural Stylistic Response Generation with Disentangled Latent Variables
Generating open-domain conversational responses in a desired style usually suffers from the lack of parallel data in that style. Meanwhile, using monolingual stylistic data to increase style intensity often comes at the expense of content relevance. In this paper, we propose to disentangle content and style in latent space by diluting sentence-level information in the style representations. Combining the desired style representation with a response content representation then yields a stylistic response. Our approach achieves a higher BERT-based style intensity score and comparable BLEU scores compared with baselines. Human evaluation results show that our approach significantly improves style intensity while maintaining content relevance.
false
[]
[]
null
null
null
The authors would like to thank all the anonymous reviewers for their insightful comments. The authors from HIT are supported by the National Natural Science Foundation of China (No. 62076081, No. 61772153, and No. 61936010) and Science and Technology Innovation 2030 Major Project of China (No. 2020AAA0108605). The author from UCSB is not supported by any of the projects above.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cybulska-vossen-2013-semantic
https://aclanthology.org/R13-1021
Semantic Relations between Events and their Time, Locations and Participants for Event Coreference Resolution
In this study, we measure the contribution of different event components and particular semantic relations to the task of event coreference resolution. First we calculate what event times, locations and participants add to event coreference resolution. Secondly, we analyze the contribution by hyponymy and granularity within the participant component. Coreference of events is then calculated from the coreference match scores of each event component. Coreferent action candidates are accordingly filtered based on compatibility of their time, locations, or participants. We report the success rates of our experiments on a corpus annotated with coreferent events.
false
[]
[]
null
null
null
This study is part of the Semantics of History research project at the VU University Amsterdam and the European FP7 project NewsReader (316404). The authors are grateful to the anonymous reviewers as well as the generous support of the Network Institute of the VU University Amsterdam. All errors are our own.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
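A minimal sketch of the kind of component-wise scoring described in the cybulska-vossen-2013-semantic abstract above: per-component match scores for time, location, and participants are combined, and candidate pairs with incompatible filled slots are filtered out. The weights, threshold, and match functions are invented for illustration and are not the paper's actual formulation.

```python
# Illustrative event-coreference scoring sketch; all numbers are placeholders.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Event:
    action: str
    time: str | None
    location: str | None
    participants: frozenset[str]

def component_score(a, b) -> float:
    """Crude exact-match score; None means the slot is unfilled."""
    if a is None or b is None:
        return 0.5          # unknown: neither evidence for nor against
    return 1.0 if a == b else 0.0

def coreferent(e1: Event, e2: Event, threshold: float = 0.6) -> bool:
    # Compatibility filter: conflicting filled times/locations rule coreference out.
    for a, b in [(e1.time, e2.time), (e1.location, e2.location)]:
        if a is not None and b is not None and a != b:
            return False
    scores = {
        "action": 1.0 if e1.action == e2.action else 0.0,
        "time": component_score(e1.time, e2.time),
        "location": component_score(e1.location, e2.location),
        "participants": (len(e1.participants & e2.participants) /
                         max(1, len(e1.participants | e2.participants))),
    }
    weights = {"action": 0.4, "time": 0.2, "location": 0.2, "participants": 0.2}
    return sum(weights[k] * scores[k] for k in scores) >= threshold

a = Event("attack", "2004-11-08", "Falluja", frozenset({"US forces"}))
b = Event("attack", None, "Falluja", frozenset({"US forces", "insurgents"}))
print(coreferent(a, b))
```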
peldszus-2014-towards
https://aclanthology.org/W14-2112
Towards segment-based recognition of argumentation structure in short texts
Despite recent advances in discourse parsing and causality detection, the automatic recognition of the argumentation structure of authentic texts is still a very challenging task. To approach this problem, we collected a small corpus of German microtexts in a text generation experiment, resulting in texts that are authentic but of controlled linguistic and rhetorical complexity. We show that trained annotators can determine the argumentation structure of these microtexts reliably. We experiment with different machine learning approaches for automatic argumentation structure recognition at various levels of granularity of the scheme. Given the complex nature of such discourse understanding tasks, the first results presented here are promising and invite further investigation.
false
[]
[]
null
null
null
Thanks to Manfred Stede and to the anonymous reviewers for their helpful comments. The author was supported by a grant from Cusanuswerk.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
girju-etal-2007-semeval
https://aclanthology.org/S07-1003
SemEval-2007 Task 04: Classification of Semantic Relations between Nominals
The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of relations between pairs of words in a text. We present an evaluation task designed to provide a framework for comparing different approaches to classifying semantic relations between nominals in a sentence. This is part of SemEval, the 4th edition of the semantic evaluation event previously known as SensEval. We define the task, describe the training/test data and their creation, list the participating systems and discuss their results. There were 14 teams who submitted 15 systems.
false
[]
[]
null
null
null
We thank Eneko Agirre, Lluís Màrquez and Richard Wicentowski, the organizers of SemEval 2007, for their guidance and prompt support in all organizational matters. We thank Marti Hearst for valuable advice throughout the task description and debates on semantic relation definitions. We thank the anonymous reviewers for their helpful comments.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-der-goot-etal-2021-multilexnorm
https://aclanthology.org/2021.wnut-1.55
MultiLexNorm: A Shared Task on Multilingual Lexical Normalization
Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical for social media, on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, a common benchmark for comparing systems across languages with a homogeneous data and evaluation setup has been lacking. The MultiLexNorm shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark, including 12 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-of-speech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task, hosted at W-NUT 2021, attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system.
false
[]
[]
null
null
null
B.M. was funded by the French Research Agency via the ANR ParSiTi project (ANR-16-CE33-0021).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
einolghozati-etal-2021-el
https://aclanthology.org/2021.eacl-main.87
El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing
Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only an English corpus, alongside either zero or a few CS training instances, is available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align, and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two thirds.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
taji-etal-2017-universal
https://aclanthology.org/W17-1320
Universal Dependencies for Arabic
We describe the process of creating NUDAR, a Universal Dependency treebank for Arabic. We present the conversion from the Penn Arabic Treebank to the Universal Dependency syntactic representation through an intermediate dependency representation. We discuss the challenges faced in the conversion of the trees, the decisions we made to solve them, and the validation of our conversion. We also present initial parsing results on NUDAR.
false
[]
[]
null
null
null
The work done by the third author was supported by the grant 15-10472S of the Czech Science Foundation.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
habash-2012-mt
https://aclanthology.org/2012.amta-tutorials.3
MT and Arabic Language Issues
… and they produce a ranked list of translations in the target language. Popular decoders: Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), Joshua (Li et al., 2009), Portage (Sadat et al., 2005) and others. BLEU (Papineni et al., 2001), the BiLingual Evaluation Understudy: modified n-gram precision with a length penalty; quick, inexpensive and language-independent; biased against synonyms and inflectional variations; the most commonly used MT metric and the official metric of the NIST Open MT Evaluation.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
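A minimal sketch of the modified n-gram precision with brevity penalty that the habash-2012-mt abstract above attributes to BLEU. This is a simplified, single-reference version for illustration only, not a substitute for a standard implementation such as sacreBLEU.

```python
# Simplified single-reference BLEU sketch: clipped n-gram precision plus a
# brevity penalty; smoothing avoids log(0) when higher-order overlap is zero.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())  # clipped counts
        total = max(1, sum(cand.values()))
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(1, len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat".split(), "the cat is on the mat".split()))
```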
arthern-1978-machine
https://aclanthology.org/1978.tc-1.5
Machine translation and computerised terminology systems - a translator's viewpoint
Whether these criticisms were valid or not, machine translation development in the States was cut back immediately, translators heaved a sigh of relief, and machine translation researchers went underground. As we have already heard this morning, however, they are now coming out into the open again and translators are asking the same question once more. The short answer is that no translator working now is going to lose his or her job in the next five years because of machine translation, and probably never will. Machine translation systems which are now operating are either limited in their scope, such as the Canadian "METEO" system which translates weather forecasts from English into French, or the CULT system which we are to hear about this afternoon, or cannot provide translations of generally acceptable quality without extensive revision, or "post-editing". In addition, machine translation systems are expensive to develop and can only pay their way by translating large amounts of material. Another bar to using machine translation in small-scale operations is the variety of work, and therefore the variety of terminology involved. If a word is not in the machine's dictionary it just won't be translated, and if a translator has to spend time looking up terms and inserting them in a translation full of gaps, any economic benefit of machine translation will be lost. Consequently, as things stand at present most freelance translators and staff translators in small firms are unlikely to come into direct contact with machine translation, or to suffer from competition from machine translation. Competition would only come from the possible use of machine translation by large commercial agencies. It would be felt first either in very general areas, or in very specialized areas with a clearly delimited vocabulary and standardized phraseology; in both cases, perhaps, in order to have a quick, cheap translation to get the gist of a text, or to decide whether to have it translated by a translator. A final thought in this connection is that both freelances and small firms might conceivably buy raw machine translation from a large agency and post-edit it themselves. This would constitute a particular form of "interactive" machine translation, and would only be worth attempting if the time taken in post-editing to an acceptable standard was less than the time required to translate the text from scratch. While the size and complexity of machine translation operations mean that freelances and translators in small firms are unlikely to become directly involved with it, some translators and revisers in the Commission of the European Communities have already done so.
false
[]
[]
null
null
null
null
1978
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tiedemann-thottingal-2020-opus
https://aclanthology.org/2020.eamt-1.61
OPUS-MT -- Building open translation services for the World
Equality among people requires, among other things, the ability to access information in the same way as others, independent of the linguistic background of the individual user. Achieving this goal becomes an even more important challenge in a globalized world, with digital channels and information flows being the most decisive factor in our integration into modern societies. Language barriers can lead to severe disadvantages and discrimination, not to mention conflicts caused by simple misunderstandings based on broken communication. Linguistic discrimination leads to frustration, isolation and racism, and the lack of technological language support may also cause what is known as digital language death (Kornai, 2013).
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grabski-etal-2012-controle
https://aclanthology.org/F12-1037
Contr\^ole pr\'edictif et codage du but des actions oro-faciales (Predictive control and coding of orofacial actions) [in French]
Recent studies provide evidence for action goal coding of manual actions in premotor and posterior parietal cortices. To further extend these results, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging during repeated orofacial movements (lip protrusion, jaw lowering and tongue retraction movements). In the motor domain, this adaptation paradigm refers to decreased activity in specific neural populations due to repeated motor acts and has been proposed to reflect sensorimotor learning and reduced prediction errors by means of forward motor-to-sensory predictive processes. In the present study, orofacial movements activated a set of largely overlapping, common brain areas forming a core neural network classically involved in orofacial motor control. Crucially, suppressed neural responses during repeated orofacial actions were specifically observed in the left hemisphere, within the intraparietal sulcus and adjacent inferior parietal lobule, the superior parietal lobule and the ventral premotor cortex. These results provide evidence for action goal coding and forward motor-to-somatosensory predictive control of intransitive and silent orofacial actions in this frontoparietal circuit. (Rizzolatti et al., 1988; Fogassi et al., 2005; Bonnini et al., 2011). In humans, functional magnetic resonance imaging (fMRI) has recently been used together with an adaptation paradigm to dissociate the neural substrates linked to different levels of representation of manual actions. This fMRI adaptation paradigm relies on a repetition suppression (RS) effect, a reduction of the BOLD (blood oxygen level-dependent) signal in brain regions specifically related to different levels of processing of a perceived or produced action, during the presentation of stimuli or the execution of a repeated motor act (Grill-Spector & Malach, 2001; Grill-Spector et al., 2006). In agreement with studies on non-human primates, this approach has revealed that repeated manual actions with a similar goal induce an RS effect in the intraparietal sulcus and the adjacent dorsal part of the inferior parietal lobule, as well as in the inferior frontal gyrus and the adjacent ventral premotor cortex (Dinstein et al., 2007; Hamilton & Grafton, 2009; Kilner et al., 2009). Although discussed in terms of action goal coding, a convergent interpretation of the RS effect in these parietal and premotor areas rests on the existence of sensorimotor predictive processes. These processes would make it possible to compare the sensory consequences of a performed action with the exogenous information actually perceived and, from there, to estimate possible errors so as to correct the motor act online (Wolpert, Ghahramani & Jordan, 1995; Kawato, 1999; Friston, 2011). Within this framework, and with respect to the fMRI studies cited above, it is possible that the repetition of manual motor acts involving the same goal led to gradual sensorimotor learning and to updates of the motor representations linked to action goal coding in the parietal and inferior frontal areas, with reduced prediction errors reflected by a decrease in the BOLD signal.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ostendorff-etal-2020-aspect
https://aclanthology.org/2020.coling-main.545
Aspect-based Document Similarity for Research Papers
Traditional document similarity measures provide a coarse-grained distinction between similar and dissimilar documents. Typically, they do not consider in what aspects two documents are similar. This limits the granularity of applications like recommender systems that rely on document similarity. In this paper, we extend similarity with aspect information by performing a pairwise document classification task. We evaluate our aspect-based document similarity approach for research papers. Paper citations indicate the aspect-based similarity, i.e., the title of the section in which a citation occurs acts as a label for the pair of citing and cited papers. We apply a series of Transformer models such as RoBERTa, ELECTRA, XLNet, and BERT variations and compare them to an LSTM baseline. We perform our experiments on two newly constructed datasets of 172,073 research paper pairs from the ACL Anthology and CORD-19 corpus. According to our results, SciBERT is the best performing system with F1-scores of up to 0.83. A qualitative analysis validates our quantitative results and indicates that aspect-based document similarity indeed leads to more fine-grained recommendations.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We would like to thank all reviewers and Christoph Alt for their comments and valuable feedback. The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, no. 03WKDA1A).
2020
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
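A sketch of the pairwise document classification setup described in the ostendorff-etal-2020-aspect abstract above, assuming the Hugging Face transformers library. The SciBERT checkpoint name is real, but the three section labels are illustrative and the classification head below is untrained; this only shows the sequence-pair input format, not the paper's trained system.

```python
# Pairwise (citing, cited) classification sketch with SciBERT; labels and
# example titles are illustrative, and the classification head is untrained.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["introduction", "methods", "results"]          # illustrative label set
name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(labels))

citing = "Aspect-based Document Similarity for Research Papers"
cited = "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"

# The two titles/abstracts are encoded as a single sequence pair.
enc = tokenizer(citing, cited, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
print(labels[int(logits.argmax(dim=-1))])   # arbitrary until the head is fine-tuned
```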
lu-etal-2016-joint
https://aclanthology.org/C16-1308
Joint Inference for Event Coreference Resolution
Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs. We hypothesize that exploiting the inter-dependencies between these components can significantly improve the performance of an event coreference resolver, and subsequently propose a novel joint inference based event coreference resolver using Markov Logic Networks (MLNs). However, the rich features that are important for this task are typically very hard to explicitly encode as MLN formulas since they significantly increase the size of the MLN, thereby making joint inference and learning infeasible. To address this problem, we propose a novel solution where we implicitly encode rich features into our model by augmenting the MLN distribution with low dimensional unit clauses. Our approach achieves state-of-the-art results on two standard evaluation corpora.
false
[]
[]
null
null
null
We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1219142 and IIS-1528037, and by the DARPA PPAML Program under AFRL prime contract number FA8750-14-C-0005. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF, DARPA and AFRL.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-etal-2010-exploiting
https://aclanthology.org/W10-3015
Exploiting Multi-Features to Detect Hedges and their Scope in Biomedical Texts
In this paper, we present a machine learning approach that detects hedge cues and their scope in biomedical texts. Identifying hedged information in texts is a kind of semantic filtering and is important because it separates speculative information from factual information. To address this semantic analysis problem, various evidential features are proposed and integrated through a Conditional Random Fields (CRFs) model. Hedge cues that appear in the training dataset are regarded as keywords and employed as an important feature in the hedge cue identification system. For scope finding, we construct a CRF-based system and a syntactic pattern-based system, and compare their performances. Experiments using test data from the CoNLL-2010 shared task show that our proposed method is robust. The F-scores of the biological hedge detection task and the scope finding task reach 86.32% and 54.18%, respectively, in in-domain evaluations.
true
[]
[]
Good Health and Well-Being
null
null
null
2010
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
knauth-alfter-2014-dictionary
https://aclanthology.org/W14-5509
A Dictionary Data Processing Environment and Its Application in Algorithmic Processing of Pali Dictionary Data for Future NLP Tasks
This paper presents a highly flexible infrastructure for processing digitized dictionaries that can be used to build NLP tools in the future. This infrastructure is especially suitable for low-resource languages where some digitized information is available but not (yet) suitable for algorithmic use. It allows researchers to do at least some processing in an algorithmic way using the full power of the C# programming language, reducing the effort of manual editing of the data. To test this in practice, the paper describes the processing steps taken, making use of this infrastructure, to identify word classes and cross-references in the dictionary of Pali in the context of the SeNeReKo project. We also conduct an experiment to make use of this data and show the importance of the dictionary. This paper presents the experiences and results of the selected approach.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dilsizian-etal-2014-new
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1138_Paper.pdf
A New Framework for Sign Language Recognition based on 3D Handshape Identification and Linguistic Modeling
Current approaches to sign recognition by computer generally have at least some of the following limitations: they rely on laboratory conditions for sign production, are limited to a small vocabulary, rely on 2D modeling (and therefore cannot deal with occlusions and off-plane rotations), and/or achieve limited success. Here we propose a new framework that (1) provides a new tracking method less dependent than others on laboratory conditions and able to deal with variations in background and skin regions (such as the face, forearms, or other hands); (2) allows for identification of 3D hand configurations that are linguistically important in American Sign Language (ASL); and (3) incorporates statistical information reflecting linguistic constraints in sign production. For purposes of large-scale computer-based sign language recognition from video, the ability to distinguish hand configurations accurately is critical. Our current method estimates the 3D hand configuration to distinguish among 77 hand configurations linguistically relevant for ASL. Constraining the problem in this way makes recognition of 3D hand configuration more tractable and provides the information specifically needed for sign recognition. Further improvements are obtained by incorporation of statistical information about linguistic dependencies among handshapes within a sign derived from an annotated corpus of almost 10,000 sign tokens.
true
[]
[]
Reduced Inequalities
null
null
null
2014
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
clark-fijalkow-2021-consistent
https://aclanthology.org/2021.scil-1.60
Consistent unsupervised estimators for anchored PCFGs
Learning probabilistic context-free grammars just from a sample of strings from the grammars is a classic problem going back to Horning (1969). This abstract, based on the full paper in Clark and Fijalkow (2020), presents an approach for strongly learning a linguistically interesting subclass of probabilistic context-free grammars from strings in the realizable case. Unpacking this, we assume that we have some PCFG that we are interested in learning and that we have access only to a sample of strings generated by the PCFG, i.e. sampled from the distribution defined by the context-free grammar. Crucially, we do not observe the derivation trees, the hierarchical latent structure. Strong learning means that we want the learned grammar to define the same distribution over derivation trees, i.e. the labeled trees, as the original grammar, and not just the same distribution over strings.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tanenhaus-1996-using
https://aclanthology.org/P96-1007
Using Eye Movements to Study Spoken Language Comprehension: Evidence for Incremental Interpretation (Invited Talk)
We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.
false
[]
[]
null
null
null
This paper summarizes the work that the invited talk by the first author (MKT) was based upon. Supported by NIH resource grant 1-P41-RR09283; NIH HD27206 to MKT; NIH F32DC00210 to PDA; NSF Graduate Research Fellowships to MJS-K and JSM; and a Canadian Social Science Research Fellowship to JCS.
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
patil-etal-2013-named
https://aclanthology.org/I13-1180
Named Entity Extraction using Information Distance
Named entities (NE) are important information-carrying units within documents. The Named Entity extraction (NEX) task consists of automatic construction of a list of phrases belonging to each NE of interest. NEX is important for domains which lack a corpus with tagged NEs. We present an enhanced version and improved results of our unsupervised (bootstrapping) NEX technique (Patil et al., 2013) and establish its domain independence using experimental results on corpora from two different domains: agriculture and mechanical engineering (IC engine parts). We use a new variant of Multiword Expression Distance (MED) (Bu et al., 2010) to quantify proximity of a candidate phrase with a given NE type. MED itself is an approximation of the information distance (Bennett et al., 1998). Efficacy of our method is shown using experimental comparison with pointwise mutual information (PMI), BASILISK and KNOWITALL. Our method discovered 8 new plant diseases which are not found in Wikipedia. To the best of our knowledge, this is the first use of NEX techniques for the agriculture and mechanical engineering (engine parts) domains.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-2000-local
https://aclanthology.org/C00-2141
Local context templates for Chinese constituent boundary prediction
In this paper, we propose a shallow syntactic knowledge description, constituent boundary representation, and a simple and efficient prediction algorithm for it, based on different local context templates learned from an annotated corpus. An open test on 2780 real-text Chinese sentences showed satisfying results: 94% (92%) precision for words with multiple (single) boundary tag output.
false
[]
[]
null
null
null
The research was supported by National Natural Science Foundation of China (NSFC) (Grant No. 69903007).
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yirmibesoglu-gungor-2020-ermi
https://aclanthology.org/2020.mwe-1.17
ERMI at PARSEME Shared Task 2020: Embedding-Rich Multiword Expression Identification
This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F1.
false
[]
[]
null
null
null
The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nn-1978-finite-string-volume
https://aclanthology.org/J78-2005
The FINITE STRING, Volume 15, Number 2 (continued)
Information Industry Association. Following Mr. Zurkowski's presentation, Sen. Hollings solicited "help from your organization and others, on the convergence of computer and communications."
false
[]
[]
null
null
null
null
1978
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
christensen-etal-2009-rose
https://aclanthology.org/P09-2049
A Rose is a Roos is a Ruusu: Querying Translations for Web Image Search
We query Web Image search engines with words (e.g., spring) but need images that correspond to particular senses of the word (e.g., flexible coil). Querying with polysemous words often yields unsatisfactory results from engines such as Google Images. We build an image search engine, IDIOM, which improves the quality of returned images by focusing search on the desired sense. Our algorithm, instead of searching for the original query, searches for multiple, automatically chosen translations of the sense in several languages. Experimental results show that IDIOM outperforms Google Images and other competing algorithms returning 22% more relevant images.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
marinelli-2010-lexical
http://www.lrec-conf.org/proceedings/lrec2010/pdf/830_Paper.pdf
Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension
Particular uses of PNs with sense extension are focussed on and inspected, taking into account the presence of PNs in lexical semantic databases and electronic corpora. The methodology used to select and include PNs in semantic databases is described; the use of PNs in corpora of the Italian language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping codify, translate and handle particular cases of polysemy, but also by guiding metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the "abstract" and the "concrete", which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and the classes they belong to, which are codified in the ItalWordNet database.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
polajnar-etal-2015-exploration
https://aclanthology.org/W15-2701
An Exploration of Discourse-Based Sentence Spaces for Compositional Distributional Semantics
This paper investigates whether the wider context in which a sentence is located can contribute to a distributional representation of sentence meaning. We compare a vector space for sentences in which the features are words occurring within the sentence, with two new vector spaces that only make use of surrounding context. Experiments on simple subject-verbobject similarity tasks show that all sentence spaces produce results that are comparable with previous work. However, qualitative analysis and user experiments indicate that extra-sentential contexts capture more diverse, yet topically coherent information.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dehouck-denis-2019-phylogenic
https://aclanthology.org/N19-1017
Phylogenic Multi-Lingual Dependency Parsing
Languages evolve and diverge over time. Their evolutionary history is often depicted in the shape of a phylogenetic tree. Assuming parsing models are representations of their languages' grammars, their evolution should follow a structure similar to that of the phylogenetic tree. In this paper, drawing inspiration from multi-task learning, we make use of the phylogenetic tree to guide the learning of multilingual dependency parsers, leveraging languages' structural similarities. Experiments on data from the Universal Dependencies project show that phylogenetic training is beneficial to low-resourced languages and to well-furnished language families. As a side product of phylogenetic training, our model is able to perform zero-shot parsing of previously unseen languages.
false
[]
[]
null
null
null
This work was supported by ANR Grant GRASP No. ANR-16-CE33-0011-01 and Grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020. We also thank the reviewers for their valuable feedback.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
terragni-etal-2021-octis
https://aclanthology.org/2021.eacl-demos.31
OCTIS: Comparing and Optimizing Topic models is Simple!
In this paper, we present OCTIS, a framework for training, analyzing, and comparing Topic Models, whose optimal hyper-parameters are estimated using a Bayesian Optimization approach. The proposed solution integrates several state-of-the-art topic models and evaluation metrics. These metrics can be targeted as objective by the underlying optimization procedure to determine the best hyper-parameter configuration. OCTIS allows researchers and practitioners to have a fair comparison between topic models of interest, using several benchmark datasets and well-known evaluation metrics, to integrate novel algorithms, and to have an interactive visualization of the results for understanding the behavior of each model. The code is available at the following link: https://github.com/MIND-Lab/OCTIS.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
maillard-clark-2015-learning
https://aclanthology.org/K15-1035
Learning Adjective Meanings with a Tensor-Based Skip-Gram Model
We present a compositional distributional semantic model which is an implementation of the tensor-based framework of Coecke et al. (2011). It is an extended skipgram model (Mikolov et al., 2013) which we apply to adjective-noun combinations, learning nouns as vectors and adjectives as matrices. We also propose a novel measure of adjective similarity, and show that adjective matrix representations lead to improved performance in adjective and adjective-noun similarity tasks, as well as in the detection of semantically anomalous adjective-noun pairs.
false
[]
[]
null
null
null
Jean Maillard is supported by an EPSRC Doctoral Training Grant and a St John's Scholarship. Stephen Clark is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1. We would like to thank Tamara Polajnar, Laura Rimell, and Eva Vecchi for useful discussion.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
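A toy sketch of the tensor-based composition described in the maillard-clark-2015-learning abstract above: nouns as vectors, adjectives as matrices, and adjective-noun phrases as matrix-vector products. The dimensionality and parameters here are random placeholders; in the paper they are learned with a skip-gram-style objective.

```python
# Adjective-as-matrix composition sketch with random toy parameters.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

nouns = {"house": rng.normal(size=dim), "idea": rng.normal(size=dim)}
adjectives = {"red": rng.normal(size=(dim, dim)), "old": rng.normal(size=(dim, dim))}

def compose(adj: str, noun: str) -> np.ndarray:
    """Adjective-noun phrase vector = adjective matrix times noun vector."""
    return adjectives[adj] @ nouns[noun]

def cosine(u, v) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Phrase similarity between two adjective-noun combinations.
print(cosine(compose("red", "house"), compose("old", "house")))
```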
bilac-tanaka-2004-hybrid
https://aclanthology.org/C04-1086
A hybrid back-transliteration system for Japanese
null
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
al-sabbagh-etal-2013-using
https://aclanthology.org/I13-1047
Using the Semantic-Syntactic Interface for Reliable Arabic Modality Annotation
We introduce a novel modality scheme where triggers are words and phrases that convey modality meanings and subcategorize for clauses and verbal phrases. This semanticsyntactic working definition of modality enables us to design practical and replicable annotation guidelines and procedures that alleviate some shortcomings of current purely semantic modality annotation schemes and yield high inter-annotator agreement rates. We use this scheme to annotate a tweet-based Arabic corpus for modality information. This novel language resource, being the first, initiates NLP research on Arabic modality.
false
[]
[]
null
null
null
This work has been partially supported by a grant on social media and mobile computing from the Beckman Institute for Advanced Science and Technology.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
popovic-ney-2004-towards
http://www.lrec-conf.org/proceedings/lrec2004/pdf/372.pdf
Towards the Use of Word Stems and Suffixes for Statistical Machine Translation
In this paper we present methods for improving the quality of translation from an inflected language into English by making use of part-of-speech tags and word stems and suffixes in the source language. Results for translations from Spanish and Catalan into English are presented on the LC-STAR trilingual corpus which consists of spontaneously spoken dialogues in the domain of travelling and appointment scheduling. Results for translation from Serbian into English are presented on the Assimil language course, the bilingual corpus from unrestricted domain. We achieve up to 5% relative reduction of error rates for Spanish and Catalan and about 8% for Serbian.
false
[]
[]
null
null
null
This work was partly supported by the LC-STAR project by the European Community (IST project ref. no. 2001-32216).
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2021-beyond
https://aclanthology.org/2021.acl-long.200
Beyond Sentence-Level End-to-End Speech Translation: Context Helps
Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied. We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency. We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs document- and sentence-level translation using the same model. Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.
false
[]
[]
null
null
null
We thank the reviewers for their insightful comments. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 825460 (ELITR). Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yuret-2007-ku
https://aclanthology.org/S07-1044
KU: Word Sense Disambiguation by Substitution
Data sparsity is one of the main factors that make word sense disambiguation (WSD) difficult. To overcome this problem we need to find effective ways to use resources other than sense labeled data. In this paper I describe a WSD system that uses a statistical language model based on a large unannotated corpus. The model is used to evaluate the likelihood of various substitutes for a word in a given context. These likelihoods are then used to determine the best sense for the word in novel contexts. The resulting system participated in three tasks in the SemEval 2007 workshop. The WSD of prepositions task proved to be challenging for the system, possibly illustrating some of its limitations: e.g. not all words have good substitutes. The system achieved promising results for the English lexical sample and English lexical substitution tasks.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
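A sketch of the substitution-based disambiguation loop described in the yuret-2007-ku abstract above: each sense is associated with substitute words, every substitute is scored by a language model in the target context, and the sense whose substitutes fit best is chosen. The lm_logprob function is a hypothetical stand-in for the paper's large n-gram language model, and the sense-to-substitute mapping is invented for illustration.

```python
# Substitution-based WSD sketch; lm_logprob is a toy placeholder for a real LM.
import math

def lm_logprob(sentence: str) -> float:
    # Hypothetical placeholder: rewards a few hand-picked word pairs so the
    # demo below behaves sensibly; a real system would query a large n-gram LM.
    bonus_pairs = {("shore", "river"), ("riverbank", "river"), ("lender", "loan")}
    words = set(sentence.split())
    return sum(1.0 for a, b in bonus_pairs if a in words and b in words) - 0.01 * len(words)

def disambiguate(context: str, target: str, sense_substitutes: dict[str, list[str]]) -> str:
    scores = {}
    for sense, subs in sense_substitutes.items():
        # Aggregate substitute likelihoods per sense with a log-sum-exp.
        logs = [lm_logprob(context.replace(target, sub, 1)) for sub in subs]
        m = max(logs)
        scores[sense] = m + math.log(sum(math.exp(l - m) for l in logs))
    return max(scores, key=scores.get)

senses = {"financial": ["lender", "institution"], "river": ["shore", "riverbank"]}
print(disambiguate("he sat on the bank of the river", "bank", senses))   # -> "river"
```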
srinet-etal-2020-craftassist
https://aclanthology.org/2020.acl-main.427
CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant
We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, and analyze their successes and failures. (* Equal contribution. † Work done while at Facebook AI Research. Minecraft features: © Mojang Synergies AB, included courtesy of Mojang AB.)
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2016-unimelb
https://aclanthology.org/S16-1027
UNIMELB at SemEval-2016 Tasks 4A and 4B: An Ensemble of Neural Networks and a Word2Vec Based Model for Sentiment Classification
This paper describes our sentiment classification system for microblog-sized documents, and documents where a topic is present. The system consists of a soft-voting ensemble of a word2vec language model adapted to classification, a convolutional neural network (CNN), and a long short-term memory network (LSTM). Our main contribution consists of a way to introduce topic information into this model, by concatenating a topic embedding, consisting of the averaged word embedding for that topic, to each word embedding vector in our neural networks. When we apply our models to SemEval 2016 Task 4 subtasks A and B, we demonstrate that the ensemble performed better than any single classifier, and our method of including topic information achieves a substantial performance gain. According to results on the official test sets, our model ranked 3rd for PN in the message-only subtask A (among 34 teams) and 1st for accuracy on the topic-dependent subtask B (among 19 teams). (There were some issues surrounding the evaluation metrics: we only got 7th for PN and 2nd for PN officially, but when we retrained our model using PN as the subtask intended, we placed first across all metrics.)
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
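A sketch of the two ideas in the xu-etal-2016-unimelb abstract above: soft voting over per-model class probabilities, and topic conditioning by concatenating an averaged topic embedding onto each word embedding. Shapes and values are toy placeholders rather than the submitted system.

```python
# Soft-voting ensemble and topic-embedding concatenation, with toy inputs.
import numpy as np

def soft_vote(prob_matrices: list) -> np.ndarray:
    """Average per-model class probabilities, then take the argmax per example."""
    avg = np.mean(np.stack(prob_matrices), axis=0)       # (n_examples, n_classes)
    return avg.argmax(axis=1)

def add_topic(word_embs: np.ndarray, topic_word_embs: np.ndarray) -> np.ndarray:
    """Concatenate the averaged topic embedding to every word embedding."""
    topic = topic_word_embs.mean(axis=0)                 # (dim,)
    tiled = np.tile(topic, (word_embs.shape[0], 1))      # (seq_len, dim)
    return np.concatenate([word_embs, tiled], axis=1)    # (seq_len, 2 * dim)

# Toy usage: three models, four examples, three classes; a 5-token sentence.
rng = np.random.default_rng(1)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(soft_vote(probs))
print(add_topic(rng.normal(size=(5, 25)), rng.normal(size=(2, 25))).shape)
```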
koyama-etal-1998-japanese
https://aclanthology.org/Y98-1029
Japanese Kana-to-Kanji Conversion Using Large Scale Collocation Data
Japanese word processors, or computers used in Japan, employ an input method through the keyboard combined with Kana (phonetic) character to Kanji (ideographic Chinese) character conversion technology. The key issue in Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since Japanese has so many homophones. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody homophone processing using composite collocation data. It is shown that approximately 135,000 collocation data entries yield a 9.1% rise in conversion accuracy compared with the prototype system which has no collocation data.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zeyrek-basibuyuk-2019-tcl
https://aclanthology.org/W19-3308
TCL - a Lexicon of Turkish Discourse Connectives
It is known that discourse connectives are the most salient indicators of discourse relations. State-of-the-art parsers being developed to predict explicit discourse connectives exploit annotated discourse corpora but a lexicon of discourse connectives is also needed to enable further research in discourse structure and support the development of language technologies that use these structures for text understanding. This paper presents a lexicon of Turkish discourse connectives built by automatic means. The lexicon has the format of the German connective lexicon, DiMLex, where for each discourse connective, information about the connective's orthographic variants, syntactic category and senses are provided along with sample relations. In this paper, we describe the data sources we used and the development steps of the lexicon.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mota-etal-2004-multiword
https://aclanthology.org/W04-2115
Multiword Lexical Acquisition and Dictionary Formalization
In this paper, we present the current state of development of a large-scale lexicon built at LabEL for Portuguese. We will concentrate on multiword expressions (MWE), particularly on multiword nouns, (i) illustrating their most relevant morphological features, and (ii) pointing out the methods and techniques adopted to generate the inflected forms from lemmas. Moreover, we describe a corpus-based approach for the acquisition of new multiword nouns, which led to a significant enlargement of the existing lexicon. Evaluation results concerning lexical coverage in the corpus are also discussed.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
su-etal-2020-towards
https://aclanthology.org/2020.acl-main.63
Towards Unsupervised Language Understanding and Generation by Joint Dual Learning
In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations. However, the dual property between understanding and generation has been rarely explored. The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework. However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion. The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG.
false
[]
[]
null
null
null
We thank reviewers for their insightful comments. This work was financially supported from the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 109-2636-E-002-026.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
losch-etal-2018-european
https://aclanthology.org/L18-1213
European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management
In order to help improve the quality, coverage and performance of automated translation solutions for current and future Connecting Europe Facility (CEF) digital services, the European Language Resource Coordination (ELRC) consortium was set up through a service contract operating under the European Commission's CEF SMART 2014/1074 programme to initiate a number of actions to support the collection of Language Resources (LRs) within the public sector in EU member and CEF-affiliated countries. The first action focused on raising awareness in the public sector through the organisation of dedicated events: 2 international conferences and 29 country-specific workshops to engage national as well as regional/municipal governmental organisations, language competence centres, relevant European institutions and other potential holders of LRs from public service administrations and NGOs. In order to gather resources shared by the contributors, the ELRC-SHARE Repository was set up together with services supporting the sharing of LRs, such as the ELRC Helpdesk and Intellectual Property Rights (IPR) clearance support. All collected LRs pass a validation process developed by ELRC. The collected LRs cover all official EU languages, plus Icelandic and Norwegian.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
2018
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
lurcock-etal-2004-framework
https://aclanthology.org/U04-1014
A framework for utterance disambiguation in dialogue
We discuss the data sources available for utterance disambiguation in a bilingual dialogue system, distinguishing global, contextual, and user-specific domains, and syntactic and semantic levels. We propose a framework for combining the available information, and techniques for increasing a stochastic grammar's sensitivity to local context and a speaker's idiolect.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gerlach-etal-2013-combining
https://aclanthology.org/2013.mtsummit-wptp.6
Combining pre-editing and post-editing to improve SMT of user-generated content
The poor quality of user-generated content (UGC) found in forums hinders both readability and machine-translatability. To improve these two aspects, we have developed human- and machine-oriented pre-editing rules, which correct or reformulate this content. In this paper we present the results of a study which investigates whether pre-editing rules that improve the quality of statistical machine translation (SMT) output also have a positive impact on post-editing productivity. For this study, pre-editing rules were applied to a set of French sentences extracted from a technical forum. After SMT, the post-editing temporal effort and final quality are compared for translations of the raw source and its pre-edited version. Results obtained suggest that pre-editing speeds up post-editing and that the combination of the two processes is worthy of further investigation.
false
[]
[]
null
null
null
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 288769.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dinu-lapata-2010-measuring
https://aclanthology.org/D10-1113
Measuring Distributional Similarity in Context
The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models.
false
[]
[]
null
null
null
The authors acknowledge the support of the DFG (Dinu; International Research Training Group "Language Technology and Cognitive Systems") and EPSRC (Lapata; grant GR/T04540/01).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
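A toy sketch of the intuition in the dinu-lapata-2010-measuring abstract above: a word's meaning is a distribution over latent senses that the context re-weights. The tiny sense inventory and probabilities below are invented for illustration; the paper induces them from corpus co-occurrence data.

```python
# Contextualized meaning as a re-weighted distribution over latent senses.
import numpy as np

senses = ["finance", "geography"]
p_sense_given_word = {"bank": np.array([0.6, 0.4])}
# p(context word | sense), indexed like `senses`; invented numbers.
p_context_given_sense = {
    "money": np.array([0.9, 0.1]),
    "river": np.array([0.05, 0.95]),
}

def in_context(word: str, context: list) -> np.ndarray:
    """Posterior over senses for `word`, modulated by its context words."""
    post = p_sense_given_word[word].copy()
    for c in context:
        post *= p_context_given_sense.get(c, np.ones(len(senses)))
    return post / post.sum()

def similarity(dist_a: np.ndarray, dist_b: np.ndarray) -> float:
    """In-context similarity as the dot product of sense distributions."""
    return float(dist_a @ dist_b)

print(in_context("bank", ["river"]))   # mass shifts to the "geography" sense
print(similarity(in_context("bank", ["money"]), in_context("bank", ["river"])))
```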
butnaru-2019-bam
https://aclanthology.org/W19-1413
BAM: A combination of deep and shallow models for German Dialect Identification.
In this paper, we present a machine learning approach for the German Dialect Identification (GDI) Closed Shared Task of the DSL 2019 Challenge. The proposed approach combines deep and shallow models, by applying a voting scheme on the outputs resulting from a character-level Convolutional Neural Network (Char-CNN), a Long Short-Term Memory (LSTM) network, and a model based on String Kernels. The first model used is the Char-CNN model that merges multiple convolutions computed with kernels of different sizes. The second model is the LSTM network which applies a global max pooling over the returned sequences over time. Both models pass the activation maps to two fully-connected layers. The final model is based on String Kernels, computed on character p-grams extracted from speech transcripts. The model combines two blended kernel functions: one is the presence bits kernel, and the other is the intersection kernel. The empirical results obtained in the shared task show that the approach can achieve good results. The system proposed in this paper obtained fourth place with a macro-F1 score of 62.55%.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
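A sketch of the two string kernels named in the butnaru-2019-bam abstract above, computed over character p-grams: a presence-bits kernel that counts p-grams shared by both strings, and an intersection kernel that sums the minimum counts. The kernel blending and the classifier built on top are not reproduced here.

```python
# Presence-bits and intersection string kernels over character p-grams.
from collections import Counter

def char_pgrams(text: str, p: int) -> Counter:
    return Counter(text[i:i + p] for i in range(len(text) - p + 1))

def presence_bits_kernel(a: str, b: str, p: int = 3) -> int:
    """Number of distinct p-grams that occur in both strings (0/1 presence)."""
    return len(set(char_pgrams(a, p)) & set(char_pgrams(b, p)))

def intersection_kernel(a: str, b: str, p: int = 3) -> int:
    """Sum over shared p-grams of the minimum of the two counts."""
    ga, gb = char_pgrams(a, p), char_pgrams(b, p)
    return sum(min(count, gb[g]) for g, count in ga.items())

x, y = "das isch guet", "das ist gut"
print(presence_bits_kernel(x, y), intersection_kernel(x, y))
```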
bunescu-mooney-2005-shortest
https://aclanthology.org/H05-1091
A Shortest Path Dependency Kernel for Relation Extraction
We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels.
false
[]
[]
null
null
null
This work was supported by grants IIS-0117308 and IIS-0325116 from the NSF.
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
koizumi-etal-2002-annotated
http://www.lrec-conf.org/proceedings/lrec2002/pdf/318.pdf
An Annotated Japanese Sign Language Corpus
Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs), including head movements, facial actions, and posture, that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, the annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As a result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using the head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), and body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.
true
[]
[]
Reduced Inequalities
null
null
The research reported here was carried out within the Real World Computing Project, supported by Ministry of Economy, Trade and Industry.
2002
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
couto-vale-etal-2016-automatic
https://aclanthology.org/L16-1574
Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs
This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by 'equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.
false
[]
[]
null
null
null
The research reported here was funded by the German Research Council, grant no. NE 1822/2-1.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gu-etal-2018-incorporating
https://aclanthology.org/W18-5212
Incorporating Topic Aspects for Online Comment Convincingness Evaluation
In this paper, we propose to incorporate topic aspect information for online comment convincingness evaluation. Our model makes use of a graph convolutional network to utilize implicit topic information within a discussion thread to assist the evaluation of the convincingness of each single comment. In order to test the effectiveness of our proposed model, we annotate topic information on top of a public dataset for argument convincingness evaluation. Experimental results show that topic information is able to improve the performance of convincingness evaluation. We also take a first step toward detecting topic aspects automatically.
false
[]
[]
null
null
null
The work is partially supported by National Natural Science Foundation of China (Grant No. 61702106), Shanghai Science and Technology Commission (Grant No. 17JC1420200, Grant No. 17YF1427600 and Grant No.16JC1420401).
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2009-classifying
https://aclanthology.org/D09-1157
Classifying Relations for Biomedical Named Entity Disambiguation
Named entity disambiguation concerns linking a potentially ambiguous mention of named entity in text to an unambiguous identifier in a standard database. One approach to this task is supervised classification. However, the availability of training data is often limited, and the available data sets tend to be imbalanced and, in some cases, heterogeneous. We propose a new method that distinguishes a named entity by finding the informative keywords in its surrounding context, and then trains a model to predict whether each keyword indicates the semantic class of the entity. While maintaining a comparable performance to supervised classification, this method avoids using expensive manually annotated data for each new domain, and thus achieves better portability.
true
[]
[]
Good Health and Well-Being
null
null
The work reported in this paper is funded by Pfizer Ltd.. The UK National Centre for Text Mining is funded by JISC. The ITI-TXM corpus used in the experiments was developed at School of Informatics, University of Edinburgh, in the TXM project, which was funded by ITI Life Sciences, Scotland.
2009
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rosti-etal-2007-combining
https://aclanthology.org/N07-1029
Combining Outputs from Multiple Machine Translation Systems
Currently there are several approaches to machine translation (MT) based on different paradigms; e.g., phrasal, hierarchical and syntax-based. These three approaches yield similar translation accuracy despite using fairly different levels of linguistic knowledge. The availability of such a variety of systems has led to a growing interest toward finding better translations by combining outputs from multiple systems. This paper describes three different approaches to MT system combination. These combination methods operate on sentence, phrase and word level exploiting information from N-best lists, system scores and target-to-source phrase alignments. The word-level combination provides the most robust gains but the best results on the development test sets (NIST MT05 and the newsgroup portion of GALE 2006 dry-run) were achieved by combining all three methods.
false
[]
[]
null
null
null
This work was supported by DARPA/IPTO Contract No. HR0011-06-C-0022 under the GALE program (approved for public release, distribution unlimited). The authors would like to thank ISI and University of Edinburgh for sharing their MT system outputs.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
luo-etal-2019-improving
https://aclanthology.org/P19-1144
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future
Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
opitz-etal-2018-induction
https://aclanthology.org/W18-4518
Induction of a Large-Scale Knowledge Graph from the Regesta Imperii
We induce and visualize a Knowledge Graph over the Regesta Imperii (RI), an important large-scale resource for medieval history research. The RI comprise more than 150,000 digitized abstracts of medieval charters issued by the Roman-German kings and popes distributed over many European locations and a time span of more than 700 years. Our goal is to provide a resource for historians to visualize and query the RI, possibly aiding medieval history research. The resulting medieval graph and visualization tools are shared publicly.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
waszczuk-etal-2019-neural
https://aclanthology.org/W19-5113
A Neural Graph-based Approach to Verbal MWE Identification
We propose to tackle the problem of verbal multiword expression (VMWE) identification using a neural graph parsing-based approach. Our solution involves encoding VMWE annotations as labellings of dependency trees and, subsequently, applying a neural network to model the probabilities of different labellings. This strategy can be particularly effective when applied to discontinuous VMWEs and, thanks to dense, pre-trained word vector representations, VMWEs unseen during training. Evaluation of our approach on three PARSEME datasets (German, French, and Polish) shows that it achieves performance on par with the previous state-of-the-art (Al Saied et al., 2018).
false
[]
[]
null
null
null
We thank the anonymous reviewers for their valuable comments. The work presented in this paper was funded by the German Research Foundation (DFG) within the CRC 991 and the Beyond CFG project, as well as by the Land North Rhine-Westphalia within the NRW-Forschungskolleg Online-Partizipation.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2019-findings
https://aclanthology.org/W19-5303
Findings of the First Shared Task on Machine Translation Robustness
We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models' robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson's r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt, which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher scores from human judgment.
false
[]
[]
null
null
null
We thank Facebook for funding the human evaluation and blind test set creation.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
huck-etal-2017-lmu
https://aclanthology.org/W17-4730
LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts
This paper describes the LMU Munich English→German machine translation systems. We participated with neural translation engines in the WMT17 shared task on machine translation of news, as well as in the biomedical translation task. LMU Munich's systems deliver competitive machine translation quality on both news articles and health information texts.
true
[]
[]
Good Health and Well-Being
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement № 644402 (HimL). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zou-li-2021-lz1904
https://aclanthology.org/2021.semeval-1.138
LZ1904 at SemEval-2021 Task 5: Bi-LSTM-CRF for Toxic Span Detection using Pretrained Word Embedding
Recurrent Neural Networks (RNN) have been widely used in various Natural Language Processing (NLP) tasks such as text classification, sequence tagging, and machine translation. Long Short Term Memory (LSTM), a special unit of RNN, has the advantage of memorizing past and even future information in a sentence (especially for bidirectional LSTM). In the shared task of detecting toxic spans in texts, we first apply pretrained word embeddings (GloVe) to generate the word vectors after tokenization. Then we construct a Bidirectional Long Short Term Memory-Conditional Random Field (Bi-LSTM-CRF) model, proposed by Baidu Research, to predict whether each word in the sentence is toxic or not. We tune the hyperparameters of dropout rate, number of LSTM units, and embedding size with 10 epochs and choose the epoch with the best validation recall. Our model achieves an F1 score of 66.99% on the test dataset.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
kotani-yoshimi-2015-design
https://aclanthology.org/Y15-1040
Design of a Learner Corpus for Listening and Speaking Performance
A learner corpus is a useful resource for developing automatic assessment techniques for implementation in a computer-assisted language learning system. However, presently, learner corpora are only helpful in terms of evaluating the accuracy of learner output (speaking and writing). Therefore, the present study proposes a learner corpus annotated with evaluation results regarding the accuracy and fluency of performance in speaking (output) and listening (input).
true
[]
[]
Quality Education
null
null
This work was supported by JSPS KAKENHI Grant Numbers 22300299 and 15H02940.
2015
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
fraser-etal-2012-modeling
https://aclanthology.org/E12-1068
Modeling Inflection and Word-Formation in SMT
The current state-of-the-art in statistical machine translation (SMT) suffers from issues of sparsity and inadequate modeling power when translating into morphologically rich languages. We model both inflection and word-formation for the task of translating into German. We translate from English words to an underspecified German representation and then use linear-chain CRFs to predict the fully specified German representation. We show that improved modeling of inflection and word-formation leads to improved SMT.
false
[]
[]
null
null
null
The authors wish to thank the anonymous reviewers for their comments. Aoife Cahill was partly supported by Deutsche Forschungsgemeinschaft grant SFB 732. Alexander Fraser, Marion Weller and Fabienne Cap were funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement Nr. 248005. This work was supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views. We thank Thomas Lavergne and Helmut Schmid.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lang-etal-2022-visually
https://aclanthology.org/2022.cmcl-1.3
Visually Grounded Interpretation of Noun-Noun Compounds in English
Noun-noun compounds (NNCs) occur frequently in the English language. Accurate NNC interpretation, i.e. determining the implicit relationship between the constituents of a NNC, is crucial for the advancement of many natural language processing tasks. Until now, computational NNC interpretation has been limited to approaches involving linguistic representations only. However, research suggests that grounding linguistic representations in vision or other modalities can increase performance on this and other tasks. Our work is a novel comparison of linguistic and visuolinguistic representations for the task of NNC interpretation. We frame NNC interpretation as a relation classification task, evaluating on a large, relationally-annotated NNC dataset. We combine distributional word vectors with image vectors to investigate how visual information can help improve NNC interpretation systems. We find that adding visual vectors yields modest increases in performance on several configurations of our dataset. We view this as a promising first exploration of the benefits of using visually grounded representations for NNC interpretation.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kozhevnikov-titov-2013-cross
https://aclanthology.org/P13-1117
Cross-lingual Transfer of Semantic Role Labeling Models
Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method.
false
[]
[]
null
null
null
The authors would like to thank Alexandre Klementiev and Ryan McDonald for useful suggestions and Täckström et al. (2012) for sharing the cross-lingual word representations. This research is supported by the MMCI Cluster of Excellence.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuhlmann-2013-mildly
https://aclanthology.org/J13-2004
Mildly Non-Projective Dependency Grammar
Syntactic representations based on word-to-word dependencies have a long-standing tradition in descriptive linguistics, and receive considerable interest in many applications. Nevertheless, dependency syntax has remained something of an island from a formal point of view. Moreover, most formalisms available for dependency grammar are restricted to projective analyses, and thus not able to support natural accounts of phenomena such as wh-movement and cross-serial dependencies. In this article we present a formalism for non-projective dependency grammar in the framework of linear context-free rewriting systems. A characteristic property of our formalism is a close correspondence between the non-projectivity of the dependency trees admitted by a grammar on the one hand, and the parsing complexity of the grammar on the other. We show that parsing with unrestricted grammars is intractable. We therefore study two constraints on non-projectivity, block-degree and well-nestedness. Jointly, these two constraints define a class of "mildly" non-projective dependency grammars that can be parsed in polynomial time. An evaluation on five dependency treebanks shows that these grammars have a good coverage of empirical data.
false
[]
[]
null
null
null
The author gratefully acknowledges financial support from The German Research Foundation (Sonderforschungsbereich 378, project MI 2) and The Swedish Research Council (diary no. 2008-296).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-diab-2009-improvements
https://aclanthology.org/W09-2410
Improvements To Monolingual English Word Sense Disambiguation
Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the graph-based state-of-the-art algorithm In-Degree. Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource, and finally, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, we use JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest performing monolingual unsupervised results to date on the Senseval 2 all-words data set. Our system yields a performance of 62.7% using WordNet 1.7.1.
false
[]
[]
null
null
null
The second author has been partially funded by the DARPA GALE project. We would also like to thank the useful comments rendered by three anonymous reviewers.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nuhn-etal-2012-deciphering
https://aclanthology.org/P12-1017
Deciphering Foreign Language by Combining Language Models and Context Vectors
In this paper we show how to train statistical machine translation systems on real-life tasks using only non-parallel monolingual data from two languages. We present a modification of the method shown in (Ravi and Knight, 2011) that is scalable to vocabulary sizes of several thousand words. On the task shown in (Ravi and Knight, 2011) we obtain better results with only 5% of the computational effort when running our method with an n-gram language model. The efficiency improvement of our method allows us to run experiments with vocabulary sizes of around 5,000 words, such as a non-parallel version of the VERBMOBIL corpus. We also report results using data from the monolingual French and English GIGAWORD corpora.
false
[]
[]
null
null
null
This work was realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation. The authors would like to thank Sujith Ravi and Kevin Knight for providing us with the OPUS subtitle corpus and David Rybach for kindly sharing his knowledge about the OpenFST library.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xu-etal-2014-joint
https://aclanthology.org/C14-1064
Joint Opinion Relation Detection Using One-Class Deep Neural Network
Detecting opinion relations is a crucial step for fine-grained opinion summarization. A valid opinion relation has three requirements: a correct opinion word, a correct opinion target and the linking relation between them. Previous works tend to verify only two of these requirements for opinion extraction, while leaving the other requirement unverified. This could inevitably introduce noise terms. To tackle this problem, this paper proposes a joint approach, where all three requirements are simultaneously verified by a deep neural network in a classification scenario. Some seeds are provided as positive labeled data for the classifier. However, negative labeled data are hard to acquire for this task. We consequently introduce a one-class classification problem and develop a One-Class Deep Neural Network. Experimental results show that the proposed joint approach significantly outperforms state-of-the-art weakly supervised methods.
false
[]
[]
null
null
null
This work was sponsored by the National Natural Science Foundation of China (No. 61202329 and No. 61333018) and CCF-Tencent Open Research Fund.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
muraki-etal-1985-augmented
https://aclanthology.org/E85-1029
Augmented Dependency Grammar: A Simple Interface between the Grammar Rule and the Knowledge
The VENUS analysis model consists of two components, Legato and Crescendo, as shown in Fig. 1. Legato, based on the ADG framework, constructs the semantic dependency structure of Japanese input sentences by feature-oriented dependency grammar rules as the main control information for syntactic analysis, and by a semantic inference mechanism on an object field's fact knowledge base.
false
[]
[]
null
null
null
null
1985
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
choi-etal-1998-hybrid-approaches
https://aclanthology.org/P98-1039
Hybrid Approaches to Improvement of Translation Quality in Web-based English-Korean Machine Translation
The previous English-Korean MT system, which was a transfer-based MT system applied only to written text, enumerated the following brief list of problems that did not seem easy to solve in the near future: 1) processing of non-continuous idiomatic expressions, 2) reduction of too many ambiguities in English syntactic analysis, 3) robust processing for failed or ill-formed sentences, 4) selecting the correct word correspondence between several alternatives, and 5) generation of Korean sentence style. These problems can be considered as factors that influence the translation quality of a machine translation system. This paper describes symbolic and statistical hybrid approaches to solving the problems of the previous English-to-Korean machine translation system in terms of improving translation quality. The solutions have been successfully applied to the web-based English-Korean machine translation system "FromTo/EK", which has been developed since 1997.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hoffman-1992-ccg
https://aclanthology.org/P92-1044
A CCG Approach to Free Word Order Languages
In this paper, I present work in progress on an extension of Combinatory Categorial Grammars, CCGs (Steedman 1985), to handle languages with freer word order than English, specifically Turkish. The approach I develop takes advantage of CCGs' ability to combine the syntactic as well as the semantic representations of adjacent elements in a sentence in an incremental manner. The linguistic claim behind my approach is that free word order in Turkish is a direct result of its grammar and lexical categories; this approach is not compatible with a linguistic theory involving movement operations and traces. A rich system of case markings identifies the predicate-argument structure of a Turkish sentence, while the word order serves a pragmatic function. The pragmatic functions of certain positions in the sentence roughly consist of a sentence-initial position for the topic, an immediately pre-verbal position for the focus, and post-verbal positions for backgrounded information (Erguvanli 1984). The most common word order in simple transitive sentences is SOV (Subject-Object-Verb). However, all of the permutations of the sentence seen below are grammatical in the proper discourse situations. (1) a. Ayşe gazeteyi okuyor. Ayşe newspaper-acc read-present. Ayşe is reading the newspaper. b. Gazeteyi Ayşe okuyor. c. Ayşe okuyor gazeteyi. d. Gazeteyi okuyor Ayşe. e. Okuyor gazeteyi Ayşe. f. Okuyor Ayşe gazeteyi. Elements with overt case marking generally can scramble freely, even out of embedded clauses. This suggests a CCG approach where case-marked elements are functions which can combine with one another and with verbs in any order. Karttunen (1986) has proposed a Categorial Grammar formalism to handle free word order in Finnish, in which noun phrases are functors that apply to the verbal basic elements. Our approach treats case-marked noun phrases as functors as well; however, we allow verbs to maintain their status as functors in order to handle object-incorporation and the combining of nested verbs. In addition, CCGs, unlike Karttunen's grammar, allow the operations of composition and type raising which have been useful in handling a variety of linguistic phenomena including long distance dependencies and non-constituent coordination (Steedman 1985) and will play an essential role in this analysis.
false
[]
[]
null
null
null
I thank Young-Suk Lee, Michael Niv, Jong Park, Mark Steedman, and Michael White for their valuable advice. This work was partially supported by ARO DAAL03-89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90-16592, and Ben Franklin 91S.3078C-1.
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2008-optimal
https://aclanthology.org/W08-0118
Optimal Dialog in Consumer-Rating Systems using POMDP Framework
Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.
false
[]
[]
null
null
null
This work was conducted during the first author's internship at Microsoft Research; thanks to Dan Bohus, Ghinwa Choueiter, Yun-Cheng Ju, Xiao Li, Milind Mahajan, Tim Paek, Yeyi Wang, and Dong Yu for helpful discussions.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
riloff-etal-2002-inducing
https://aclanthology.org/C02-1070
Inducing Information Extraction Systems for New Languages via Cross-language Projection
Information extraction (IE) systems are costly to build because they require development texts, parsing tools, and specialized dictionaries for each application domain and each natural language that needs to be processed. We present a novel method for rapidly creating IE systems for new languages by exploiting existing IE systems via cross-language projection. Given an IE system for a source language (e.g., English), we can transfer its annotations to corresponding texts in a target language (e.g., French) and learn information extraction rules for the new language automatically. In this paper, we explore several ways of realizing both the transfer and learning processes using off-the-shelf machine translation systems, induced word alignment, attribute projection, and transformation-based learning. We present a variety of experiments that show how an English IE system for a plane crash domain can be leveraged to automatically create a French IE system for the same domain.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bakhshandeh-etal-2016-learning
https://aclanthology.org/K16-1007
Learning to Jointly Predict Ellipsis and Comparison Structures
Domain-independent meaning representation of text has received a renewed interest in the NLP community. Comparison plays a crucial role in shaping objective and subjective opinion and measurement in natural language, and is often expressed in complex constructions including ellipsis. In this paper, we introduce a novel framework for jointly capturing the semantic structure of comparison and ellipsis constructions. Our framework models ellipsis and comparison as interconnected predicate-argument structures, which enables automatic ellipsis resolution. We show that a structured prediction model trained on our dataset of 2,800 gold annotated review sentences yields promising results. Together with this paper we release the dataset and an annotation tool which enables two-stage expert annotation on top of tree structures.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their invaluable comments and Brian Rinehart and other annotators for their great work on the annotations. This work was supported in part by Grant W911NF-15-1-0542 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sanchan-etal-2017-automatic
https://doi.org/10.26615/978-954-452-038-0_003
Automatic Summarization of Online Debates
null
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
naskar-bandyopadhyay-2005-phrasal
https://aclanthology.org/2005.mtsummit-posters.8
A Phrasal EBMT System for Translating English to Bengali
The present work describes a Phrasal Example Based Machine Translation system from English to Bengali that identifies the phrases in the input through a shallow analysis, retrieves the target phrases using a Phrasal Example base and finally combines the target language phrases employing some heuristics based on the phrase ordering rules for Bengali. The paper focuses on the structure of the noun, verb and prepositional phrases in English and how these phrases are realized in Bengali. This study has an effect on the design of the phrasal Example Base and recombination rules for the target language phrases.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wojatzki-etal-2018-quantifying
https://aclanthology.org/L18-1224
Quantifying Qualitative Data for Understanding Controversial Issues
Understanding public opinion on complex controversial issues such as 'Legalization of Marijuana' and 'Gun Rights' is of considerable importance for a number of objectives such as identifying the most divisive facets of the issue, developing a consensus, and making informed policy decisions. However, an individual's position on a controversial issue is often not just a binary support-or-oppose stance on the issue, but rather a conglomerate of nuanced opinions and beliefs on various aspects of the issue. These opinions and beliefs are often expressed qualitatively in free text in issue-focused surveys or on social media. However, quantifying vast amounts of qualitative information remains a significant challenge. The goal of this work is to provide a new approach for quantifying qualitative data for the understanding of controversial issues. First, we show how we can engage people directly through crowdsourcing to create a comprehensive dataset of assertions (claims, opinions, arguments, etc.) relevant to an issue. Next, the assertions are judged for agreement and strength of support or opposition, again by crowdsourcing. The collected Dataset of Nuanced Assertions on Controversial Issues (NAoCI dataset) consists of over 2,000 assertions on sixteen different controversial issues. It has over 100,000 judgments of whether people agree or disagree with the assertions, and of about 70,000 judgments indicating how strongly people support or oppose the assertions. This dataset allows for several useful analyses that help summarize public opinion. Across the sixteen issues, we find that when people judge a large set of assertions they often do not disagree with the individual assertions that the opposite side makes, but that they differently judge the relative importance of these assertions. We show how assertions that cause dissent or consensus can be identified by ranking the whole set of assertions based on the collected judgments. We also show how free-text assertions in social media can be analyzed in conjunction with the crowdsourced information to quantify and summarize public opinion on controversial issues.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
jang-etal-1999-using
https://aclanthology.org/P99-1029
Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting
An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the generalpurpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection shows that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yamamoto-etal-2021-dependency
https://aclanthology.org/2021.starsem-1.20
Dependency Patterns of Complex Sentences and Semantic Disambiguation for Abstract Meaning Representation Parsing
Abstract Meaning Representation (AMR) is a sentence-level meaning representation based on predicate argument structure. One of the challenges we find in AMR parsing is to capture the structure of complex sentences which express the relation between predicates. Knowing the core part of the sentence structure in advance may be beneficial in such a task. In this paper, we present a list of dependency patterns for English complex sentence constructions designed for AMR parsing. With a dedicated pattern matcher, all occurrences of complex sentence constructions are retrieved from an input sentence. While some of the subordinators have semantic ambiguities, we deal with this problem through training classification models on data derived from the AMR and Wikipedia corpora, establishing a new baseline for future works. The developed complex sentence patterns and the corresponding AMR descriptions will be made public.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
finch-etal-2011-nict
https://aclanthology.org/2011.iwslt-evaluation.5
The NICT translation system for IWSLT 2011
This paper describes NICT's participation in the IWSLT 2011 evaluation campaign for the TED speech translation Chinese-English shared task. Our approach was based on a phrase-based statistical machine translation system that was augmented in two ways. Firstly, we introduced rule-based reordering constraints on the decoding. This consisted of a set of rules that were used to segment the input utterances into segments that could be decoded almost independently. The idea here being that constraining the decoding process in this manner would greatly reduce the search space of the decoder, and cut out many possibilities for error while at the same time allowing for a correct output to be generated. The rules we used exploit punctuation and spacing in the input utterances, and we use these positions to delimit our segments. Not all punctuation/spacing positions were used as segment boundaries, and the set of used positions was determined by a set of linguistically-based heuristics. Secondly, we used two heterogeneous methods to build the translation model and lexical reordering model for our systems. The first method employed the popular method of using GIZA++ for alignment in combination with phrase-extraction heuristics. The second method used a recently-developed Bayesian alignment technique that is able to perform both phrase-to-phrase alignment and phrase pair extraction within a single unsupervised process. The models produced by this type of alignment technique are typically very compact whilst at the same time maintaining a high level of translation quality. We evaluated both of these methods of translation model construction in isolation, and our results show their performance is comparable. We also integrated both models by linear interpolation to obtain a model that outperforms either component. Finally, we added an indicator feature into the log-linear model to indicate those phrases that were in the intersection of the two translation models. The addition of this feature was also able to provide a small improvement in performance.
false
[]
[]
null
null
null
This work was performed while the first author was supported by the JSPS Research Fellowship for Young Scientists.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
banik-etal-2016-smt
https://aclanthology.org/W16-6303
Can SMT and RBMT Improve each other's Performance? An Experiment with English-Hindi Translation
Rule-based machine translation (RBMT) and statistical machine translation (SMT) are two well-known approaches to translation, each with its own benefits. The system architecture of SMT often complements that of RBMT, and vice versa. In this paper, we propose an effective method of serial coupling where we attempt to build a hybrid model that exploits the benefits of both architectures. The first part of the coupling is used to obtain good lexical selection and robustness, the second part is used to improve syntax, and the final one is designed to combine other modules along with the best phrase reordering. Our experiments on an English-Hindi product domain dataset show the effectiveness of the proposed approach with improvement in BLEU score.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dhuliawala-etal-2015-judge
https://aclanthology.org/W15-5925
Judge a Book by its Cover: Conservative Focused Crawling under Resource Constraints
In this paper, we propose a domain specific crawler that decides the domain relevance of a URL without downloading the page. In contrast, a focused crawler relies on the content of the page to make the same decision. To achieve this, we use a classifier model which harnesses features such as the page's URL and its parents' information to score a page. The classifier model is incrementally trained at each depth in order to learn the facets of the domain. Our approach modifies the focused crawler by circumventing the need for extra resource usage in terms of bandwidth. We test the performance of our approach on Wikipedia data. Our Conservative Focused Crawler (CFC) shows a performance equivalent to that of a focused crawler (skyline system) with an average resource usage reduction of ≈30% across two domains viz., tourism and sports.
false
[]
[]
null
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lakew-etal-2017-fbks
https://aclanthology.org/2017.iwslt-1.5
FBK's Multilingual Neural Machine Translation System for IWSLT 2017
Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK's participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e. representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch↔German and Italian↔Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). We compare and show the results of the two multilingual models against baseline single language pair systems. Particularly, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. Furthermore, we investigate how pivoting (i.e. using a bridge/pivot language for inference in source→pivot→target translations) using a multilingual model can be an alternative to enable zero-shot translation in a low resource setting.
false
[]
[]
null
null
null
This work has been partially supported by the EC-funded projects ModernMT (H2020 grant agreement no. 645487) and QT21 (H2020 grant agreement no. 645452). The Titan Xp used for this research was donated by the NVIDIA Corporation. This work was also supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1 and by a donation of Azure credits by Microsoft.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hasan-ng-2014-taking
https://aclanthology.org/D14-1083
Why are You Taking this Stance? Identifying and Classifying Reasons in Ideological Debates
Recent years have seen a surge of interest in stance classification in online debates. Oftentimes, however, it is important to determine not only the stance expressed by an author in her debate posts, but also the reasons behind her supporting or opposing the issue under debate. We therefore examine the new task of reason classification in this paper. Given the close interplay between stance classification and reason classification, we design computational models for examining how automatically computed stance information can be profitably exploited for reason classification. Experiments on our reason-annotated corpus of ideological debate posts from four domains demonstrate that sophisticated models of stances and reasons can indeed yield more accurate reason and stance classification results than their simpler counterparts.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of this paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
glavas-etal-2012-experiments
https://aclanthology.org/W12-0501
Experiments on Hybrid Corpus-Based Sentiment Lexicon Acquisition
Numerous sentiment analysis applications make usage of a sentiment lexicon. In this paper we present experiments on hybrid sentiment lexicon acquisition. The approach is corpus-based and thus suitable for languages lacking general dictionarybased resources. The approach is a hybrid two-step process that combines semisupervised graph-based algorithms and supervised models. We evaluate the performance on three tasks that capture different aspects of a sentiment lexicon: polarity ranking task, polarity regression task, and sentiment classification task. Extensive evaluation shows that the results are comparable to those of a well-known sentiment lexicon SentiWordNet on the polarity ranking task. On the sentiment classification task, the results are also comparable to SentiWordNet when restricted to monosentimous (all senses carry the same sentiment) words. This is satisfactory, given the absence of explicit semantic relations between words in the corpus.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their useful comments. This work has been supported by the Ministry of Science, Education and Sports, Republic of Croatia under the Grant 036-1300646-1986.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
trieu-etal-2016-dealing
https://aclanthology.org/Y16-2024
Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity
Sentence alignment plays an essential role in building bilingual corpora, which are valuable resources for many applications like statistical machine translation. In various approaches to sentence alignment, length-and-word-based methods, which are based on sentence length and word correspondences, have been shown to be the most effective. Nevertheless, a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other length-and-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with the OOV problem in sentence alignment.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ernestus-etal-2014-nijmegen
http://www.lrec-conf.org/proceedings/lrec2014/pdf/134_Paper.pdf
The Nijmegen Corpus of Casual Czech
This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
false
[]
[]
null
null
null
Our thanks to the staff at the Phonetic Institute at Charles University in Prague for their help during the recordings of the corpus in Prague. Our special thanks to Lou Boves for valuable discussions. This work was funded by a European Young Investigator Award given to the first author. In addition, it was supported by two Czech grants
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nakano-etal-2022-pseudo
https://aclanthology.org/2022.dialdoc-1.4
Pseudo Ambiguous and Clarifying Questions Based on Sentence Structures Toward Clarifying Question Answering System
Question answering (QA) with disambiguation questions is essential for practical QA systems because user questions often do not contain enough information to find their answers. We call this task clarifying question answering, a task to find answers to ambiguous user questions by disambiguating their intents through interactions. There are two major problems in building a clarifying question answering system: data preparation of possible ambiguous questions and the generation of clarifying questions. In this paper, we tackle these problems by sentence generation methods using sentence structures. Ambiguous questions are generated by eliminating a part of a sentence considering the sentence structure. A clarifying question generation method based on a case frame dictionary and sentence structure is also proposed. Our experimental results verify that our pseudo ambiguous question generation successfully adds ambiguity to questions. Moreover, the proposed clarifying question generation recovers the performance drop by asking the user for missing information.
false
[]
[]
null
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
balusu-2012-complex
https://aclanthology.org/C12-3001
Complex Predicates in Telugu: A Computational Perspective
Complex predicates raise the question of how to encode them in computational lexicons. Their computational implementation in South Asian languages is in its infancy. This paper examines in detail the variety of complex predicates in Telugu revealing the syntactic process of their composition and the constraints on their formation. The framework used is First Phase Syntax (Ramchand 2008). In this lexical semantic approach that ties together the constraints on the meaning and the argument structure of complex predicates, each verb breaks down into 3 sub-event heads which determine the nature of the verb. Complex predicates are formed by one verb subsuming the sub-event heads of another verb, and this is constrained in principled ways. The data analysed and the constraints developed in the paper are of use to linguists working on computational solutions for Telugu and other languages, for design and development of predicate structure functions in linguistic processors.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
laban-etal-2020-summary
https://aclanthology.org/2020.acl-main.460
The Summary Loop: Learning to Write Abstractive Summaries Without Examples
This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint. It introduces a novel method that encourages the inclusion of key terms from the original document into the summary: key terms are masked out of the original document and must be filled in by a coverage model using the current generated summary. A novel unsupervised training procedure leverages this coverage model along with a fluency model to generate and score summaries. When tested on popular news summarization datasets, the method outperforms previous unsupervised methods by more than 2 R-1 points, and approaches results of competitive supervised methods. Our model attains higher levels of abstraction with copied passages roughly two times shorter than prior work, and learns to compress and merge sentences without supervision.
false
[]
[]
null
null
null
We would like to thank Forrest Huang, David Chan, Roshan Rao, Katie Stasaski and the ACL reviewers for their helpful comments. This work was supported by the first author's internship at Bloomberg, and a Bloomberg Data Science grant. We also gratefully acknowledge support received from an Amazon Web Services Machine Learning Research Award and an NVIDIA Corporation GPU grant.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
arnold-etal-1985-mul
https://aclanthology.org/1985.tmi-1.1
A MU1 View of the <C,A>,T Framework in EUROTRA
null
false
[]
[]
null
null
null
null
1985
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
feng-etal-2012-hierarchical
https://aclanthology.org/P12-1100
Hierarchical Chunk-to-String Translation
We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, to combine the merits of the two models. With the help of shallow parsing, our model learns rules consisting of words and chunks and meanwhile introduces syntax cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields the target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.
false
[]
[]
null
null
null
We would like to thank Trevor Cohn, Shujie Liu, Nan Duan, Lei Cui and Mo Yu for their help, and anonymous reviewers for their valuable comments and suggestions. This work was supported in part by EPSRC grant EP/I034750/1 and in part by High Technology R&D Program Project No. 2011AA01A207.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lindsey-etal-2012-phrase
https://aclanthology.org/D12-1020
A Phrase-Discovering Topic Model Using Hierarchical Pitman-Yor Processes
Topic models traditionally rely on the bag-of-words assumption. In data mining applications, this often results in end-users being presented with inscrutable lists of topical unigrams, single words inferred as representative of their topics. In this article, we present a hierarchical generative probabilistic model of topical phrases. The model simultaneously infers the location, length, and topic of phrases within a corpus and relaxes the bag-of-words assumption within phrases by using a hierarchy of Pitman-Yor processes. We use Markov chain Monte Carlo techniques for approximate inference in the model and perform slice sampling to learn its hyperparameters. We show via an experiment on human subjects that our model finds substantially better, more interpretable topical phrases than do competing models.
false
[]
[]
null
null
null
The first author is supported by an NSF Graduate Research Fellowship. The first and second authors began this project while working at J.D. Power & Associates. We are indebted to Michael Mozer, Matt Wilder, and Nicolas Nicolov for their advice.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2012-simple
https://aclanthology.org/W12-4508
Simple Maximum Entropy Models for Multilingual Coreference Resolution
This paper describes our system participating in the CoNLL-2012 shared task: Modeling Multilingual Unrestricted Coreference in Ontonotes. Maximum entropy models are used for our system as classifiers to determine the coreference relationship between every two mentions (usually noun phrases and pronouns) in each document. We exploit rich lexical, syntactic and semantic features for the system, and the final features are selected using a greedy forward and backward strategy from an initial feature set. Our system participated in the closed track for both English and Chinese languages.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ben-ari-etal-1988-translational
https://aclanthology.org/1988.tmi-1.15
Translational ambiguity rephrased
Presented are the special aspects of translation-oriented disambiguation, which differentiate it from conventional text-understanding-oriented disambiguation. Also presented are the necessity of interaction to cover the failure of automatic disambiguation, and the idea of disambiguation by rephrasing. The types of ambiguities to which rephrasing is applicable are defined, and the four stages of the rephrasing procedure are described for each type of ambiguity. The concept of an interactive disambiguation module, which is logically located between the parser and the transfer phase, is described. The function of this module is to bridge the gap between several possible trees and/or other ambiguities, and one well-defined tree that may be satisfactorily translated.
false
[]
[]
null
null
null
null
1988
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gladkova-drozd-2016-intrinsic
https://aclanthology.org/W16-2507
Intrinsic Evaluations of Word Embeddings: What Can We Do Better?
This paper presents an analysis of existing methods for the intrinsic evaluation of word embeddings. We show that the main methodological premise of such evaluations is "interpretability" of word embeddings: a "good" embedding produces results that make sense in terms of traditional linguistic categories. This approach is not only of limited practical use, but also fails to do justice to the strengths of distributional meaning representations. We argue for a shift from abstract ratings of word embedding "quality" to exploration of their strengths and weaknesses.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
elita-birladeanu-2005-first
https://aclanthology.org/2005.mtsummit-swtmt.5
A First Step in Integrating an EBMT into the Semantic Web
In this paper we present the actions we made to prepare an EBMT system to be integrated into the Semantic Web. We also described briefly the developed EBMT tool for translators.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
krishnakumaran-zhu-2007-hunting
https://aclanthology.org/W07-0103
Hunting Elusive Metaphors Using Lexical Resources.
In this paper we propose algorithms to automatically classify sentences into metaphoric or normal usages. Our algorithms only need the WordNet and bigram counts, and does not require training. We present empirical results on a test set derived from the Master Metaphor List. We also discuss issues that make classification of metaphors a tough problem in general.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cho-2017-wh
https://aclanthology.org/Y17-1044
Wh-island Effects in Korean Scrambling Constructions
This study examines the wh-island effects in Korean. Since wh-in-situ languages like Korean allow wh-scrambling, the absence of wh-island constraints is accepted. However, it is controversial whether wh-clauses can take a matrix scope or not. In order to clarify the issue of wh-islands in Korean, the current paper designed an offline experiment with three factors: island or non-island, scrambling or non-scrambling, and embedded scope or matrix scope. The following acceptability judgment task revealed that wh-PF-island does not exist but wh-LF-island plays a role in Korean. Among results of wh-LF-island, it was observed that a majority of speakers prefer the matrix scope reading.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kawamori-etal-1996-phonological
https://aclanthology.org/Y96-1031
A Phonological Study on Japanese Discourse Markers
A spontaneously spoken, natural Japanese discourse contains many instances of the so-called redundant interjections and of backchannel utterances. These expressions have not hitherto received much attention and few systematic analyses have been made. We show that these utterances are characterizable as discourse markers, and that they comprise a well-defined category, characterizable in a regular manner by their phonologico-prosodic properties. Our report is based on an experiment involving spontaneously spoken conversations, recorded in a laboratory environment and analyzed using digital devices. Prosodic patterns of discourse markers occurring in the recorded conversations have been analyzed. Several pitch patterns have been found that characterize the most frequently used Japanese discourse markers.
false
[]
[]
null
null
null
null
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
pyysalo-etal-2009-static
https://aclanthology.org/W09-1301
Static Relations: a Piece in the Biomedical Information Extraction Puzzle
We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.
true
[]
[]
Good Health and Well-Being
null
null
Discussions with members of the BioInfer group were central for developing many of the ideas presented here. We are grateful for the efforts of Maki Niihori in producing supporting annotation applied in this work. This work was partially supported by Grant-in-Aid for Specially Promoted Research (Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan), and Genome Network Project (MEXT, Japan).
2009
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
popovic-etal-2020-neural
https://aclanthology.org/2020.vardial-1.10
Neural Machine Translation for translating into Croatian and Serbian
In this work, we systematically investigate different setups for training of neural machine translation (NMT) systems for translation into Croatian and Serbian, two closely related South Slavic languages. We explore English and German as source languages, different sizes and types of training corpora, as well as bilingual and multilingual systems. We also explore translation of English IMDb user movie reviews, a domain/genre where only monolingual data are available. First, our results confirm that multilingual systems with joint target languages perform better. Furthermore, translation performance from English is much better than from German, partly because German is morphologically more complex and partly because the corpus consists mostly of parallel human translations instead of original text and its human translation. The translation from German should be further investigated systematically. For translating user reviews, creating synthetic in-domain parallel data through back- and forward-translation and adding them to a small out-of-domain parallel corpus can yield performance comparable with a system trained on a full out-of-domain corpus. However, it is still not clear what is the optimal size of synthetic in-domain data, especially for forward-translated data where the target language is machine translated. More detailed research including manual evaluation and analysis is needed in this direction.
false
[]
[]
null
null
null
The ADAPT SFI Centre for Digital Media Technology is funded by Science Foundation Ireland through the SFI Research Centres Programme and is co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106. This research was partly funded by financial support of the European Association for Machine Translation (EAMT) under its programme "2019 Sponsorship of Activities".
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kate-mooney-2007-semi
https://aclanthology.org/N07-2021
Semi-Supervised Learning for Semantic Parsing using Support Vector Machines
We present a method for utilizing unannotated sentences to improve a semantic parser which maps natural language (NL) sentences into their formal meaning representations (MRs). Given NL sentences annotated with their MRs, the initial supervised semantic parser learns the mapping by training Support Vector Machine (SVM) classifiers for every production in the MR grammar. Our new method applies the learned semantic parser to the unannotated sentences and collects unlabeled examples which are then used to retrain the classifiers using a variant of transductive SVMs. Experimental results show the improvements obtained over the purely supervised parser, particularly when the annotated training set is small.
false
[]
[]
null
null
null
This research was supported by a Google research grant. The experiments were run on the Mastodon cluster provided by NSF grant EIA-0303609.
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mitchell-etal-2013-community
https://aclanthology.org/2013.mtsummit-wptp.5
Community-based post-editing of machine-translated content: monolingual vs. bilingual
We carried out a machine-translation post-editing pilot study with users of an IT support forum community. For both language pairs (English to German, English to French), 4 native speakers for each language were recruited. They performed monolingual and bilingual post-editing tasks on machine-translated forum content. The post-edited content was evaluated using human evaluation (fluency, comprehensibility, fidelity). We found that monolingual post-editing can lead to improved fluency and comprehensibility scores similar to those achieved through bilingual post-editing, while we found that fidelity improved considerably more for the bilingual setup. Furthermore, the performance across post-editors varied greatly and it was found that some post-editors are able to produce better quality in a monolingual setup than others.
false
[]
[]
null
null
null
This work is supported by the European Commission's Seventh Framework Programme (Grant 288769). The authors would like to thank Dr. Pratyush Banerjee for contributing the building of the clusters to group similar posts together for this post-editing study.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false

Dataset Card for NLP4SGPapers

Dataset Summary

NLP4SGPapers is a scientific dataset of ACL Anthology papers with three associated tasks that can help identify NLP4SG (NLP for Social Good) papers.

Languages

The language in the dataset is English.

Dataset Structure

Data Instances

Each instance is an annotated paper with its title, abstract, publication year, and the additional metadata fields listed below.

Data Fields

  • ID: Paper ID in ACL Anthology
  • url: URL where the paper is available
  • title: Title of the paper
  • abstract: Abstract
  • label_nlp4sg: Whether the paper is an NLP4SG paper or not. For more info on the criteria, check our paper
  • task: List of tasks (Only available for the test set and for SG papers)
  • method: List of methods (Only available for the test set and for SG papers)
  • goal1: First goal addressed by the paper, in string format (e.g., "Good Health and Well-Being")
  • goal2: Second goal addressed by the paper, if any, in string format
  • goal3: Third goal addressed by the paper, if any, in string format
  • acknowledgments: Acknowledgments section of the paper, when available
  • year: Year of publication
  • sdg1 to sdg17: Boolean values indicating whether the paper addresses the corresponding United Nations Sustainable Development Goal (see the sketch after this list).
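
The boolean SDG columns are the part of the schema most often consumed programmatically. Below is a minimal sketch, assuming each record is exposed as a plain Python dict with the field names above; the helper active_sdgs and the abridged example record are illustrative only, not part of the dataset or its tooling.

```python
# Minimal sketch: collect the UN SDG flags (sdg1..sdg17) that are set on one
# record. Field names follow the Data Fields list above; active_sdgs is a
# hypothetical helper, not shipped with the dataset.
def active_sdgs(record: dict) -> list[int]:
    """Return the SDG numbers whose boolean flag is True for this record."""
    return [i for i in range(1, 18) if record.get(f"sdg{i}")]


# Example record, abridged from the rows shown earlier on this page.
paper = {
    "ID": "pyysalo-etal-2009-static",
    "title": "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
    "label_nlp4sg": True,
    "goal1": "Good Health and Well-Being",
    **{f"sdg{i}": (i == 3) for i in range(1, 18)},  # only sdg3 is True here
}

print(active_sdgs(paper))  # -> [3]
```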

Data Splits

NLP4SGPapers contains train, test and validation splits.
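
As a quick start, the sketch below loads the dataset with the Hugging Face datasets library and reads one test example. The hub repository ID "user/NLP4SGPapers" is a placeholder, not the confirmed path; substitute the ID under which the dataset is actually published.

```python
# Minimal loading sketch using the `datasets` library.
# "user/NLP4SGPapers" is a hypothetical hub ID -- replace with the real one.
from datasets import load_dataset

ds = load_dataset("user/NLP4SGPapers")
print(ds)  # expected: a DatasetDict with "train", "validation" and "test" splits

example = ds["test"][0]
print(example["title"], example["year"], example["label_nlp4sg"])
```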

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Information about the data collection can be found in the appendix of [our paper].

Personal and Sensitive Information

The NLP4SGPapers dataset contains only published papers and their metadata, so it does not raise privacy concerns.

Considerations for Using the Data

Social Impact of Dataset

The intended use of this work is to help create an overview of the NLP4SG research landscape.

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

The NLP4SGPapers dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Citation Information

