columns: sentence1 (string, length 16-446), sentence2 (string, length 14-436)
turney uses the number of hits returned by a web search engine to calculate the pointwise mutual information between terms , as an indicator of synonymy .
turney defines a point wise mutual information measure using the number of hits returned by a web search engine to recognize synonyms .
word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context .
word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .
we use our system combination module which includes a language modeling tool , a mert process , and mbr decoding of its own .
we use our system combination module , which has its own language modeling tool , mert process , and mbr decoding .
translation performances are measured with case-insensitive bleu4 score .
translation quality is evaluated by case-insensitive bleu-4 metric .
methods perform the embedding task based solely on observed facts .
most of the existing methods perform this task based solely on observed facts .
we introduced three techniques for better constituent parsing of morphologically rich languages .
we introduce three techniques for improving constituent parsing for morphologically rich languages .
we use svm light to learn a linear-kernel classifier on pairwise examples in the training set .
we employ support vector machines to perform the classification .
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .
for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus .
based on ibm model 1 ( cite-p-12-1-0 ) , i . e . math-w-2-3-1-51 , namely , the target word math-w-2-3-1-63 is triggered by the source word math-w-2-3-1-70 .
in order to make full use of the dependency information , we assume that the target word math-w-10-1-0-72 is triggered by dependency edge of the corresponding source word math-w-10-1-0-83 .
we define a parsing algorithm for well-nested dependency structures of gap degree .
we have defined a parsing algorithm for well-nested dependency structures with bounded gap degree .
in this paper , the unitor system participating in the semeval-2013 sentiment analysis in twitter task .
in this paper , the unitor system participating in the semeval-2013 sentiment analysis in twitter task is presented .
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .
for language model scoring , we use the srilm toolkit training a 5-gram language model for english .
the srilm toolkit was used for training the language models using kneser-ney smoothing .
the language model is a 5-gram with interpolation and kneser-ney smoothing .
when labeled training data is available , we can use the maximum entropy principle to optimize the λ weights .
we use the mallet implementation of a maximum entropy classifier to construct our models .
wong and dras exploited probabilistic context-free grammar rules as features for native language identification .
wong and dras looked particularly at syntactic structure , in the form of production rules and parse reranking templates .
so we can estimate it more accurately via a semi-supervised or transductive extension .
in addition , we show that the regularizer can be applied naturally in the semi-supervised setting .
it has been found that a key factor that determines the effect of query expansion is the selection of appropriate expansion terms .
however , the effect of query expansion is strongly determined by the term relations used .
with hyperedge replacement grammars , our implementations outperform the best previous system by several orders of magnitude .
our implementations outperform bolinas , the previously best system , by several orders of magnitude .
this approach relies on word embeddings for the computation of semantic relatedness with word2vec .
the cosine similarity is based on a distributional model constructed with the word2vec tool and the french corpus frwac .
if the phrase generation is carried out , the nmt decoder generates a multi-word phrase and updates its decoding state .
during decoding , the nmt decoder enquires the phrase memory and properly generates phrase translations .
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .
information extraction ( ie ) is a fundamental technology for nlp .
word alignment is a natural language processing task that aims to specify the correspondence between words in two languages ( cite-p-19-1-0 ) .
word alignment is the task of identifying corresponding words in sentence pairs .
with a variety of manual features which are helpful in solving the problem that the correct answer can be easily found in the given document .
these manual features are helpful in solving the problem that the correct answer can be easily found in the given document .
continuous representations of words have been found to capture syntactic and semantic regularities in language .
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words .
the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentence-initial words uncapitalized .
a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data .
sentiment classification is the task of identifying the sentiment polarity of a given text , which is traditionally categorized as either positive or negative .
sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment .
we selected these three methods because they perform best on the widely used pp attachment evaluation set created by ratnaparkhi , reynar , and roukos .
in order to make a fair comparison , we evaluated our methods on the common benchmark dataset first used in ratnaparkhi , reynar , and roukos .
we used the phrase-based smt model , as implemented in the moses toolkit , to train an smt system translating from english to arabic .
for the machine translation framework , we used phrase-based smt with the moses toolkit as a decoder .
this is called open ( world ) classification .
this problem is called open ( world ) classification .
neural networks perform well for many small-scale classification tasks .
recently , convolutional neural networks are reported to perform well on a range of nlp tasks .
the target-side language models were estimated using the srilm toolkit .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .
however , this kind of theory encounters problems in dealing with indexicals .
unfortunately , we have seen that this kind of theory can not explain opaque indexicals .
for creating the word embeddings , we used the tool word2vec 1 .
for feature building , we use word2vec pre-trained word embeddings .
dense , low-dimensional , real-valued vector representations of words known as word embeddings have proven very useful for nlp tasks .
vectors of real values , also known as embeddings ) have been shown to be beneficial in many nlp applications .
we used the implementation of the scikit-learn 2 module .
we use a set of 318 english function words from the scikit-learn package .
bilingual lexicon is a crucial resource for cross-lingual applications of natural language processing including machine translation , and cross-lingual information retrieval .
parallel bilingual corpora are critical resources for statistical machine translation , and cross-lingual information retrieval .
the use of unsupervised word embeddings in various natural language processing tasks has received much attention .
distributed representations of text have been the target of much research in natural language processing .
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities .
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting .
chang et al stated that one reason is that the objective function of topic models does not always correlate well with human judgments .
however , chang et al have demonstrated that models with high perplexity do not necessarily generate semantically coherent topics in human perception .
advantage of our approach is that we alleviate the data sparseness problem without increasing the amount of bilingual corpus .
the advantage of this method is that it can alleviate the data sparseness problem if the amount of bilingual corpus is limited .
mikolov et al proposed vector representation of words with the help of negative sampling that improves both word vector quality and training speed .
mikolov et al presents a neural network-based architecture which learns a word representation by learning to predict its context words .
recognising textual entailment between two sentences was also addressed in which used lstms and a word-by-word neural attention mechanism on the snli corpus .
recognizing textual entailment between two sentences is also addressed by rocktäschel et al , using lstms and word-by-word neural attention mechanisms on the snli data set .
interestingly , park and cardie concluded on the worthlessness of word-based features , as long as hand-crafted linguistic features were used .
interestingly , park and cardie conclude on the worthlessness of word pair features , given the existence of such resources .
we train the parameters of the stages separately using adagrad with the perceptron loss function .
the parameters are optimized with adagrad under a cosine proximity objective function .
koo et al used the brown algorithm to learn word clusters from a large amount of unannotated data and defined a set of word cluster-based features for dependency parsing models .
in order to reduce the amount of annotated data to train a dependency parser , koo et al used word clusters computed from unlabelled data as features for training a parser .
we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
the weights for these features are optimized using mert .
the feature weights were tuned on the wmt newstest2008 development set using mert .
we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool .
we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data .
in this work , we implemented a simple supervised predominant-sense heuristic .
third , we use an mwe-aware predominant-sense heuristic for supersense tagging .
central to the skip-gram is a log linear model of word prediction .
our cdsm feature is based on word vectors derived using a skip-gram model .
wordnet is a human created database that defines how english words share meaning , similar to an extended thesaurus .
wordnet is a large semantic lexicon database of english words , where nouns , verbs , adjectives and adverbs are grouped into sets of cognitive synonyms .
in this paper we explore modelling the joint distribution of string pairs using a deep generative model and employing a discrete variational autoencoder ( vae ) .
in this paper we have introduced a generative model for jointly modelling pairs of sequences and evaluated its efficacy on the task of sentence compression .
fujita et al extended this work in implementing a discriminative parse selection model incorporating word sense information mapped onto upper-level ontologies of differing depths .
fujita et al extended this work by implementing a discriminative parse selection model , incorporating word sense information and achieved great improvements as well .
as a pivot language , we proposed an approach to estimate the parameters of the statistical word alignment model .
based on these two additional corpora and with l3 as the pivot language , we build a word alignment model for l1 and l2 .
the phrase-based-like submodels have been proved useful in phrase-based approaches to smt .
this is the strategy that is usually adopted in other phrase-based mt approaches .
in pustejovsky and pustejovsky and anick , i suggest that there is a system of relations that characterizes the semantics of nominals , very much like the argument structure of a verb .
in , i suggest that there is a system of relations that characterizes the semantics of nominals , very much like the argument structure of a verb .
that applied reinforcement learning to nlp has , to our knowledge , not shown that it improved results by reducing error propagation .
to our knowledge , this paper is the first to show experimentally that reinforcement learning can reduce error propagation in an nlp task .
the language model p is implemented as an n-gram model using the irstlm toolkit with kneser-ney smoothing .
the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing .
the bleu score is based on the geometric mean of n-gram precision .
bleu is a precision measure based on m-gram count vectors .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .
we follow the description of the naive bayes classifier given in mccallum and nigam .
note that we use the naive bayes multinomial classifier in weka for classification .
dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .
dependency parsing is a fundamental task for language processing which has been investigated for decades .
that the cost is an order of magnitude cheaper than professional translation .
the total cost is more than an order of magnitude lower than professional translation .
on the other hand , text mining builds mainly on theoretical and computational linguistics by data pre-processing .
on the other hand , text mining is mainly based on theoretical and computational linguistics by data preprocessing .
keyphrase extraction is the task of extracting a selection of phrases from a text document to concisely summarize its contents .
keyphrase extraction is a basic text mining procedure that can be used as a ground for other , more sophisticated text analysis methods .
our baseline is an in-house phrase-based statistical machine translation system very similar to moses .
to that end , we use the state-of-the-art phrase based statistical machine translation system moses .
in most cases directly correspond to specific linguistic phenomena .
in most cases our patterns correspond to linguistic phenomena .
we used the srilm toolkit and kneser-ney discounting for estimating 5-grams lms .
we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit .
we consider three automatic scoring metrics , namely meteor , nist , and bleu , which are all well-renowned evaluation metrics commonly used for mt evaluation .
we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained .
meng et al and popat et al also use the unlabeled parallel data to reduce the negative influence of the noisy and incorrect sentiment labels introduced by machine translation and knowledge transfer .
to solve the feature coverage problem with the em algorithm , meng et al leverage the unlabeled parallel data to learn unseen sentiment words .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
furthermore , we train a 5-gram language model using the sri language toolkit .
semeval 2016 task 5 ( cite-p-13-1-6 ) consists of three subtasks .
the semeval 2018 task 3 ( cite-p-12-3-0 ) consists of two subtasks .
we also use glove vectors to initialize the word embedding matrix in the caption embedding module .
we use the glove vectors of 300 dimension to represent the input words .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .
ganchev et al propose a posterior regularization framework for weakly supervised learning to derive a multi-view learning algorithm .
ganchev et al propose postcat which uses posterior regularization to enforce posterior agreement between the two models .
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
it has been shown that features from language models can be used to detect impairment in monolingual and bilingual children .
nlp-driven analysis of clinical language data has been used to assess language development , language impairment and cognitive status .
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .
to this end , we propose a much richer novel set of linguistically motivated cues for detecting entailment between verbs .
first , we suggest a novel set of entailment indicators that help to detect the likelihood of verb entailment .
the data collection methods used to compile the dataset provided in offenseval is described in zampieri et al .
the data collection methods used to compile the dataset used in offenseval are described in zampieri et al .
however , these models typically integrate only limited additional contextual information .
however , none of these models includes any contextual information beyond the neighbouring words .
we used the scikit-learn library the svm model .
we implemented the different aes models using scikit-learn .
as a countbased baseline , we use modified kneser-ney as implemented in kenlm .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
link grammar is closely related to dependency formalisms .
link grammar is a context-free lexicalized grammar without explicit constituents .
weeds et al evaluate various similarity measures based on 1000 frequent and 1000 infrequent words .
weeds et al evaluate various similarity measures based on 1000 frequent and 1000 infrequent target terms .
spatiotemporal signals are critical in advancing entity linking .
therefore , exploiting spatiotemporal signals is crucial to entity linking .
therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation .
dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them .
we used the svm implementation provided within scikit-learn .
we employed the machine learning tool of scikit-learn 3 , for training the classifier .
this goes beyond previous work on semantic parsing such as lu et al or zettlemoyer and collins which rely on unambiguous training data where every sentence is paired only with its meaning .
in the approach of zettlemoyer and collins , the training data consists of sentences paired with their meanings in lambda form .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we show that ccg-gtrc can actually be simulated by a ccg-std , proving the equivalence .
we have shown that ccg-gtrc as formulated above is weakly equivalent to ccg-std .
the conll-x and conll 2007 shared tasks focused on multilingual dependency parsing .
the conll shared tasks on coreference or on dependency parsing .
abe and zaki independently proposed an efficient method , rightmost-extension , to enumerate all subtrees from a given tree .
abe and zaki independently proposed an efficient method , rightmost-extension , for enumerating all subtrees from a given tree .
we use a pbsmt model built with the moses smt toolkit .
we used a standard pbmt system built using moses toolkit .
kilicoglu and bergler apply a linguistically motivated approach to the same classification task by using knowledge from existing lexical resources and incorporating syntactic patterns .
kilicoglu and bergler showed that manually identified syntactic patterns are effective in classifying sentences as speculative or not .
for each language pair , the source dataset is pos-tagged and parsed using the transition-based version of the mateparser , trained on the udt corpus with a beam size of 40 .
for each language pair , the source dataset is pos-tagged and parsed using the transition-based version of the mateparser with a beam of 40 , which was trained on the udt corpus .
and the present work , we introduced the framework of three-valued logic as a means of defining the semantics of a feature structure .
we will also show that the framework of three-valued logic that we present can be used as a basis for comparison of the different approaches .
we use the stanford ner tool to identify proper names in the source text .
we use the stanford ner and textpro to identify nes in english 4 and italian 5 , respectively .
we use the pre-trained glove vectors to initialize word embeddings .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .