columns: sentence1 (string, length 16-446), sentence2 (string, length 14-436)
‘ relevance for satisfaction ’ , ‘ contrastive weight ’ and certain adverbials , which work to affect polarity in a more subtle but crucial manner , as evidenced also by the statistical analysis .
we argue that relevance for satisfaction , contrastive weight clues , and certain adverbials work to affect the polarity , as evidenced by the statistical analysis .
we also trained 5-gram language models using kenlm .
and for language modeling , we used kenlm to build a 5-gram language model .
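A minimal sketch of querying such a KenLM model from Python; the ARPA file name is a placeholder, and the model itself would first be built with KenLM's lmplz tool (e.g. lmplz -o 5 for a 5-gram model):

    import kenlm

    # "lm.arpa" stands in for a 5-gram model built with kenlm's lmplz
    model = kenlm.Model("lm.arpa")

    # log10 probability of a tokenized sentence, with sentence-boundary markers
    print(model.score("this is a test", bos=True, eos=True))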
for both syntactic and semantic chunking , we used tinysvm along with yamcha .
in this paper , we use the svm-based chunking tool yamcha .
and its position is decided by the difference vector between tail and head entity .
the point coordinate is the difference vector between tail and head entity , which should be near to the centre .
mintz et al ( 2009 ) propose distant supervision to automatically generate training data via aligning kbs and texts .
mintz et al ( 2009 ) generate training data automatically by aligning a kb with plain text .
the language model is trained on the target side of the parallel training corpus using srilm .
n-gram language models are trained over the target side of the training data , using srilm with modified kneser-ney discounting .
rentoumi et al suggest an approach to use word senses to detect sentence level polarity using graph-based similarity .
rentoumi et al suggest using word senses to detect sentence level polarity of news headlines .
in this paper , we present an algorithm that transforms an lcfrs into a strongly equivalent form in which all productions have rank 2 .
the algorithm introduced in this paper can be used to transform an lcfrs into an equivalent form with rank 2 .
papineni et al show that expanding the number of references reduces the gap between automatic and human evaluation .
papineni et al addressed the evaluation problem by introducing an automatic scoring metric , called bleu , which allowed the automatic calculation of translation quality .
representation choice has a minor influence on chunking performance .
the data representation choice might influence the performance of chunking systems .
‘ row-less ’ models accurately predict relations on unseen entity pairs and types on unseen entities .
more importantly , aggregation models predict on unseen row entries without much loss in accuracy .
experiments show that our methods produce more accurate reordering models , as shown by an improvement over the regular msd models .
experiments show that our models perform better than the distance-based model and the regular msd model .
binarized trees are then transformed into right-corner trees using transform rules similar to those described by johnson .
binarized trees are then transformed into right-corner trees using transform rules similar to those described by johnson .
our expert did best with examples picked by uncertainty selection , while our non-expert did best with random selection aided by machine label suggestions .
our expert performed best with uncertainty selection , but gained little from suggestions .
for the source side we use the pos tags from stanford corenlp mapped to universal pos tags .
for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp .
the data consists of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus .
the data consist of sections of the wall street journal part of the penn treebank , with information on predicate-argument structures extracted from the propbank corpus .
for the translation from german into english , german compound words were split using the frequency-based method described in .
in order to reduce the source vocabulary size for translation , the german text was preprocessed by splitting german compound words with the frequency-based method described in .
we substitute our language model and use mert to optimize the bleu score .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training .
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric .
training time was determined using early stopping .
training duration was determined using early stopping .
the systems were tuned using a small extracted parallel dataset with minimum error rate training and then tested with different test sets .
finally , the trained system was tuned with minimum error rate training to learn the weights of different parameters of the model .
relation extraction is a challenging task in natural language processing .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
this process is differentiable and the model can be trained efficiently end-to-end , while inducing structural information .
this process is differentiable , so the model can be trained end-to-end and learn structural information without relying on a parser .
in this paper , we presented a generalized answer selection framework which was applied to chinese and japanese .
in this paper , we describe a probabilistic answer ranking framework for multiple languages .
stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is ‘ in favour ’ , ‘ against ’ , or ‘ neutral ’ .
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e. whether a piece of text is in favour of “ abortion ” , neutral , or against .
applied techniques include building deep semantic representations , application of categories of patterns underlying a formal reconstruction , and using pragmatically-motivated and empirically justified preferences .
applied techniques include building deep semantic representations , application of patterns underlying formal reconstruction , and using pragmatically-motivated and empirically justified preferences .
random forests is an ensemble learning method for classification and regression .
random forest is an ensemble method that learns many classification trees and predicts an aggregation of their result .
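A minimal scikit-learn sketch of the idea in the second sentence, many trees whose votes are aggregated; the data here is synthetic:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X = np.random.rand(100, 4)         # placeholder feature matrix
    y = np.random.randint(0, 2, 100)   # placeholder binary labels

    # 100 trees; predict() aggregates the individual trees' votes
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, y)
    print(clf.predict(X[:5]))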
huang et al presented a new neural network architecture which incorporated both local and global document context , and offered an impressive result .
huang et al presented an rnn model that uses document-level context information to construct more accurate word representations .
the word embeddings used in our experiments are learned with the word2vec tool .
the word embeddings used in our experiments are learned with the word2vec tool , introduced by .
in this paper , we address the problem of identifying implicit relations in text .
in this paper we presented a new methodology to identify relations between entities in text .
in this paper , we propose a new measure of word association based on the statistical significance of the observed span .
in this paper , we introduced a new measure called csr for word-association based on statistical significance of lexical co-occurrences .
however , when sufficient training data is not available , generative models are known to perform better than discriminative models .
the training set is very small , and it is a known fact that generative models tend to work better for small datasets and discriminative models tend to work better for larger datasets .
svms have proven to be an effective means for text categorization as they are capable of robustly dealing with high-dimensional , sparse feature spaces .
support vector machines have been shown to outperform other existing methods in text categorization .
however , the high level of computational cost has prevented the use of clustering .
however , the clustering algorithms , especially the em-based algorithms , are computationally expensive .
in section 4 , we verify our model 's superiority over the others .
in the next section , we will experimentally verify svmv 's superiority .
transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word ’ s phonological equivalent .
transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language .
we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser .
we extract the corresponding feature from the output of the stanford parser .
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .
in this paper , we develop an effective approach to clean the bilingual data .
to this end , we propose an unsupervised approach to clean the bilingual data .
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .
semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form .
as a proof of concept , we demonstrate how our knowledge graph can be used to solve complex questions .
as a proof of concept , we demonstrate the application of our graph for arithmetic question-answering .
all the language models are built with the sri language modeling toolkit .
the srilm toolkit is used to build these language models .
in this research , we use the word vectors pre-trained on the google news dataset by the word2vec algorithm .
for the cluster-based method , we use word2vec , which provides the word vectors trained on the google news corpus .
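Loading those pre-trained Google News vectors is typically done with gensim; the file name below is the standard name of the public 300-dimensional release, assumed here:

    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)
    print(vectors.most_similar("language", topn=5))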
we use the selectfrommodel feature selection method as implemented in scikit-learn .
we use a set of 318 english function words from the scikit-learn package .
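A minimal sketch of scikit-learn's SelectFromModel as named in the first sentence; the estimator and the synthetic data are illustrative choices:

    import numpy as np
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(50, 10)        # placeholder feature matrix
    y = np.random.randint(0, 2, 50)   # placeholder labels

    # keeps only features whose fitted coefficient magnitudes pass the threshold
    selector = SelectFromModel(LogisticRegression(penalty="l2"))
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape)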
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .
the model can be formalized as a synchronous context-free grammar .
the hierarchical model is built on a weighted synchronous context-free grammar .
the attention strategies have been widely used in machine translation and question answering .
the attention mechanism was proposed by in machine translation .
previous work has shown that off-the-shelf nlp tools can perform poorly on microblogs .
to reduce this effect , attempts have been made to adapt nlp tools to microblog data .
stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target .
stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target .
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) .
relation extraction is a core task in information extraction and natural language understanding .
a context-free grammar $G$ is a 4-tuple $(\Sigma, N, S, P)$ , where $\Sigma$ and $N$ are two finite disjoint sets of terminals and nonterminals , respectively , $S \in N$ is the start symbol , and $P$ is a finite set of rules , each of the form $A \to \alpha$ , where $A \in N$ and $\alpha \in (\Sigma \cup N)^{*}$ .
a context-free grammar $G$ is a 4-tuple $(\Sigma, N, S, P)$ , where $\Sigma$ and $N$ are two finite disjoint sets of terminals and nonterminals , respectively , $S \in N$ is the start symbol , and $P$ is a finite set of rules .
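For concreteness, a standard toy instance of the definition above (an illustration, not drawn from either sentence):

    G = \bigl( \{a, b\},\ \{S\},\ S,\ \{\, S \to aSb,\ S \to \varepsilon \,\} \bigr)

which generates the language $\{ a^n b^n \mid n \ge 0 \}$ .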
from a theoretical point of view , our findings demonstrate the feasibility of cue phrase disambiguation in both text and speech .
on a practical level , this association permits the application of our findings to the identification and appropriate generation of cue phrases in synthetic speech .
furthermore , we introduce the attention mechanism that encourages the model to focus on the important information .
in addition , we add an attention mechanism to make the seq2seq baseline stronger .
modeling language as uniform loses these distinctions , and thus causes performance drops .
modeling language as uniform loses these distinctions , and thus causes performance drops .
we also show how mbr decoding can be used to incorporate syntactic structure into a statistical mt system .
we have shown in such a situation how mbr decoding can be applied to the mt system .
coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities .
coreference resolution is the task of identifying all mentions which refer to the same entity in a document .
for one label , the predictions-as-features methods can model dependencies between earlier labels and the current label , but they can ’ t model dependencies between the current label and later labels .
when training a classifier for one label , predictions-as-features methods can model dependencies between earlier labels and the current label , but they can ’ t model dependencies between the current label and later labels .
our parser produces a full syntactic parse of any sentence , while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser 's predicate vocabulary .
our parser produces a full syntactic parse of every sentence , and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser 's predicate vocabulary .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
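As a reminder of what the (modified) Kneser-Ney smoothing named in both sentences computes, in its standard interpolated textbook form (not drawn from either sentence):

    P_{\mathrm{KN}}(w_i \mid w_{i-n+1}^{i-1})
      = \frac{\max\bigl(c(w_{i-n+1}^{i}) - D,\, 0\bigr)}{c(w_{i-n+1}^{i-1})}
      + \lambda(w_{i-n+1}^{i-1})\, P_{\mathrm{KN}}(w_i \mid w_{i-n+2}^{i-1})

Here $D$ is an absolute discount and $\lambda(\cdot)$ redistributes the discounted mass over the lower-order distribution; the modified variant uses three discounts $D_1, D_2, D_{3+}$ chosen by the n-gram's count.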
for simplicity , we use the well-known conditional random fields for sequential labeling .
we use conditional random fields for sequence labeling as described in .
we used 4-gram language models , trained using kenlm .
we trained an english 5-gram language model using kenlm .
a story is usually viewed as a sequence of events based on information extraction .
a narrative event chain is a partially ordered set of events related by a common protagonist .
this idea was formalised by blum and mitchell in their presentation of co-training .
the co-training approach was first introduced by blum and mitchell .
the log-linear parameter weights are tuned with mert on the development set .
the model parameters are trained using minimum error-rate training .
it has been shown that images from google yield higher quality representations than comparable resources such as flickr and are competitive with hand-crafted datasets .
images from google have been shown to yield representations that are competitive in quality compared to alternative resources .
we create mwes with word2vec skip-gram and estimate w with scikit-learn .
we implement classification models using keras and scikit-learn .
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
szarvas extends the work of medlock and briscoe by performing feature selection , using bi-and trigrams and exploiting external dictionaries .
szarvas extended the methodology of medlock and briscoe to use n-gram features and a semi-supervised selection of the keyword features .
chinese is a language without natural word delimiters .
besides , chinese is a topic-prominent language , in which the subject is usually covert and the usage of words is relatively flexible .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
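A minimal sketch of computing BLEU as referenced in both sentences, here with the sacrebleu package; the hypothesis and reference are toy strings:

    import sacrebleu

    hypotheses = ["the cat sat on the mat"]
    references = [["the cat is sitting on the mat"]]  # one reference stream

    # case-sensitive corpus bleu by default; lowercase=True would give
    # the case-insensitive variant
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(bleu.score)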
we propose an algorithm to assess the quality of forum posts automatically .
we proposed an algorithm that has been shown to be able to assess the quality of forum posts .
design of the classifier is based on three important guidelines , namely , adopting linguistically motivated features , using all available resources , and easy integration into a parsing model .
the design of the classifier is based on three important guidelines , namely , adopting linguistically motivated features , using all available resources , and easy integration into a parsing model .
chiang shows significant improvement by keeping the strengths of phrases , while incorporating syntax into smt .
chiang shows significant improvement by keeping the strengths of phrases while incorporating syntax into statistical translation .
to solve the vanishing gradient problem of traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture .
long short-term memory was introduced by hochreiter and schmidhuber to overcome the issue of vanishing gradients in the vanilla recurrent neural networks .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
in one experiment , we cut the gap between unsupervised and supervised performance .
in one experiment , we cut the gap between unsupervised and supervised performance by nearly two thirds .
for the embeddings trained on the stack overflow corpus , we use the word2vec implementation of the gensim toolkit .
we use the word2vec toolkit to pre-train the character embeddings on the chinese wikipedia corpus .
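A minimal sketch of training such embeddings with gensim's word2vec implementation; the corpus file and hyperparameters are illustrative, not values from the sentences above:

    from gensim.models import Word2Vec

    # corpus.txt is a hypothetical one-tokenized-sentence-per-line file
    sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]

    # sg=1 selects the skip-gram architecture (gensim >= 4 uses vector_size)
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1)
    model.wv.save_word2vec_format("embeddings.txt")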
word segmentation is a fundamental task for chinese language processing .
word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese .
relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .
relation extraction is a core task in information extraction and natural language understanding .
the n-gram models are created using the srilm toolkit with good-turing smoothing for both the chinese and english data .
all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing .
in the full-supervision setting of topic id , the lower-dimensional learned representations converge in performance to the raw representation .
in the full-supervision setting of topic id , the lower-dimensional learned representations converge in performance to the raw representation as the dimension increases .
as a baseline smt system , we use the hierarchical phrase-based translation with an efficient left-to-right generation originally proposed by chiang .
we used an in-house implementation of the hierarchical phrase-based decoder as described in chiang .
the short length and highly informal nature of tweets present a serious challenge for the automated extraction of such sentiments .
the short length and the highly informal nature of tweets render it very difficult to automatically detect such information .
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context .
we applied liblinear via its scikit-learn python interface to train the logistic regression model with l2 regularization .
we trained a linear log-loss model using stochastic gradient descent learning as implemented in the scikit learn library .
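A minimal sketch of the two scikit-learn setups these sentences describe, liblinear-backed logistic regression with l2 regularization and a log-loss linear model trained with SGD; the data is synthetic:

    import numpy as np
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    X = np.random.rand(60, 8)         # placeholder feature matrix
    y = np.random.randint(0, 2, 60)   # placeholder labels

    # l2-regularized logistic regression solved by liblinear (first sentence)
    LogisticRegression(penalty="l2", solver="liblinear").fit(X, y)

    # log-loss linear model trained with stochastic gradient descent
    # (second sentence); use loss="log" on scikit-learn < 1.1
    SGDClassifier(loss="log_loss").fit(X, y)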
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing .
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing .
this is done by effectively creating pseudo-tasks with the help of a relevance function .
this is done by effectively creating pseudo-tasks with the help of a relevance function .
language models are built using the sri-lm toolkit .
all language models were trained using the srilm toolkit .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
language models of order 5 have been built and interpolated with srilm and kenlm .
blitzer et al apply the structural correspondence learning algorithm to train a crossdomain sentiment classifier .
blitzer et al used the structural correspondence learning algorithm with mutual information .
on this problem , we identify two main strands directly relevant to our work .
within the large body of research on this problem , we identify two main strands directly relevant to our work .
we used the scikit-learn library for the svm model .
for all classifiers , we used the scikit-learn implementation .
the language model is a 5-gram with interpolation and kneser-ney smoothing .
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing .
we describe our participation in the semeval-2018 task .
in this paper we describe our participation at semeval-2018 task 3 .
ahmed et al proposed a unified framework to group temporally and topically related news articles into the same storylines in order to reveal the temporal evolution of events .
however , ahmed et al proposed a framework to group temporally and topically related news articles into the same story clusters in order to reveal the temporal evolution of stories .
optimization with regard to the bleu score is done using minimum error rate training as described by venugopal et al .
optimization with regard to the bleu score is done using minimum error rate training as described in venugopal et al .
framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm .
framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics .
discourse segmentation is the task of identifying coherent clusters of sentences and the points of transition between those groupings .
discourse segmentation is the process of decomposing discourse into elementary discourse units ( edus ) , which may be simple sentences or clauses in a complex sentence , and from which discourse trees are constructed .
all parameters are initialized using glorot initialization .
they are randomly initialized with xavier initialization .
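Both sentences name the same scheme, Glorot/Xavier initialization; a minimal PyTorch sketch with the defining bound in the comment:

    import torch.nn as nn

    layer = nn.Linear(128, 64)  # illustrative layer sizes

    # xavier/glorot uniform samples from U(-a, a) with
    # a = sqrt(6 / (fan_in + fan_out)), keeping activation variance stable
    nn.init.xavier_uniform_(layer.weight)
    nn.init.zeros_(layer.bias)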