columns : sentence1 ( string , lengths 16 – 446 ) and sentence2 ( string , lengths 14 – 436 )
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
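Several pairs in this set describe training n-gram language models with the SRILM toolkit and modified Kneser-Ney smoothing. As a minimal sketch, assuming SRILM's ngram-count binary is on the PATH and using hypothetical corpus.txt / model.lm paths, the standard invocation looks like this (wrapped in Python for consistency with the other examples):

```python
import subprocess

# Hypothetical paths: corpus.txt is the training text, model.lm the output.
# -order 5     : 5-gram model (use 4 for the 4-gram settings above)
# -kndiscount  : modified Kneser-Ney discounting
# -interpolate : interpolate lower-order estimates
subprocess.run(
    ["ngram-count", "-order", "5", "-text", "corpus.txt",
     "-lm", "model.lm", "-kndiscount", "-interpolate"],
    check=True,
)
```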
compared with n-gram models , syntactic models give overall better performance .
syntactic models give better performance compared with n-gram models , despite being trained with less data .
the rhetorical structure theory is a language independent theory based on the idea that a text can be segmented into elementary discourse units linked by means of nucleus-satellite or multinuclear rhetorical relations .
rhetorical structure theory is a well known text representation technique that represents the knowledge present in the text using semantic relations known as discourse relations .
we used smoothed bleu for benchmarking purposes .
therefore , we used bleu and rouge as automatic evaluation measures .
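For smoothed sentence-level BLEU and ROUGE, here is a hedged sketch using NLTK and the rouge_score package (one ROUGE implementation among several; the toy reference/hypothesis strings are purely illustrative):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the cat sat on the mat".split()
hypothesis = "the cat is on the mat".split()

# Smoothing avoids zero BLEU when a higher-order n-gram has no match.
smooth = SmoothingFunction().method1
bleu = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"smoothed BLEU: {bleu:.3f}")

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score("the cat sat on the mat", "the cat is on the mat"))
```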
we used the phrase-based translation system in moses 5 as a baseline smt system .
for training the translation model and for decoding we used the moses toolkit .
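A sketch of the usual Moses phrase-based pipeline: train-model.perl builds the translation model and moses decodes with the resulting moses.ini. All paths here (corpus.de/corpus.en, model.lm, work/) are hypothetical, and real runs typically require absolute paths plus a tuning step (e.g. mert-moses.pl, cf. the MERT pairs later in this set):

```python
import subprocess

# Hypothetical layout: corpus.de / corpus.en are the parallel training
# sides; model.lm is a pre-built target-side language model.
subprocess.run(
    ["train-model.perl", "-root-dir", "work",
     "-corpus", "corpus", "-f", "de", "-e", "en",
     "-alignment", "grow-diag-final-and",
     "-lm", "0:5:model.lm"],
    check=True,
)

# Decode with the resulting configuration (moses.ini).
with open("input.de") as src, open("output.en", "w") as out:
    subprocess.run(["moses", "-f", "work/model/moses.ini"],
                   stdin=src, stdout=out, check=True)
```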
in our paper , we use te to compute connectivity between nodes of the graph and apply the weighted minimum vertex cover ( w mvc ) algorithm on the graph .
in our paper , we use te to compute connectivity between nodes of the graph and apply the weighted minimum vertex cover ( w mvc ) algorithm on the graph to select the sentences for the summary .
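The weighted minimum vertex cover step can be illustrated with a simple greedy heuristic. This is a sketch of the general technique, not the exact algorithm of the cited work: WMVC is NP-hard, so practical systems use approximations like this coverage-per-cost rule.

```python
def weighted_mvc_greedy(edges, weights):
    """Greedy heuristic for weighted minimum vertex cover.

    edges   : iterable of (u, v) pairs (e.g. textual-entailment links)
    weights : dict mapping each node to a positive cost
    Repeatedly picks the node with the best coverage-per-cost ratio
    until every edge is covered.  A heuristic, not an exact solver.
    """
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        # degree of each node among the still-uncovered edges
        degree = {}
        for e in uncovered:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=lambda v: degree[v] / weights[v])
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

# toy graph: nodes are candidate sentences, node weight = cost
edges = [(1, 2), (2, 3), (3, 4), (1, 4)]
weights = {1: 1.0, 2: 0.5, 3: 2.0, 4: 1.0}
print(weighted_mvc_greedy(edges, weights))  # e.g. {2, 4}
```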
aspect extraction is a key task of opinion mining ( cite-p-15-1-14 ) .
aspect extraction is a central problem in sentiment analysis .
as reported by cite-p-8-1-4 , recent attempts that apply either complex linguistic reasoning or attention-based complex neural network architectures achieve up to 76 % accuracy on benchmark sets .
while this does not seem like a challenging task , many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65 % – 76 % accuracy on benchmark sets .
li et al replaced oovs with in-vocabulary words by semantic similarity to reduce the negative effect for words around the oovs .
li et al substituted oov words in training corpora with a similar in-vocabulary word as pre-and post-processing steps .
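The idea attributed to li et al, substituting OOV words with semantically similar in-vocabulary words, can be sketched with gensim embeddings. embeddings.bin and mt_vocab.txt are hypothetical resources, and this is an illustration of the idea rather than their exact procedure:

```python
from gensim.models import KeyedVectors

# Embeddings trained on a large corpus, and the (smaller) vocabulary
# actually known to the downstream MT system.
kv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)
mt_vocab = set(line.strip() for line in open("mt_vocab.txt"))

def replace_oov(tokens, topn=50):
    """Replace words unknown to the MT system by their nearest
    in-vocabulary neighbour in embedding space."""
    out = []
    for tok in tokens:
        if tok in mt_vocab or tok not in kv:
            out.append(tok)
            continue
        # first embedding neighbour that the MT system does know
        sub = next((w for w, _ in kv.most_similar(tok, topn=topn)
                    if w in mt_vocab), tok)
        out.append(sub)
    return out
```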
we tested our methods on the english penn treebank .
we used small portions of the penn wsj treebank for the experiments .
for instance , bengio et al present a neural probabilistic language model that uses the n-gram model to learn word embeddings .
bengio et al presented a neural network language model where word embeddings are simultaneously learned along with a language model .
examples of topic models include plsi and lda .
one of the most popular and well-known topic models is lda .
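A minimal LDA example with gensim (toy two-document corpus; num_topics and passes are illustrative settings, not values from the source):

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [["topic", "models", "discover", "latent", "themes"],
        ["lda", "is", "a", "generative", "topic", "model"]]

# Map tokens to integer ids, then build bag-of-words vectors.
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```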
lin and he propose a joint topic-sentiment model , but topic words and sentiment words are still not explicitly separated .
lin and he propose a method based on lda that explicitly deals with the interaction of topics and sentiments in text .
our baseline is a standard phrase-based smt system .
our direct system uses the phrase-based translation system .
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .
given the model parameters and a sentence pair math-w-2-14-1-11 , compute math-w-2-14-1-18 .
given the parameters of ibm model 3 , and a sentence pair math-w-5-1-0-21 , compute the probability math-w-5-1-0-30 .
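The math placeholders in the pair above are elided in the source. For orientation, the standard IBM Model 3 decomposition of the sentence-pair probability, omitting NULL generation and the combinatorial fertility factors, is roughly:

```latex
P(f \mid e) = \sum_{a} P(f, a \mid e), \qquad
P(f, a \mid e) \propto
  \prod_{i=1}^{l} n(\phi_i \mid e_i)
  \prod_{j=1}^{m} t(f_j \mid e_{a_j})\, d(j \mid a_j, l, m)
```

Here n, t and d are the fertility, translation and distortion tables, and l and m are the source and target sentence lengths; this notation is assumed for illustration, not taken from the source.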
kamp is a multiple-agent planning system designed around a noah-like hierarchical planner [ 10 ] .
kamp is a multiagent planning system that can be given a high-level description of an agent 's goals , and then produce a plan that includes the performance of both physical and linguistic actions by several agents that will achieve the agent 's goals .
the bilda model is a straightforward multilingual extension of the standard lda model .
we compare our graphbtm approach with the avitm and the lda model .
to that end , we take the classification algorithm outlined earlier in section 4 , and apply it to the switchboard corpus for both training and testing , replicating the work reported in webb et al .
first , to establish our baseline tagging performance , we take the classification algorithm outlined earlier in section 4 , and apply it to the switchboard corpus for both training and testing , replicating the work reported in webb et al .
we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit .
for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit .
can be evaluated by maximizing the pseudo-likelihood on a training corpus .
can be evaluated by maximising the pseudo-likelihood on a training corpus .
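The pseudo-likelihood objective mentioned in this pair replaces the intractable joint likelihood with a product of per-variable conditionals, so no global partition function needs to be computed. In symbols (notation assumed for illustration):

```latex
\hat{\theta} = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{j}
  \log p_{\theta}\big(x^{(n)}_{j} \mid x^{(n)}_{-j}\big)
```

Each variable x_j of training example x^(n) is conditioned on all remaining variables x_{-j}, which makes every term a cheap local normalization.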
relation extraction is a challenging task in natural language processing .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
entity linking ( el ) is the task of disambiguating mentions in text by associating them with entries in a predefined database of mentions ( persons , organizations , etc ) .
entity linking ( el ) is the task of automatically linking mentions of entities ( e.g . persons , locations , organizations ) in a text to their corresponding entry in a given knowledge base ( kb ) , such as wikipedia or freebase .
phrase reordering is a common problem when translating between two grammatically different languages .
phrase reordering is a challenge for statistical machine translation systems .
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .
semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) .
the reranking stage is performed using linear interpolation of these models .
in the reranking stage , we use a linear combination of these models .
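Linear interpolation in a reranking stage just scores each n-best candidate by a weighted sum of model scores; a toy sketch (weights and scores are made up):

```python
def interpolate(scores, lambdas):
    """Linearly combine model scores for one n-best candidate."""
    assert abs(sum(lambdas) - 1.0) < 1e-9
    return sum(l * s for l, s in zip(lambdas, scores))

# n-best list: each candidate scored by, say, a translation model and an LM
nbest = {"cand_a": [0.6, 0.2], "cand_b": [0.4, 0.7]}
lambdas = [0.5, 0.5]
best = max(nbest, key=lambda c: interpolate(nbest[c], lambdas))
print(best)  # cand_b (0.55 vs 0.40)
```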
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction is a challenging task in natural language processing .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
a 4-gram language model which was trained on the entire training corpus using srilm was used to generate responses in conjunction with the phrase-based translation model .
we used a phrase-based smt model as implemented in the moses toolkit .
we use phrase based moses with default options as the spe engine .
abbasi et al apply sentiment analysis techniques to identify and classify document-level opinions in text crawled from english and arabic web forums .
abbasi et al use a genetic algorithm for both english and arabic web forums sentiment detection on the document level .
for the classification task , we use pre-trained glove embedding vectors as lexical features .
the statistics for these datasets are summarized in the experimental settings ; we use glove vectors with 840b tokens as the pre-trained word embeddings .
cao et al explained topic models from the perspective of neural networks and proposed a neural topic model where the representation of words and documents are combined into a unified framework .
cao et al , 2015 ) proposed a novel neural topic model where the representation of words and documents are efficiently and naturally combined into a uniform framework .
the model weights are automatically tuned using minimum error rate training .
the feature weights are tuned with mert to maximize bleu-4 .
we created a data collection for research , development and evaluation of a method for automatically answering why-questions ( why-qa ) .
we created a data collection for research into why-questions and for development of a method for why-qa .
an idiom is a relatively frozen expression whose meaning can not be built compositionally from the meanings of its component words .
an idiom is a combination of words that has a figurative meaning which differs from its literal meaning .
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features .
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model .
sentiment analysis is a recent attempt to deal with evaluative aspects of text .
one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .
all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing .
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .
in a preprocessing step , we apply the coreference resolution module of stanford corenlp to the whole corpus .
first , we extract the named entities in the text using stanford corenlp .
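A hedged sketch of the named-entity step using stanza, the Stanford NLP group's Python package, rather than the Java CoreNLP pipeline these sentences refer to (the coreference module is not covered here; assumes the English models were fetched with stanza.download("en")):

```python
import stanza

# Tokenization + NER pipeline; processors can be extended as needed.
nlp = stanza.Pipeline("en", processors="tokenize,ner")
doc = nlp("Barack Obama was born in Hawaii.")
for ent in doc.ents:
    print(ent.text, ent.type)  # e.g. "Barack Obama" PERSON
```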
we evaluated the performance of the three pruning criteria in a real application of chinese text input .
we evaluated the performance of the three pruning criteria in a real application of chinese text input ( cite-p-15-1-2 ) through cer .
nlp researchers have the potential to significantly advance gun violence research .
these nlp tools have the potential to make a marked difference for gun violence researchers .
the backbone of our system is a character-based segmenter with the application of crf that provides a framework to use a large number of linguistic features .
the backbone of our system is a character-based segmenter with the application of conditional random fields .
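A toy sketch of a character-based CRF segmenter in the spirit of this pair, using sklearn_crfsuite (a different CRF implementation than the one in the source) with B/I labels marking word-initial characters:

```python
import sklearn_crfsuite

def char_features(sent, i):
    """Per-character features: identity plus immediate neighbours."""
    return {
        "char": sent[i],
        "prev": sent[i - 1] if i > 0 else "<s>",
        "next": sent[i + 1] if i < len(sent) - 1 else "</s>",
    }

# toy data: "我爱北京" segments as 我 / 爱 / 北京, hence B B B I
sents = ["我爱北京"]
labels = [["B", "B", "B", "I"]]
X = [[char_features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```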
but the development of cohesion-based unsupervised methods is an interesting possibility for future work .
the development of more comprehensive bayesian models for discourse structure seems an exciting direction for future research .
in this paper , we develop an approach based on recurrent neural networks .
we have explored a variety of neural network models in this paper to identify the most suitable model .
user : i want to prevent tom from reading my file .
user : so i have to remove a file to create a file ?
word embeddings are commonly estimated from large text corpora utilizing statistics concerning the co-occurrences of words .
word embedding models are aimed at learning vector representations of word meaning .
the candidate answer with the highest probability will be selected as the target .
the candidate with the highest probability was chosen as the target entity .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .
in the core task , we use 6 types of similarity measures , i.e. , string similarity , number similarity , knowledge-based similarity , corpus-based similarity , syntactic dependency similarity and machine translation similarity .
for core task , we collect 6 types of similarity measures , i.e. , string similarity , number similarity , knowledge-based similarity , corpus-based similarity , syntactic dependency similarity and machine translation similarity .
the restaurants dataset contains 3,710 english sentences from the reviews of ganu et al .
the restaurants dataset contains 3,710 english sentences from the restaurant reviews of ganu et al .
we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments .
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .
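Multinomial logistic regression in scikit-learn is the usual stand-in for a maximum-entropy classifier, as this pair notes. A minimal sketch with made-up training texts:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible film", "loved it", "awful plot"]
y = [1, 0, 1, 0]

# Bag-of-words features + logistic regression == a maxent classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, y)
print(clf.predict(["great plot"]))
```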
abstract meaning representation is a popular framework for annotating whole sentence meaning .
the abstract meaning representation is a semantic meaning representation language that is purposefully syntax-agnostic .
lstm and gru networks are known to be successful remedies to these problems .
the lstm architecture is proposed to address this problem .
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .
we used the sri language modeling toolkit to train lms on our training data for each ilr level .
main tasks include aspect extraction , opinion polarity identification and subjectivity analysis .
main tasks include aspect extraction , polarity identification and subjectivity analysis .
we ’ ve demonstrated that the benefits of unsupervised multilingual learning increase steadily with the number of available languages .
our experiments show that performance improves steadily as the number of languages increases .
probabilistic context-free grammars are commonly used in parsing and grammar induction systems .
probabilistic context-free grammars underlie most high-performance parsers in one way or another .
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .
coreference resolution is a well known clustering task in natural language processing .
since our dataset is not so large , we make use of pre-trained word embeddings , which are trained on a much larger corpus with word2vec toolkit .
due to their ability to capture syntactic and semantic information of words from large scale unlabeled texts , we pre-train the word embeddings from the given training dataset by word2vec toolkit .
the various smt systems are evaluated using the bleu score .
the various models developed are evaluated using bleu and nist .
coreference resolution is the task of determining when two textual mentions name the same individual .
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
gamon shows that svm with deep linguistic features can further improve the performance .
gamon shows that introducing deeper linguistic features into svm can help to improve the performance .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) .
twitter is a fantastic data resource for many tasks : measuring political ( o ’ connor et al. , 2010 ; tumasjan et al. , 2010 ) , and general sentiment ( cite-p-11-1-3 ) , studying linguistic variation ( cite-p-11-3-2 ) and detecting earthquakes ( cite-p-11-3-18 ) .
ccg is a linguistically-motivated categorial formalism for modeling a wide range of language phenomena .
ccg is a linguistic formalism that tightly couples syntax and semantics .
the system output is evaluated using the meteor and bleu scores computed against a single reference sentence .
to evaluate the full abstract generation system , the bleu score is computed with human abstracts as reference .
teufel and moens , 2002 ) introduced az and applied it first to computational linguistics papers .
teufel and moens introduced az and applied it to computational linguistics papers .
by including age or gender information , we consistently and significantly improve performance over demographic-agnostic models .
we find that demographic-aware models consistently outperform their agnostic counterparts in all tasks .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing .
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
potash et al use an encoder-decoder problem formulation by employing a pointer network based deep neural network architecture .
potash et al proposed a neural architecture based on a pointer network for jointly predicting types of argumentative components and identifying argumentative relations .
discourse parsing is a difficult , multifaceted problem involving the understanding and modeling of various semantic and pragmatic phenomena as well as understanding the structural properties that a discourse graph can have .
discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units .
statistical machine translation methods are well established in speech-to-speech translation systems as the main translation technique .
concept classification , as an alternative translation method , has been successfully integrated in speech-to-speech translators .
smith et al proposed a log-linear model for the context-based disambiguation of a morphological dictionary .
smith et al used a conditional random field to learn to disambiguate over sentence by modeling local contexts .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
we used the moses machine translation decoder , using the default features and decoding settings .
using appropriate word weighting functions is known to improve the performance of text categorization .
the use of word unigrams is a standard approach in text classification , and has also been successfully used to predict reading difficulty .
we consider a phrase-based translation model and a hierarchical translation model .
the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model .
to learn the user-dependent word embeddings for stance classification and visualization , we train the 50-dimensional word embeddings via glove .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
after harvesting axioms from textbooks , we also present an approach to parse the axiom mentions to horn clause rules .
in this paper , we present an approach to obtain axiomatic knowledge of geometry in the form of horn-clause rules from math textbooks .
to keep consistent , we initialize the embedding weight with pre-trained word embeddings .
we initialize the word embedding matrix with pre-trained glove embeddings .
we use pre-trained vectors from glove for word-level embeddings .
for input representation , we used glove word embeddings .
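Initializing an embedding matrix from pre-trained GloVe vectors typically means reading the plain-text release and filling rows for in-vocabulary words. A sketch with a hypothetical load_glove helper (OOV rows get small random values):

```python
import numpy as np

def load_glove(path, vocab, dim=300):
    """Build an embedding matrix for `vocab` from a GloVe text file
    (space-separated lines: word v1 ... vdim)."""
    rng = np.random.default_rng(0)
    emb = rng.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    index = {w: i for i, w in enumerate(vocab)}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in index:
                emb[index[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return emb

# hypothetical call; glove.840B.300d.txt is the 840b-token release
# emb = load_glove("glove.840B.300d.txt", vocab=["the", "cat"], dim=300)
```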
dagan and itai proposed an approach to wsd using monolingual corpora , a bilingual lexicon and a parser for the source language .
to overcome this problem , dagan and itai used a bilingual lexicon and statistical data from a monolingual corpus of the target language for disambiguation .
the stanford parser we used produced parse trees with minor errors in some sentences .
we used the stanford parser to generate the grammatical structure of sentences .
we show that the performance of such a classifier can be significantly improved by complementing it with a second-stage supervised classifier .
we show that a combination of both classifiers leads to significant improvements over using the unsupervised classifier alone .
we pre-train the 200-dimensional word embeddings on each dataset in with skipgram .
for training , we pre-train the word vector and set the dimension to be 200 with skipgram .
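Pre-training 200-dimensional skip-gram embeddings as described in this pair is a one-liner with gensim; sg=1 selects the skip-gram objective, and the toy sentences stand in for a real corpus (gensim >= 4 API):

```python
from gensim.models import Word2Vec

sentences = [["we", "pre-train", "word", "embeddings"],
             ["skip-gram", "predicts", "context", "words"]]

# vector_size=200 matches the 200-dimensional setting mentioned above.
model = Word2Vec(sentences, vector_size=200, sg=1,
                 window=5, min_count=1, epochs=10)
print(model.wv["word"][:5])
```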
sentiment analysis is a multi-faceted problem .
sentiment analysis is the natural language processing ( nlp ) task dealing with the detection and classification of sentiments in texts .
we employ the widely used , standard machine translation tool moses to train the phrase-based smt system .
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality .
tasks show that our method statistically significantly outperforms the baseline methods .
it also shows that our method significantly outperforms the baseline methods .
we use a set of 318 english function words from the scikit-learn package .
we used the svd implementation provided in the scikit-learn toolkit .
and finally , the baselines reported for resnik 's test set were higher than those for the all-words task .
third and finally , the baselines reported for resnik 's test set were higher than those for the all-words task .
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
for nb and svm , we used their implementation available in scikit-learn .
we employed the machine learning tool of scikit-learn 3 , for training the classifier .
mihalcea et al developed several corpus-based and knowledge-based word similarity measures and applied them to a paraphrase recognition task .
mihalcea et al proposed a method to measure the semantic similarity of words or short texts , considering both corpus-based and knowledge-based information .
we show that using a post-processing morphology generation model can improve translation .
therefore , we propose a novel combination of post-processing morphology prediction with morpheme-based translation .
we use 50 dimensional word embeddings , which are initialized by the 50 dimensional pre-trained word vectors 6 from glove , and updated in the training process .
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings .
we use the linear svm classifier from scikit-learn .
we implemented linear models with the scikit learn package .
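A linear SVM text classifier with scikit-learn, as in this pair, is conventionally a TF-IDF + LinearSVC pipeline; the toy texts and labels are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["buy cheap pills", "meeting at noon",
         "free money now", "see you soon"]
y = ["spam", "ham", "spam", "ham"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, y)
print(clf.predict(["free pills"]))
```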
a pattern is a sequence of conditions that must hold true for a sequence of terms .
a pattern is a phrasal construct of varying degrees of specificity .
employment of tree kernel-based methods indicates the critical role of the structural information in zero anaphora resolution and the necessity of tree kernel methods in modeling such structural information .
this indicates the critical role of the structural information in zero anaphora resolution and the necessity of tree kernel-based methods in modeling such structural information .
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance .
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) .
we present a clean , human-annotated subset of 1975 question-document-answer triples .
we present triviaqa , a new dataset of 650k question-document-evidence triples .