sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .
we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens .
we estimated lexical surprisal using trigram models trained on 1 million hindi sentences from emille corpus using the srilm toolkit .
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus .
that enables the straightforward integration of additional annotation at the word-level .
the new approach allows additional annotation at the word level .
in this paper we address the domain adaptation scenario without access to source data .
in this paper we address the problem of adapting classifiers trained on the source data and available as black boxes .
the use of unsupervised word embeddings in various natural language processing tasks has received much attention .
the use of neural-networks language models was originally introduced in and successfully applied to largescale speech recognition and machine translation tasks .
words or phrases still remain a challenge in statistical machine translation .
out-of-vocabulary ( oov ) words or phrases still remain a challenge in statistical machine translation .
zhou and xu use a bidirectional wordlevel lstm combined with a conditional random field for semantic role labeling .
liu et al focused on the sentence boundary detection task , by making use of conditional random fields .
blei and mcauliffe and ramage et al used document labels in supervised setting .
blei and mcauliffe and ramage et al used document label information in a supervised setting .
in section 3 , we describe our stemming methodology , followed by three types of evaluation experiments .
in section 3 , we describe our stemming methodology , followed by three types of evaluation experiments in section 4 .
for both attributes addressed in this paper , we use the same corpus , the 2009 icwsm spinn3r dataset , a publicly-available blog corpus which we also used in our earlier work on lexical formality .
for all the methods in this section , we use the same corpus , the icwsm spinn3r 2009 dataset , which has been used successfully in earlier work .
the language model used in our paraphraser and the clarke and lapata baseline system is a kneser-ney discounted 5-gram model estimated on the gigaword corpus using the srilm toolkit .
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit .
generative topic models widely used for ir include plsa and lda .
lda is the most popular unsupervised topic model .
experimental results have demonstrated the effectiveness of our approach .
the experimental results demonstrate the effectiveness of our approach .
this paper proposes a method of correcting errors in a treebank by using a synchronous tree .
this paper proposes a method of correcting errors in structural annotation .
we use the moses smt toolkit to test the augmented datasets .
we use the moses statistical mt toolkit to perform the translation .
and the alignment structures considered in previous works is that we can align multiple sentences in the text to the hypothesis .
in this work , we extended the word alignment formalism to align multiple sentences in the text to the hypothesis .
chambers and jurafsky proposed a narrative chain model based on scripts .
chambers and jurafsky extracted narrative event chains based on common protagonists .
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
the berkeley framenet project is an ongoing effort of building a semantic lexicon for english based on the theory of frame semantics .
the berkeley framenet project aims at creating a human and machine-readable lexical database of english , supported by corpus evidence annotated in terms of frame semantics .
in order to acquire syntactic rules , we parse the chinese sentence using the stanford parser with its default chinese grammar .
to extract part-of-speech tags , phrase structure trees , and typed dependencies , we use the stanford parser on both train and test sets .
yu and hatzivassiloglou used semantic orientation of words to identify polarity at sentence level .
yu and hatzivassiloglou use semanticallyoriented words for identification of polarity at the sentence level .
to construct a novel word-context matrix , which is further weighted and factorized using truncated svd to generate low-dimension word embedding vectors .
this is then used to create a word-context matrix from which row vectors can be used to measure word similarity .
in this paper , we focus on identification of independent mentions ( basic as well as composite ) .
however , in this paper we focus on identity type of relationships only .
theorists have long noted that verbs can be organized into classes based on their syntactic constructions and the events they express .
most theorists note that verbs can be organized into a hierarchy of verb classes based on the frames they admit .
we used chainer , a framework of neural networks , for implementing our architecture .
our implementation was done using the chainer 3 toolkit .
bunescu and pa艧ca presented a method of disambiguating ambiguous entities exploiting internal links in wikipedia as training examples .
bunescu and pasca defined a sematic relatedness by similarity measure using wikipedia categories .
various matching algorithms correlate with human judgments of helpfulness .
the evaluations of the tm fuzzy match algorithms use human judgments of helpfulness .
since chinese is the dominant language in our data set , a word-by-word statistical machine translation strategy ( cite-p-14-1-22 ) is adopted to translate english words into chinese .
besides , chinese is a topic-prominent language , the subject is usually covert and the usage of words is relatively flexible .
we connect nodes based on synonyms , hypernyms , and similar-to relations from wordnet .
we use wordnet to link re-lated words based on synonyms , hypernyms , and similar to relations .
the official training and test data comes from the national university of singapore corpus of learner english .
the training data is the nus corpus of learner english that provided by the national university of singapore .
in particular , we use the liblinear 3 package which has been shown to be efficient for text classification problems such as this .
in particular , we use the liblinear 7 svm package which has been shown to be efficient for text classification problems with large numbers of features and documents .
ccg is a lexicalized grammar formalism -- a lexicon assigns each word to one or more grammatical categories .
ccg is a lexicalized grammar formalism in which every constituent in a sentence is associated with a structured category that specifies its syntactic relationship to other constituents .
on wmt german→english , we outperform the best single system reported on matrix . statmt . org by 0 . 8 % .
on wmt german→english , we outperform the best single system reported on matrix.statmt.org by 0.8 % b leu absolute .
blanc is a link-based metric that adapts the rand index to coreference resolution evaluation .
this measure implements the rand index which has been originally developed to evaluate clustering methods .
semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowksi et al. , 2010 ) .
semantic parsing is the task of mapping natural language to a formal meaning representation .
that uncertainty reduction is the essence of collaborative bootstrapping , which includes both co-training and bilingual bootstrapping .
we point out that uncertainty reduction is an important factor for enhancing the performances of the classifiers in collaborative bootstrapping .
and we annotated a corpus of sentences according to this definition .
we produce a new corpus , annotated according to this definition .
twitter is a social platform which contains rich textual content .
twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products .
we report weighted f-measures on gold alignments specified by one annotator , for 144 and 110 sentences respectively .
in all cases we report weighted f-measures on the publicly available gold alignments .
to that end , we use the state-of-the-art phrase based statistical machine translation system moses .
in order to do so , we use the moses statistical machine translation toolkit .
it was implemented using multinomial naive bayes algorithm from scikit-learn .
the regression model was trained using the extremly randomized trees implementation of scikitlearn library .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
in this paper , we proposed a new neural network architecture , called an rnn encoder – decoder that is able to learn .
in this paper , we propose a novel neural network model called rnn encoder– decoder that consists of two recurrent neural networks ( rnn ) .
brockett et al treat error correction as a translation task , and solve it by using the noisy channel model .
brockett et al trained the translation model on a corpus where the errors are restricted to mass noun errors .
we used the scikit-learn library the svm model .
we used the svm implementation provided within scikit-learn .
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
zeng et al and dos santos et al respectively proposed a standard and a ranking-based cnn model based on the raw word sequences .
santos et al proposed a ranking cnn model , which is trained by a pairwise ranking loss function .
in this work , we propose to use context gates to control the contributions of source and target contexts .
our shouldmodel jointly controls the contributions from the source and target contexts .
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .
hatzivassiloglou and mckeown did the first work to tackle the problem for adjectives using a corpus .
hatzivassiloglou and mckeown were the first to explore automatically learning the polarity of words from corpora .
similarity between sentences is a central concept of text analysis , however previous studies about semantic similarities have mainly focused either on single word similarity or complete document similarity .
the similarity of sentences is a confidence score that reflects the relationship between the meanings of two sentences .
erk et al propose the exemplar-based model of selectional preferences , in turn based on erk .
erk et al also model selectional preferences using vector spaces .
for classification we have used liblinear , which approximates a linear svm .
we use liblinear 9 to solve the lr and svm classification problems .
in the following sections , we show the features used in our experiments .
in the following sections , we show the features used in our experiments and the results .
over the last few years , several large scale knowledge bases such as freebase , nell , and yago have been developed .
in the last decades , large scale knowledge bases , such as freebase , have been constructed .
the lms are build using the srilm language modelling toolkit with modified kneserney discounting and interpolation .
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .
we used srilm -sri language modeling toolkit to train several character models .
we used the srilm software 4 to build langauge models as well as to calculate cross-entropy based features .
we train the models for 20 epochs using categorical cross-entropy loss and the adam optimization method .
we use the adam optimizer and mini-batch gradient to solve this optimization problem .
in our experiments we use a publicly available implementation of conditional random fields .
we use the mallet implementation of conditional random fields .
for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver .
for the machine learning component of our system we use the l2-regularised logistic regression implementation of the liblinear 3 software library .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
we used the phrase-based smt in moses 5 for the translation experiments .
information extraction ( ie ) is the nlp field of research that is concerned with obtaining structured information from unstructured text .
information extraction ( ie ) is the task of identifying information in texts and converting it into a predefined format .
we use the same metrics as described in wu et al , which is similar to those in .
we use the same evaluation metrics as described in , which is similar to those in .
nenkova et al found that high frequency word entrainment in dialogue is correlated with engagement and task success .
nenkova et al noted that the entrainment score between dialogue partners is higher than the entrainment score between non-partners in dialogue .
we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .
we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit .
dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them .
dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation .
in this work , we make an observation that there exists an efficient model for recognizing overlapping mentions .
in this paper , we propose a new model that is capable of recognizing overlapping mentions .
we also presented information that shows that adding a sequence model of da progressions -an n-gram model of dasresults in no significant increase in performance .
we also presented information that shows that adding a sequence model of da progressions -an n-gram model of das -results in no increase in performance .
from a distributional view , and show that there are two distinct needs for adaptation , corresponding to the different distributions of instances and classification functions in the source and the target domains .
such an analysis reveals that there are two distinct needs for adaptation , corresponding to the different distributions of instances and the different classification functions in the source and the target domains .
for example , dirt aims to discover different representations of the same semantic relation using distributional similarity of dependency paths .
for instance , the dirt system uses the mutual information between the argument pairs for two binary relations to measure the similarity between them , and clusters relations accordingly .
word embeddings are initialized with pretrained glove vectors 1 , and updated during the training .
word embeddings are initialized with 300d glove vectors and are not fine-tuned during training .
one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .
sentiment analysis is a research area in the field of natural language processing .
furthermore , the same effort should be invested for each different language .
furthermore , the same effort should be invested for each different language .
resolution , and the performance of the system is comparable to the best existing systems for pronoun resolution .
the system performance is comparable to the best existing systems for pronoun resolution .
i introduce the task of multiple narrative disentanglement ( mnd ) , in which the aim is to tease these narratives apart .
i refer to the task of identifying these independent threads and untangling them from one another as multiple narrative disentanglement ( mnd ) .
in our advanced model , we employ the word2vec algorithm ( cite-p-13-3-15 ) .
we compare our models to one such method ( msda-dan , ( cite-p-13-3-4 ) ) .
a technique of parser stacking is employed , which enables a data-driven parser to learn from the output of another parser , in addition to gold standard treebank annotations .
a technique dubbed parser stacking enables the data-driven parser to learn , not only from gold standard treebank annotations , but from the output of another parser .
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
we ran mt experiments using the moses phrase-based translation system .
we obtained a phrase table out of this data using the moses toolkit .
in this paper , we propose a method for referring to the real world to improve named entity recognition ( ner ) .
in this paper , we propose a method for enhancing a named entity ( ne ) recognizer referring to the real world .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
on average-length penn treebank sentences , our most detailed estimate reduces the total number of edges processed to less than 3 % of that required by exhaustive parsing .
using these estimates , our parser is capable of finding the viterbi parse of an average-length penn treebank sentence in a few seconds , processing less than 3 % of the edges which would be constructed by an exhaustive parser .
there are multiple studies in classification of flu-related tweets .
multiple studies have been done to analyze flu-related tweets .
we use the sri language modeling toolkit for language modeling .
the trigram language model is implemented in the srilm toolkit .
using a different approach , blitzer et al induces correspondences between feature spaces in different domains , by detecting pivot features .
blitzer et al induced a correspondence between features from a source and target domain based on structural correspondence learning over unlabelled target domain data .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .
we train trigram language models on the training set using the sri language modeling tookit .
experiments on the chinese-english dataset show that agreement-based learning is more robust to noisy data and leads to substantial improvements in phrase alignment and machine translation .
experiments on the chineseenglish dataset show that agreement-based learning significantly improves both alignment and translation performance .
we use 5-grams for all language models implemented using the srilm toolkit .
we use srilm for n-gram language model training and hmm decoding .
semantic parsing is to learn an explicit synchronous grammar .
a challenge for grammar-based semantic parsing is grammar induction from data .
zelenko et al described a recursive kernel based on shallow parse trees to detect personaffiliation and organization-location relations , in which a relation example is the least common subtree containing two entity nodes .
zelenko et al described a kernel between shallow parse trees to extract semantic relations , where a relation instance is transformed into the least common sub-tree connecting the two entity nodes .
he et al attempted to find bursts , periods of elevated occurrence of events as a dynamic phenomenon instead of focusing on arrival rates .
he and parket attempted to find bursts , periods of elevated occurrence of events as a dynamic phenomenon instead of focusing on arrival rates .
as textual features , we use the pretrained google news word embeddings , obtained by training the skip-gram model with negative sampling .
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .
to employ the features described above in an actual classifier , we trained a logistic regression model using the weka toolkit .
we deploy the machine learning toolkit weka for learning a regression model to predict the similarity scores .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
brown et al described a hierarchical word clustering method which maximizes the mutual information of bigrams .
brown et al present a hierarchical word clustering algorithm that can handle a large number of classes and a large vocabulary .
we employ the crf implementation in the wapiti toolkit , using default settings .
to train a crf model , we use the wapiti sequence labelling toolkit .
in this domain can be derived from our speaker model , providing an explanation from first principles for the relation between discourse salience and speakers ’ choices of referring expressions .
these results suggest that this model formalizes underlying principles that account for speakers ’ choices of referring expressions .
cite-p-24-1-12 proposed a joint model for word segmentation , pos tagging and normalization .
cite-p-24-3-6 propose a joint model to process word segmentation and informal word detection .
baroni et al conducted a set of experiments comparing the popular w2v implementation for creating wes to other distributional methods with state-of-the-art results across various tasks .
baroni et al conducted a set of experiments comparing the popular word2vec implementation for creating wes with other wellknown distributional methods across various tasks .