sentence1 ( string , lengths 16-446 ) · sentence2 ( string , lengths 14-436 )
we perform mert training to tune the feature weights on the development set .
we tune model weights using minimum error rate training on the wmt 2008 test data .
bleu is smoothed , and it considers only matching up to bigrams because this has higher correlations with human judgments than when higher-ordered n-grams are included .
bleu is smoothed to be more appropriate for sentence-level evaluation , and the bigram versions of bleu and hwcm are reported because they have higher correlations than when longer n-grams are included .
blacoe and lapata compare count and predict representations as input to composition functions .
blacoe and lapata compare different arithmetic functions across multiple representations on a range of compositionality benchmarks .
in section 3 , we describe the three resources we use in our experiments .
in section 3 , we describe the three resources we use in our experiments and how we model them .
word sense induction ( wsi ) is the task of automatically identifying the senses of words in texts , without the need for handcrafted resources or manually annotated data .
word sense induction ( wsi ) is the task of automatically inducing the different senses of a given word , generally in the form of an unsupervised learning task with senses represented as clusters of token instances .
inversion transduction grammar is a formalism for synchronous parsing of bilingual sentence pairs .
inversion transduction grammar is a synchronous grammar for synchronous parsing of source and target language sentences .
we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data .
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities .
we evaluate global translation quality with bleu and meteor .
we evaluate translations with bleu and meteor .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
an n-gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , newscommentary , and the gigaword corpora .
in future work , we will try to collect and annotate data for microblogs in other languages .
in future work , we will try to collect and annotate data for microblogs in other languages to test the robustness of our method .
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .
however , to the best of our knowledge , there is no attempt in the literature to build a resource .
however , to the best of our knowledge , there is no systematic attempt in the literature to build such a resource .
glove is an unsupervised learning algorithm for word embeddings .
glove is an unsupervised algorithm that constructs embeddings from large corpora .
evaluation on the ace data set shows that the ilp based entity-mention model is effective for the coreference resolution task .
our experimental results on the ace data set show that the model is effective for coreference resolution .
in this case , it may be preferable to look for near-duplicate documents .
in this setting , reuse may be mixed with text derived from other sources .
sagae and lavie apply a notion of reparsing to a two stage parser combination chartbased approach .
sagae and lavie proposed a constituent reparsing method for multiple parsers combination .
in li and roth , they used wordnet for english and built a set of class-specific words as semantic features and achieved high precision .
later li and roth used more semantic information sources including named entities , wordnet senses , class-specific related words , and distributional similarity based categories in the question classification task .
given such parallel data , we can easily train an encoder-decoder model that takes a sentence and a target syntactic template as input .
such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax .
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .
we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data .
on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing .
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing .
evaluated on a news headline dataset , our model yielded higher accuracy .
this approach yielded a precision between 71 % and 82 % on the news headline dataset .
for our classifiers , we used the weka implementation of naïve bayes and the svmlight implementation of the svm .
here too , we used the weka implementation of the naïve bayes model and the svmlight implementation of the svm .
experiments show that the models can achieve 0.85 precision at a level of 0.89 recall , and even higher precision .
experimental results show that our system can achieve 0.85 precision at 0.89 recall , excluding exact matches .
we obtained a vocabulary of 320,935 unique words after eliminating words which occur only once , stemming by a part-of-speech tagger , and stop word removal .
we obtained a vocabulary of 183,400 unique words after eliminating words which occur only once , stemming by a part-of-speech tagger , and stop word removal .
we evaluated the system using bleu score on the test set .
we evaluated the translation quality using the bleu-4 metric .
in order to reduce the source vocabulary size , the german text was preprocessed by splitting german compound words with the frequency-based method described in .
for the translation from german into english , german compound words were split using the frequency-based method described in .
a sentiment lexicon is a list of words and phrases , such as “ excellent ” , “ awful ” and “ not bad ” , each of which is assigned a positive or negative score reflecting its sentiment polarity and strength ( cite-p-18-3-8 ) .
a sentiment lexicon is a list of sentiment expressions , which are used to indicate sentiment polarity ( e.g. , positive or negative ) .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .
training is done using stochastic gradient descent over mini-batches with the adadelta update rule .
we train the model through stochastic gradient descent with the adadelta update rule .
in this task , we use the 300-dimensional 840b glove word embeddings .
for representing words , we used 100 dimensional pre-trained glove embeddings .
we used the google news pretrained word2vec word embeddings for our model .
we use the word2vec skip-gram model to train our word embeddings .
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance .
semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding .
with regard to inputs , we use 50-d glove word embeddings pretrained on wikipedia and gigaword and 5-d position embeddings .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .
this method does not use any parallel corpora for learning .
the method is a naive-bayes classifier which learns from noisy data .
we trained word vectors with the two architectures included in the word2vec software .
we also used word2vec to generate dense word vectors for all word types in our learning corpus .
the first is a reimplementation of the pronoun prediction neural network proposed by hardmeier et al .
the first component of our model is a modified reimplementation of the pronoun prediction network introduced by hardmeier et al .
( i ) we formulate the inference procedures in the training algorithm as integer linear programming ( ilp ) problems , and ( ii ) we introduce a soft constraint in the ilp objective to model noisy-or in training .
we formulate the inference procedures in training as integer linear programming ( ilp ) problems and implement the relaxation to the “ at least one ” heuristic via a soft constraint in this formulation .
we describe a system that participated in semeval-2013 task 2 : sentiment analysis in twitter .
our system participated in semeval-2013 task 2 : sentiment analysis in twitter ( cite-p-12-3-1 ) .
details about svm and krr can be found in .
more details about svm and krr can be found in .
to gain a more accurate basis for the pattern search , the oc uses the stanford parser to derive grammatical structures for each sentence .
the oc makes use of the stanford parser to derive grammatical structures for each sentence , which then form a more accurate basis for the later pattern search .
the mert was used to tune the feature weights on the development set and the translation performance was evaluated on the test set with the tuned weights .
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
using a large set of color-name pairs obtained from an online color design forum , we evaluate our model on a “ color turing test ” and find that , given a name , the colors predicted by our model are preferred by annotators to color names created by humans .
using a large set of color-name pairs obtained from a color design forum , we evaluate our model on a “ color turing test ” and find that , given a name , the colors predicted by our model are preferred by annotators to color names created by humans .
for this reason , we used glove vectors to extract the vector representation of words .
for input representation , we used glove word embeddings .
we presented a complete , correct , terminating extension of earley 's algorithm that uses restriction .
in section 4 , we develop a correct , complete and terminating extension of earley 's algorithm for the patr-ii formalism using the restriction notion .
the semantic roles in the example are labeled in the style of propbank , a broad-coverage human-annotated corpus of semantic roles and their syntactic realizations .
the semantic roles in the examples are labeled in the style of propbank , a broad-coverage human-annotated corpus of semantic roles and their syntactic realizations .
we use the skll and scikit-learn toolkits .
for data preparation and processing we use scikit-learn .
our sd is the penn treebank of wall street journal text .
we used the wall street journal articles article boundary .
sentiment classification is the task of identifying the sentiment polarity of a given text .
sentiment classification is the task of labeling a review document according to the polarity of its prevailing opinion ( favorable or unfavorable ) .
also , li et al incorporate textual topic and user-word factors with supervised topic modeling .
meanwhile , li et al present a topic model incorporating reviewer and item information for sentiment analysis .
for language modeling , we use kenlm to train 6-gram character-level language models on opensubs filtered and huawei monotr .
after standard preprocessing of the data , we train a 3-gram language model using kenlm .
as for ej translation , we use the stanford parser to obtain english abstraction trees .
we use the stanford dependency parser to extract nouns and their grammatical roles .
then we review the path ranking algorithm introduced by lao and cohen .
we briefly review the path ranking algorithm , described in more detail by lao and cohen .
event schema is a high-level representation of a bunch of similar events .
an event schema is a structured representation of an event ; it defines a set of atomic predicates or facts and a set of role slots that correspond to the typical entities that participate in the event .
our baseline is a phrase-based mt system trained using the moses toolkit .
both systems are phrase-based smt models , trained using the moses toolkit .
this has the dual effect of factoring computationally costly null heads out from parsing ( but not from the resulting parse trees ) and rendering mgs fully compatible for the first time with existing supertagging techniques .
the second is a method for factoring computationally costly null heads out from bottom-up mg parsing ; this has the additional benefit of rendering the formalism fully compatible for the first time with highly efficient markovian supertaggers .
we assessed the statistical significance of f-measure improvements over the baseline , using the approximate randomization test .
statistical significance of system differences in terms of f1 was assessed by an approximate randomization test .
fasttext is a library for efficient text classification and representation learning .
fasttext is a simple and effective method for classifying texts based on n-gram embeddings .
this is the first attempt at infusing general world knowledge for task specific training of deep learning .
to the best of our knowledge this is the first attempt to incorporate world knowledge from a knowledge base for learning models .
when used as the underlying input representation , word vectors have been shown to boost the performance in nlp tasks .
latent feature vectors have recently been exploited successfully for a wide range of nlp tasks .
later , their work was extended to take into account syntactic structure and grammars .
later , their work was extended to take into account the syntactic relation between words and grammars .
some researchers have applied the rule of transliteration to automatically translate proper names .
some researchers have found that transliteration is quite useful in proper name translation .
in addition , machine translation systems can be improved by training on sentences extracted from parallel or comparable documents mined from the web .
for instance , machine translation systems can benefit from training on sentences extracted from parallel or comparable documents retrieved from the web .
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is a well known clustering task in natural language processing .
we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package .
within this subpart of our ensemble model , we used a svm model from the scikit-learn library .
a language model is a probability distribution that captures the statistical regularities of natural language use .
traditionally , a language model is a probabilistic model which assigns a probability value to a sentence or a sequence of words .
similarity is a kind of association implying the presence of characteristics in common .
similarity is a fundamental concept in theories of knowledge and behavior .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
sentiment analysis is a research area in the field of natural language processing .
in the translation tasks , we used the moses phrase-based smt systems .
we used moses with the default configuration for phrase-based translation .
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .
we use a random forest classifier , as implemented in scikit-learn .
in this paper , we took a focused form of humorous tercets in hindi , dur se dekha , and performed an analysis of its structure and humour .
in this paper , we look at dur se dekha jokes , a restricted domain of humorous three liner poetry in hindi .
the annotation scheme is based on an evolution of stanford dependencies and google universal part-of-speech tags .
the ud scheme is built on the google universal part-of-speech tagset , the interset interlingua of morphosyntactic features , and stanford dependencies .
the present paper is a report of these investigations , their results and conclusions drawn therefrom .
the present paper is a contribution towards this goal : it presents the results of a large-scale evaluation of window-based dsms on a wide variety of semantic tasks .
the smt weighting parameters were tuned by mert using the development data .
the same data was used for tuning the systems with mert .
the automobile , kitchen and software reviews are taken from blitzer et al .
the automobile and software reviews are taken from blitzer et al .
openccg uses a hybrid symbolic-statistical chart realizer which takes logical forms as input and produces sentences by using ccg combinators to combine signs .
openccg uses a hybrid symbolic-statistical chart realizer which takes logical forms as input and produces sentences by using ccg combinators to combine signs .
despite its simplicity , our directional similarity approach provides a robust model for relational similarity .
empirically , mixing heterogeneous models tends to make the final relational similarity measure more robust .
the basic idea of this approach is to project the word indices onto a continuous space and to use a probability estimator operating on this space .
the basic idea of the neural network lm is to project the word indices onto a continuous space and to use a probability estimator operating on this space .
five-gram language models are trained using kenlm .
word-based lms were trained using the kenlm package .
we use svm-light-tk to train our reranking models , which enables the use of tree kernels in svm-light .
we used svm-light-tk , which enables the use of the partial tree kernel .
one of the main stumbling blocks for spoken natural language understanding systems is the lack of reliability of automatic speech recognizers .
one of the main stumbling blocks for spoken dialogue systems is the lack of reliability of automatic speech recognizers .
sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text .
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .
the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing .
we choose modified kneser ney as the smoothing algorithm when learning the ngram model .
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
we use the stanford nlp pos tagger to generate the tagged text .
we use the stanford log-linear part-of-speech tagger to produce pos tags for the english side .
neural models can be categorized into two classes : recursive models and convolutional neural networks ( cnn ) models .
most methods fall into three types : unordered models , sequence models , and convolutional neural networks models .
thus , optimizing this objective remains straightforward with the expectation-maximization algorithm .
hence we use the expectation maximization algorithm for parameter learning .
using these as training examples , we formulate the learning problem as a structured prediction problem and derive a maximum-margin algorithm .
by taking a structured prediction approach , we provide a large-margin method that directly optimizes a convex relaxation of the desired performance measure .
we solve this problem by adding shortcut connections between different layers inspired by residual networks .
we further add skip connections between the lstm layers to the softmax layers , since they are proved effective for training neural networks .
throughout this work , we use mstperl , an unlabelled first-order non-projective single-best implementation of the mstparser of mcdonald et al , trained using 3 iterations of mira .
throughout this work , we use mstperl , an implementation of the mstparser of mcdonald et al , with first-order features and non-projective parsing .
we used the weka implementation of svm with 10-fold cross-validation to estimate the accuracy of the classifier .
we used the weka implementation of naïve bayes for this baseline nb system .
here , we propose semaxis , a simple yet powerful framework to characterize word semantics .
in this work , we propose semaxis , a lightweight framework to characterize domain-specific word semantics beyond sentiment .
abstract meaning representation is a sembanking language that captures whole sentence meanings in a rooted , directed , labeled , and acyclic graph structure .
abstract meaning representation is a semantic formalism which represents sentence meaning in a form of a rooted directed acyclic graph .
given an event list in advance , we are interested in utilizing the emotion information in microblog messages for real-world event detection .
we evaluate our approach on large-scale microblog data sets by using a real-world event list for each community .
a 5-gram lm was trained using the srilm toolkit , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights .
a 5-gram lm was trained using the srilm toolkit , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights .
topic models such as latent dirichlet allocation have emerged as a powerful tool to analyze document collections in an unsupervised fashion .
topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections .
key ciphers also use a secret substitution function .
these ciphers use a substitution table as the secret key .
sentiment analysis is a research area concerned with the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities .
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
agents in the multi-agent decentralized pomdp reach implicature-rich interpretations simply as a by-product of the way they reason about each other to maximize joint utility .
we show that agents in the dec-pomdp reach implicature-rich interpretations simply as a byproduct of the way they reason about each other to maximize joint utility .
sennrich et al also created synthetic parallel data by translating target-language monolingual text into the source language .
sennrich et al proposed a method using synthetic parallel texts , in which target monolingual corpora are translated back into the source language .