sentence1 (string, lengths 16-446) | sentence2 (string, lengths 14-436) |
---|---|
sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text . | sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) . |
we train the twitter sentiment classifier on the benchmark dataset in semeval 2013 . | we conduct experiments on the latest twitter sentiment classification benchmark dataset in semeval 2013 . |
we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg . | then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score . |
takamatsu et al design a generative model to identify noise patterns . | takamatsu et al directly models the labeling process of ds to find noisy patterns . |
the penn discourse treebank corpus is the best-known resource for obtaining english connectives . | the penn discourse treebank is the largest available discourse-annotated corpus in english . |
this paper proposes a two-stage framework for mining opinion words and opinion targets . | this paper proposes a novel two-stage framework for mining opinion words and opinion targets . |
we substitute our language model and use mert to optimize the bleu score . | we use bleu scores as the performance measure in our evaluation . |
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit . | we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . |
kawahara and uchimoto used a separately trained binary classifier to select sentences as additional training data . | kawahara and uchimoto used a separately trained binary classifier to select reliable sentences as additional training data . |
we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization . | we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training . |
for our logistic regression classifier we use the implementation included in the scikit-learn toolkit 2 . | for the feature-based system we used logistic regression classifier from the scikit-learn library . |
we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool . | we learn our word embeddings by using word2vec 3 on unlabeled review data . |
semantic parsing is the task of converting natural language utterances into formal representations of their meaning . | semantic parsing is the task of mapping natural language sentences to a formal representation of meaning . |
to construct the word vectors we used the continuous bag-of-words , and skip-gram model by . | here , we choose the skip-gram model and continuous-bag-of-words model for comparison with the lbl model . |
in doing so , we revert the multi-category bootstrapping framework back to its originally intended minimally supervised framework , with little performance loss . | in this paper , we aim to push multi-category bootstrapping back into its original minimally-supervised framework , with as little performance loss as possible . |
coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity . | coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . |
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit . | the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data . |
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
cue expansion strategy is proposed to increase the coverage in cue detection . | moreover , a bilingual cue expansion method is proposed to increase the coverage in cue detection . |
the true-caser is trained on all of the available training corpus using moses . | the true-caser is trained on all of the training corpus using moses . |
developing features has been shown to be crucial to advancing the state-of-the-art in dependency parsing . | developing features has been shown crucial to advancing the state-of-the-art in dependency tree parsing . |
following , we assume discourse commitments represent the set of propositions which can necessarily be inferred to be true given a conventional reading of a text . | following , we assume that a discourse commitment represents any of the set of propositions that can necessarily be inferred to be true , given a conventional reading of a text passage . |
works aim to use huge amount of unsegmented data to further improve the performance of an already well-trained supervised model . | the goal is to make use of the in-domain unsegmented data to improve the ultimate performance of word segmentation . |
in the example sentence , this generated the subsequent sentence “ us urges israel plan . ” | in the example sentence , this generated the subsequent sentence “us urges israel plan.” |
we report the mt performance using the original bleu metric . | we evaluate the translation quality using the case-insensitive bleu-4 metric . |
to calculate the similarity between two structured features , we use the convolution tree kernel that is defined by collins and duffy and moschitti . | to calculate the similarity between two structured features , we use the convolution tree kernel that is defined by collins and duffy and moschitti . |
soft clustering approaches are required for the task but reveal quite different attitudes towards predicting ambiguity . | most interestingly , a qualitative analysis zoomed into the assignment behaviour of the soft clustering approaches , and revealed different attitudes towards predicting ambiguity . |
we use negative sampling to approximate softmax in the objective function . | we use skip-gram with negative sampling for obtaining the word embeddings . |
for each morph mention , we discover a list of target candidates math-w-3-1-1-12 from chinese web data for morph mention resolution . | for each morph mention , we discover a list of target candidates math-w-3-1-1-12 from chinese web data for morph mention resolution . |
word entrainment is positively and significantly correlated with task success and proportion of overlaps . | entrainment over classes of common words also strongly correlates with task success and highly engaged and coordinated turn-taking behavior . |
text segmentation is the task of automatically segmenting texts into parts . | text segmentation is the task of determining the positions at which topics change in a stream of text . |
features represent a new state of the art for syntactic dependency parsing for all five languages . | the final results improve the state of the art in dependency parsing for all languages . |
for the classification task , we use pre-trained glove embedding vectors as lexical features . | we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . |
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . | such a model is easily represented using a factored language model , an idea introduced in , and incorporated into the srilm toolkit . |
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news . | we use 300-dimensional word embeddings provided by google , and for greater number of ds , we train word2vec on unlabeled data , see table 1 . |
smor is a german fst-based morphological analyzer which covers inflection , compounding , and prefix as well as suffix derivation . | smor is a finite-state based morphological analyzer covering the productive word formation processes of german , namely inflection , derivation and compounding . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke . | we also use a 4-gram language model trained using srilm with kneser-ney smoothing . |
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news . | for english , we rely on 500-dimensional english skip-gram word embeddings trained on the january 2017 wikipedia dump with bag-of-words contexts . |
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
abstract meaning representation is a semantic representation where the meaning of a sentence is encoded as a rooted , directed and acyclic graph . | abstract meaning representation is a semantic representation that expresses the logical meaning of english sentences with rooted , directed , acyclic graphs . |
we will notate lcfrs with the syntax of simple range concatenation grammars , a formalism that is equivalent to lcfrs . | for convenience we will use the rule notation of simple rcg , which is a syntactic variant of lcfrs , with an arguably more transparent notation . |
our second set of experiments is based on the phrase similarity task of mitchell and lapata . | our experiments are based on the adjective-noun section of the evaluation data set released by mitchell and lapata . |
where the authors argue that the approach used in humor 99 is general enough to be well suitable for a wide range of languages , and can serve as basis for higher-level linguistic operations such as shallow or even full parsing . | the authors conclude the paper by arguing that the approach used in humor 99 is general enough to be well suitable for a wide range of languages , and can serve as basis for higher-level linguistic operations such as shallow parsing . |
the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles . | the penn discourse treebank is the largest available corpus of annotations for discourse relations , covering one million words of the wall street journal . |
named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding . | named entity recognition ( ner ) is the task of finding rigid designators as they appear in free text and classifying them into coarse categories such as person or location ( cite-p-24-4-6 ) . |
in this work , we use the expectation-maximization algorithm . | we estimate the parameters by maximizing p using the expectation maximization algorithm . |
the results evaluated by bleu score are shown in table 2 . | table 4 shows the comparison of the performances on bleu metric . |
a homographic pun is a pun that “ exploits distinct meanings of the same written word ” ( cite-p-7-1-2 ) ( these can be meanings of a polysemantic word or homonyms , including homonymic word forms ) . | a homographic pun is a form of wordplay in which one signifier ( usually a word ) suggests two or more meanings by exploiting polysemy for an intended humorous or rhetorical effect . |
the word vectors were initialized with the 300-dimensional glove embeddings , and were also updated during training . | the word-embeddings were initialized using the glove 300-dimensions pre-trained embeddings and were kept fixed during training . |
we use skipgram model to train the embeddings on review texts for k-means clustering . | we use a cws-oriented model modified from the skip-gram model to derive word embeddings . |
in such work on question answering , question generation models are typically not evaluated for their intrinsic quality , but rather with respect to their utility . | in such work on question answering , question generation models are typically not evaluated for their intrinsic quality , but rather with respect to their utility as an intermediate step in the question answering process . |
a 5-gram language model of the target language was trained using kenlm . | a 5-gram language model on the english side of the training data was trained with the kenlm toolkit . |
we used the scikit-learn implementation of svrs and the skll toolkit . | specifically , we used the python scikit-learn module , which interfaces with the widely-used libsvm . |
sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment . | sentiment classification is the task of detecting whether a textual item ( e.g. , a product review , a blog post , an editorial , etc . ) expresses a positive or a negative opinion in general or about a given entity , e.g. , a product , a person , a political party , or a policy . |
we used cdec as our decoder , and tuned the parameters of the system to optimize bleu on the nist mt06 tuning corpus using the margin infused relaxed algorithm . | we used cdec as our hierarchical phrase-based decoder , and tuned the parameters of the system to optimize bleu on the nist mt06 corpus . |
whereas frequency and co-occurrence have been captured in many previous approaches , we boost multiword candidates t by their grade of distributional similarity with single word terms . | whereas frequency and co-occurrence have been captured in many previous approaches , and korkontzelos for a survey , we boost multiword candidates t by their grade of distributional similarity with single word terms . |
translation quality can be measured in terms of the bleu metric . | the de facto standard metric in machine translation is bleu . |
we use mini-batch update and adagrad to optimize the parameter learning . | we apply online training , where model parameters are optimized by using adagrad . |
we used adam optimizer with its standard parameters . | for optimization , we used adam with default parameters . |
plagiarism is a very significant problem nowadays , specifically in higher education institutions . | plagiarism is a problem of primary concern among publishers , scientists , teachers ( cite-p-21-1-7 ) . |
we evaluated our models using bleu and ter . | for evaluation , we used two toolkits based on bleu . |
in this dataset , it is also possible to explore the task of automatic fact-checking . | this dataset can be used for fact-checking research as well . |
according to the availability of bilingual resources , and we show that it is possible to deal with the problem even when no such resources are accessible . | in this work we present many solutions according to the availability of bilingual resources , and we show that it is possible to deal with the problem even when no such resources are accessible . |
we used the implementation of the scikit-learn 2 module . | we used svm classifier that implements linearsvc from the scikit-learn library . |
the srilm toolkit was used to build the trigram mkn smoothed language model . | the srilm toolkit was used to build the 5-gram language model . |
we develop a cascade model which can jointly learn the latent semantics and latent similarity . | in this paper , we propose a novel cascade model , which can capture both the latent semantics and latent similarity by modeling mooc data . |
several researchers also attempted to compare existing methods and suggested different evaluation schemes , eg kita or evert . | several researchers also attempted to compare existing methods and suggest different evaluation schemes , eg kita and evert . |
we describe an application of the api for automatic extraction of glossaries in a japanese online news service . | in this section , we present a real-world application of the al+ ener api : glossary linking in an online news service . |
1 ‘ speakers ’ and ‘ listeners ’ are interchangeably used with ‘ authors ’ and ‘ readers ’ . | 1 ‘speakers’ and ‘listeners’ are interchangeably used with ‘authors’ and ‘readers’ in this article . |
the translations were evaluated with the widely used bleu and nist scores . | the bleu metric was used to automatically evaluate the quality of the translations . |
unsupervised parsing has been explored for several decades for a recent review ) . | unsupervised parsing has attracted researchers for over a quarter of a century for reviews ) . |
in this paper , we train our linear classifiers using liblinear 4 . | in particular , we use the liblinear svm 1va classifier . |
language models were built using the srilm toolkit 16 . | the language models were trained using srilm toolkit . |
sentiment analysis ( sa ) is a fundamental problem aiming to allow machines to automatically extract subjectivity information from text ( cite-p-16-5-8 ) , whether at the sentence or the document level ( cite-p-16-3-3 ) . | sentiment analysis ( sa ) is the determination of the polarity of a piece of text ( positive , negative , neutral ) . |
it can be applied as a method for doing automated measurement of team performance . | the ability to automatically predict team performance would be of great value for team training systems . |
we report bleu scores computed using sacrebleu . | for the evaluation of the results we use the bleu score . |
trigram language models are implemented using the srilm toolkit . | a tri-gram language model is estimated using the srilm toolkit . |
finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data . | we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . |
it was trained on the webnlg dataset using the moses toolkit . | it is a standard phrasebased smt system built using the moses toolkit . |
we present a method for detecting sentiment polarity in short video clips of a person . | we have presented a novel method for determining sentiment polarity in video clips of people speaking . |
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting . | we use a 5-gram language model with modified kneser-ney smoothing , trained on the english side of set1 , as our baseline lm . |
ravichandran and hovy extract semantic relations for various terms in a question answering system . | ravichandran and hovy proposed automatically learning surface text patterns for answer extraction . |
support vector machines are one class of such model . | one representative example is support vector machines . |
a 4-gram language model was trained on the monolingual data by the srilm toolkit . | a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit . |
in order to address the oov problem , jean et al further extend the model of bahdanau et al with importance sampling so that it can hold a larger vocabulary without increasing training complexity . | jean et al proposed a method based on importance sampling that uses a very large target vocabulary without increasing training complexity . |
preparing an aligned abbreviation corpus , we obtain the optimal combination of the features by using the maximum entropy framework . | when labeled training data is available , we can use the maximum entropy principle to optimize the λ weights . |
most of the works are devoted to phoneme-based transliteration modeling . | most of these works are devoted to phoneme 1 -based transliteration modeling . |
our algorithm for selecting features and weights is based on the search optimization algorithm of , which decides to update feature weights when mistakes are made during search on training examples . | ( for doctor perez , this yields about 600 features ) our training algorithm is based on the search optimization algorithm of , which updates feature weights when mistakes are made during search on training examples . |
we used the svd implementation provided in the scikit-learn toolkit . | we feed our features to a multinomial naive bayes classifier in scikit-learn . |
the srilm toolkit was used to build the trigram mkn smoothed language model . | the language model was trained using srilm toolkit . |
for our smt experiments , we use the moses toolkit . | we train our systems using the moses decoder . |
these include syntactic , semantic and mixed syntactic-semantic classifications . | these include syntactic and semantic classifications , as well as ones which integrate aspects of both . |
because we can obtain multilingual word and title embeddings . | in this paper , we address this problem by using multilingual title and word embeddings . |
text and the selection of keyphrases are governed by the underlying hidden properties of the document . | each document may be marked with multiple keyphrases that express unseen semantic properties . |
organization of ugc in social media is not effective for content browsing and knowledge learning . | thus , both topic models and social tagging are not suitable for structuralizing ugc in social media . |
srilm toolkit is used to build these language models . | a 4-gram language model is trained by the srilm toolkit . |
when visible units are given , hssm has extra connections utilized to formulate the dependency between adjacent softmax units . | in addition , the model contains extra connections between adjacent hidden softmax units to formulate the dependency between latent states . |
goldberg and zhu presented a graphbased semi-supervised learning algorithm for the sentiment analysis task of rating inference . | goldberg and zhu also used in-domain labeled data to approximate sentiment similarity for semi-supervised sentiment classification . |
one exception is , which showed that systems can make the user have the sense of being heard by using gestures , such as nodding and shaking of the head . | one early work is , which showed that virtual agents can give users the sense of being heard using such gestures as nodding and head shaking . |
semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts . | semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) . |
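Several rows above cite training n-gram language models with modified Kneser-Ney smoothing using the SRILM or KenLM toolkits. A minimal sketch of that recipe, assuming KenLM's Python bindings and hypothetical file names (`train.en`, `lm.arpa`); the equivalent SRILM command appears in a comment:

```python
# Build a 4-gram modified Kneser-Ney LM, then query it from Python.
# Training runs once on the command line (file names are hypothetical):
#   KenLM:  lmplz -o 4 < train.en > lm.arpa      (modified Kneser-Ney by default)
#   SRILM:  ngram-count -order 4 -kndiscount -interpolate -text train.en -lm lm.arpa
import kenlm

model = kenlm.Model("lm.arpa")  # load the trained ARPA file
sentence = "we train a language model"
print(model.score(sentence, bos=True, eos=True))  # total log10 probability
print(model.perplexity(sentence))                 # per-word perplexity
```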
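Other rows describe learning word embeddings with word2vec's skip-gram model and negative sampling. A gensim sketch under assumed hyperparameters; the two-sentence corpus is a stand-in for the unlabeled review or news text used in the cited work:

```python
from gensim.models import Word2Vec

# Toy corpus standing in for large unlabeled text.
sentences = [["we", "train", "word", "embeddings"],
             ["skip-gram", "with", "negative", "sampling"]]

model = Word2Vec(
    sentences,
    vector_size=300,  # dimensionality; the rows above use anywhere from 50 to 500
    sg=1,             # 1 = skip-gram, 0 = continuous bag-of-words
    negative=5,       # negative sampling approximates the softmax objective
    window=5,
    min_count=1,
)
vector = model.wv["embeddings"]  # a trained 300-dimensional vector
```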
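Rows mentioning scikit-learn classifiers (logistic regression, linear SVMs, multinomial naive Bayes) all follow the same fit/predict pattern. An illustrative pipeline with toy sentiment data in place of the real features:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data standing in for a real corpus (1 = positive, 0 = negative).
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["a great movie with a terrible plot"]))
```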
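Finally, many rows report BLEU for machine translation evaluation, and one cites sacrebleu specifically. A minimal corpus-level sketch with placeholder hypotheses and references:

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one inner list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```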