sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text .
sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .
we used srilm ( the sri language modeling toolkit ) to train several character models .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .
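Several pairs in this section describe training n-gram language models with SRILM. As a minimal sketch (not the papers' exact setup), the following invokes SRILM's ngram-count from Python; the corpus and output file names are hypothetical, and the SRILM binaries are assumed to be on PATH. Unigram caching, mentioned above, is a query-time option of the ngram tool rather than a training flag.

```python
import subprocess

# Hypothetical paths; assumes the SRILM binaries are installed and on PATH.
subprocess.run([
    "ngram-count",
    "-text", "train.txt",       # tokenized training corpus, one sentence per line
    "-order", "4",              # 4-gram model
    "-kndiscount",              # Kneser-Ney discounting
    "-interpolate",             # interpolate lower-order estimates
    "-lm", "train.4gram.lm",    # output model in ARPA format
], check=True)
```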
relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
table 2 shows the blind test results using bleu-4 , meteor and ter .
table 2 gives the results measured by case-insensitive bleu-4 .
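BLEU-4 results like these are commonly computed with the sacrebleu library. A minimal sketch on toy data; lowercase=True gives the case-insensitive variant mentioned in the second sentence.

```python
import sacrebleu

# Toy system outputs and a single set of references (parallel lists).
hypotheses = ["the cat sat on the mat .", "he read the book ."]
references = [["the cat is on the mat .", "he reads the book ."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references, lowercase=True)
print(f"case-insensitive BLEU-4 = {bleu.score:.2f}")
```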
to overcome this problem , shen et al proposed a dependency language model to exploit long-distance word relations for smt .
typically , shen et al propose a string-to-dependency model , which integrates the target-side well-formed dependency structure into translation rules .
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing .
semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis .
we train a word2vec cbow model on 517,400 raw emails from the enron email dataset to obtain the word embeddings .
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news .
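Both pairs rely on word2vec embeddings. A minimal gensim sketch: training a small CBOW model on a toy corpus, with a commented line showing how the pre-trained Google News vectors are typically loaded (the file name is the one distributed by Google, assumed to be available locally).

```python
from gensim.models import Word2Vec, KeyedVectors

# Toy tokenized corpus; the papers above use the Enron emails instead.
sentences = [["please", "find", "attached", "the", "report"],
             ["meeting", "moved", "to", "friday"]]
model = Word2Vec(sentences, vector_size=300, window=5, sg=0, min_count=1)  # sg=0 -> CBOW

# Pre-trained Google News vectors (roughly 100B training tokens):
# vectors = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
print(model.wv["report"].shape)  # (300,)
```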
the model weights are automatically tuned using minimum error rate training .
tuning is performed to maximize bleu score using minimum error rate training .
finally , the ape system was tuned on the development set , optimizing ter with minimum error rate training .
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
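Minimum error rate training as described here is usually run through Moses' mert-moses.pl script. A hedged sketch of the invocation from Python; every path below is hypothetical and assumes a standard Moses checkout with a trained moses.ini.

```python
import subprocess

# Hypothetical paths; mert-moses.pl tunes feature weights to maximize
# BLEU on the development set (dev.src / dev.ref).
subprocess.run([
    "perl", "mosesdecoder/scripts/training/mert-moses.pl",
    "dev.src", "dev.ref",                 # development source and reference
    "mosesdecoder/bin/moses",             # decoder binary
    "model/moses.ini",                    # system configuration to tune
    "--mertdir", "mosesdecoder/bin",      # location of the mert binaries
], check=True)
```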
however , they do not include a mechanism to also take semantics into account .
furthermore , they do not take semantic reasoning into account .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump .
the present paper is a contribution towards this goal : it presents the results of a large-scale evaluation of window-based dsms on a wide variety of semantic tasks .
the present paper is the first to use a reranking parser and the first to address the adaptation scenario for this problem .
our 5-gram language model was trained by srilm toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
in general , we obtain the optimized parameters through minimum error rate training on the development set .
we use minimal error rate training to maximize bleu on the complete development data .
in this paper , we extend chain-structured lstm to a directed acyclic graph ( dag ) structure , with the aim to provide the popular chain lstm with the capability of considering both compositionality and non-compositionality .
in this paper , we extend the popular chain-structured lstm to directed acyclic graph ( dag ) structures , with the aim to endow conventional lstm with the capability of considering compositionality and non-compositionality together .
we create a manually-labeled dataset of dialogue from the tv series ' friends ' .
for our experiments , we create a manually labeled dataset of dialogues from the tv series ' friends ' .
learning based approaches are proposed to solve the math word problems .
we have proposed an rnn-based seq2seq model to automatically solve math word problems .
feature weights were trained with minimum error-rate training on the news-test2008 development set using the dp beam search decoder and the mert implementation of the moses toolkit .
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit .
we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit .
in this paper , we show that better feature representations serve the above purpose and that structure .
in this paper , we propose discriminative reranking of concept annotation to jointly exploit generative and discriminative models .
we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .
we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset .
examples of topic models include plsi and lda .
generative topic models widely used for ir include plsa and lda .
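As an illustration of the LDA topic models referred to throughout this section, here is a minimal gensim sketch on a toy document collection (real experiments use far larger corpora).

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy tokenized documents.
docs = [["language", "model", "training"],
        ["topic", "model", "inference"],
        ["translation", "model", "decoding"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words counts

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10)
print(lda.print_topics())  # per-topic word distributions
```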
our baseline system is phrase-based moses with feature weights trained using mert .
our phrase-based mt system is trained by moses with standard parameter settings .
our mt decoder is a proprietary engine similar to moses .
our machine translation system is a phrase-based system using the moses toolkit .
work on parsing has focused on solving these problems using powerful optimization techniques .
a lot of work has gone into developing powerful optimization methods for solving these combinatorial problems .
string-based automatic evaluation metrics such as bleu have led directly to quality improvements in machine translation .
current metrics to automatically evaluate machine translations , such as the popular bleu , are heavily based on string matching .
we propose the first embedding-based fully joint parsing model .
we propose the joint parsing models by the feed-forward and bi-lstm neural networks .
with the connective donc , causality is imposed by the connective , but in its turn it brings new constraints .
with the connective donc , causality is imposed by the connective , but in its turn it brings new constraints ( § 3.2 ) .
we use ranking svms to learn a ranking function from preference constraints .
in the supervised ranking aggregation method , we apply ranking svm .
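The ranking SVMs mentioned here learn from preference constraints. The original work used SVMlight's ranking mode; as a stand-in, the sketch below applies the standard pairwise transform (classifying difference vectors) with scikit-learn's LinearSVC on synthetic data.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic preference constraints: item a should outrank item b.
rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))

# Pairwise transform: each constraint becomes a binary example
# on the difference of the two feature vectors.
X_diff = np.vstack([X_a - X_b, X_b - X_a])
y = np.array([1] * 100 + [-1] * 100)

ranker = LinearSVC().fit(X_diff, y)
scores = X_a @ ranker.coef_.ravel()  # higher score = ranked higher
```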
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm .
pantel and pennacchiotti proposed espresso , a relation extraction method based on the co-training bootstrapping algorithm with entities and attributes .
pantel and pennacchiotti developed espresso , a weakly-supervised system that exploits patterns in large-scale web data to distinguish between five noun-noun relations .
we tag the source language with the stanford pos tagger .
we use the stanford pos-tagger and named entity recognizer .
the model presented in this paper can also be applied successfully to other relational reasoning tasks .
in this paper we have presented a combined model of lexical and relational similarity for relational reasoning tasks .
the information we learn can not be equally derived from a large dataset of annotated microposts .
we show that the information we learn can not be equally derived from a large dataset with labeled microposts .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
for the n-gram lm , we use srilm toolkits to train a 4-gram lm on the xinhua portion of the gigaword corpus .
we used an svm classifier , specifically the linearsvc implementation from the scikit-learn library .
we used the logistic regression implemented in the scikit-learn library with the default settings .
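Both classifiers above come from scikit-learn. A minimal sketch with toy text data, pairing a tf-idf representation with LinearSVC and LogisticRegression (default settings, as in the second sentence).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled texts; real systems substitute their own features.
texts = ["great movie", "terrible plot", "loved it", "boring film"]
labels = [1, 0, 1, 0]

svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
logreg = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
print(svm.predict(["what a great film"]), logreg.predict(["boring plot"]))
```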
as a classifier , we employ support vector machines as implemented in svm light .
we employ support vector machines to perform the classification .
lexical functional grammar is a member of the family of constraint-based grammars .
lexical functional grammar is a constraint-based theory of grammar .
dependency annotation for hindi is based on the paninian framework for building the treebank .
the dependency grammar formalism used for hindi is the computational paninian framework .
kalchbrenner et al introduced a dynamic k-max pooling to handle variable length sequences .
kalchbrenner et al proposed to extend the max-over-time pooling of cnns to k-max pooling for sentence modeling .
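k-max pooling, unlike max-over-time pooling, keeps the k largest activations per feature map in their original order, which lets variable-length sentences map to fixed-size representations. A minimal PyTorch sketch, not Kalchbrenner et al.'s implementation:

```python
import torch

def k_max_pooling(x: torch.Tensor, k: int, dim: int) -> torch.Tensor:
    """Keep the k largest activations along `dim`, preserving their order."""
    index = x.topk(k, dim=dim).indices.sort(dim=dim).values
    return x.gather(dim, index)

# (batch, channels, time): variable-length input reduced to k=3 per channel.
x = torch.randn(2, 4, 10)
print(k_max_pooling(x, k=3, dim=2).shape)  # torch.Size([2, 4, 3])
```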
to encode the original sentences we used word2vec embeddings pre-trained on google news .
for all three classifiers , we used the word2vec 300d pre-trained embeddings as features .
unidir refers to the regular rnns presented in section 2.1.1 , and bidir refers to bidirectional rnns introduced in ( cite-p-14-5-4 ) .
unidir refers to the regular rnns presented in section 2.1.1 , and bidir refers to bidirectional rnns introduced in ( cite-p-14-5-4 ) .
these results were corroborated by lembersky et al , 2012a ; lembersky et al , 2013 , who showed that translation models can be adapted to translationese , thereby improving the quality of smt even further .
these results were corroborated by lembersky et al , 2012a ; lembersky et al , 2013 , who further demonstrated that translation models can be adapted to translationese , thereby improving the quality of smt even further .
an argument model finds trees that are linguistically more plausible .
as a result , an argument model is needed to identify linguistically plausible spanning trees .
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .
sentiment analysis is a growing research field , especially on web social networks .
if the anaphor is a pronoun but no referent is found in the cache , it is then necessary to search operating memory .
the anaphor is a pronoun and the referent is in the cache ( in focus ) .
sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .
sentiment analysis is a research area in the field of natural language processing .
we built a global reranking parser model using multiple decoders from mstparser .
we used the mstparser as the basic dependency parsing model .
twitter is a microblogging site where people express themselves and react to content in real-time .
twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events .
we rely on conditional random fields for predicting one label per reference .
we use the mallet implementation of conditional random fields .
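Mallet is a Java toolkit; as a Python stand-in for the same sequence-labelling setup, here is a minimal CRF sketch with the sklearn-crfsuite library on toy token-level features.

```python
import sklearn_crfsuite

# Toy token features and per-token labels; real models use richer templates.
X_train = [[{"word": "john"}, {"word": "lives"}, {"word": "here"}]]
y_train = [["B-PER", "O", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # one label per token
```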
the word-based japanese segmenters are described in the previous paper .
the word segmentation policy for these languages is described in the previous paper .
each task involves a separate schema and database of entities .
each task is based on a database schema which defines the domain of interest .
we train word embeddings by predicting each of the words between noun pairs using lexical relation-specific features on a large unlabeled corpus .
our word embeddings are trained by predicting words between noun pairs using lexical relation-specific features on a large unlabeled corpus .
lda is a topic model that generates topics based on word frequency from a set of documents .
lda is a generative model that learns a set of latent topics for a document collection .
in this study , we focus on improving the confidence measurement to maintain accuracy .
in this study we discuss real-world applications of confidence scoring in a customer service scenario .
we propose a neural architecture which learns a distributional semantic representation that leverages both document and sentence level information .
we propose a neural architecture which learns a distributional semantic representation that leverages a greater amount of semantic context – both document and sentence level information – than prior work .
in related work on modeling arabic case and syntax , habash et al compared rule-based and machine learning approaches to capture the complexity of arabic case assignment and agreement .
in related work on modeling arabic syntax and morphology , habash et al demonstrated that given good syntactic representations , case prediction can be done with a high degree of accuracy .
both online and offline , the need for automatic document summarization that can be implemented in practical scenarios is increasing .
the need for automatic document summarization that can be used for practical applications is increasing rapidly .
this combinatorial optimization can be solved in polynomial time by modifying the hungarian assignment algorithm .
this maximum weighted bipartite matching problem can be solved in o ( n^3 ) time using the kuhn-munkres algorithm .
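The Kuhn-Munkres (Hungarian) algorithm is available in SciPy as linear_sum_assignment; passing maximize=True solves the maximum weighted bipartite matching problem directly. A minimal sketch on a toy weight matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# weights[i, j] = benefit of matching left item i to right item j.
weights = np.array([[4.0, 1.0, 3.0],
                    [2.0, 0.0, 5.0],
                    [3.0, 2.0, 2.0]])

rows, cols = linear_sum_assignment(weights, maximize=True)
print(list(zip(rows, cols)), weights[rows, cols].sum())  # optimal matching
```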
in this paper , we investigate ways to transfer information from one ( source ) language to another ( target ) language in a single semantic parsing framework .
in this paper , we investigate the question of transferring a semantic parser from a source language ( e.g . english ) to a target language ( e.g . german ) .
the first-stage model we use is a first-order dependency model , with labeled dependencies , as described in .
our baseline discriminative model uses first-and second-order features provided in .
in this paper , we study the task of response selection .
in this paper , we focused on the task of response selection .
the annotation was performed manually using the brat annotation tool .
the annotation was performed using the brat tool .
we assume familiarity with theories of unification grammars , as formulated by , e.g. , carpenter and penn .
we assume familiarity with theories of unification grammar , as formulated by , for example , carpenter and penn .
we introduced a novel , more difficult task combining hypernym detection and directionality , and showed that our methods outperform a frequency baseline .
we also introduce a novel task that combines hypernym detection and directionality , significantly outperforming a competitive frequency-based baseline .
in this paper we presented a technique for extracting order constraints among plan elements .
in this paper , we present a method for learning the basic patterns contained within a plan and the ordering among them .
( iyyer et al , 2014 ) addresses political ideology detection using recursive neural networks .
iyyer et al applied a recursive neural network framework to detect political positions .
knowledge bases like freebase , dbpedia , and nell are extremely useful resources for many nlp tasks .
knowledge bases such as freebase and yago play a pivotal role in many nlp related applications .
random indexing is a method for building a compressed wsm with a fixed dimensionality , done in an incremental fashion .
random indexing is an approach which incrementally builds word vectors in a dimensionally-reduced space .
we use long short-term memory networks to build another semantics-based sentence representation .
nevertheless , we can apply the long short-term memory structure to source and target word embeddings .
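A common way to realize the LSTM-based sentence representation described here is to take the final hidden state of an LSTM run over the word embeddings. A minimal PyTorch sketch with illustrative dimensions (not the papers' exact architectures):

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=10_000, embedding_dim=100)
lstm = nn.LSTM(input_size=100, hidden_size=128, batch_first=True)

token_ids = torch.randint(0, 10_000, (32, 20))  # (batch, seq_len)
_, (h_n, _) = lstm(embedding(token_ids))
sentence_repr = h_n[-1]                         # (batch, 128) sentence vector
print(sentence_repr.shape)
```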
as social media messages are typically ambiguous , we argue that effective concept normalisation should deal with this ambiguity .
however , we argue that effective concept normalisation requires a system to take into account the semantics of social media messages and medical concepts .
the model weights are automatically tuned using minimum error rate training .
the minimum error rate training was used to tune the feature weights .
we use a pointer-generator network , which is a combination of a seq2seq model with attention and a pointer network .
we use opennmt , which is an implementation of the popular nmt approach that uses an attentional encoder-decoder network .
we can cite lexical-functional grammar , head-driven phrase structure grammar and probabilistic context-free grammars .
for probabilistic parsing , we can cite lfg , head-driven phrase structure grammar and probabilistic context-free grammars .
lda is a topic model that generates topics based on word frequency from a set of documents .
lda is a probabilistic generative model that can be used to uncover the underlying semantic structure of a document collection .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing .
circles denote variable nodes , and squares denote factor nodes .
the circles denote fixations , and the lines are saccades .
more concretely , a context-free grammar can be read off from discontinuous trees that have been transformed to context-free trees by the procedure introduced by boyd .
concretely , a context-free grammar can be read off from discontinuous trees that have been transformed to context-free trees by the procedure introduced by boyd .
experiments on english-to-german tasks show that the proposed method can significantly accelerate the nmt training and improve the nmt performance .
the empirical results illustrated that the proposed method can significantly accelerate the nmt training and improve the nmt performance .
in this study , we experimentally investigated the impact of contextual information selection , by extracting three kinds of contextual information : dependency , sentence co-occurrence , and proximity .
this study has experimentally investigated the impact of contextual information selection , by extracting three kinds of word relationships from corpora : dependency , sentence co-occurrence , and proximity .
zhao et al enrich this approach by adding multiple resources and further extend the method by generating different paraphrases for different applications .
zhao et al further increase the utility of this combination approach by incorporating application specific constraints on the pivoted paraphrases .
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .
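A classical, if simple, baseline for the WSD task defined here is the Lesk algorithm, which picks the WordNet sense whose gloss best overlaps the context. NLTK ships an implementation (this is only a baseline, not the systems cited above, and it needs NLTK's wordnet data downloaded).

```python
from nltk.wsd import lesk  # requires nltk.download('wordnet') beforehand

context = "i went to the bank to deposit my money".split()
sense = lesk(context, "bank")           # returns an nltk WordNet Synset
print(sense, ":", sense.definition())
```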
in the future , i need to evaluate the quality of the resulting scfs by manual analysis .
we definitely need some method to control the quality of the acquired scfs .
since sarcasm is a refined and indirect form of speech , its interpretation may be challenging for certain populations .
sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment .
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
models were built and interpolated using srilm with modified kneser-ney smoothing and the default pruning settings .
we use an nmt-small model from the opennmt framework for the neural translation .
we obtain the pre-tokenized dataset from the open-nmt project .
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .
we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets .
senseclusters is a freely-available open-source system that served as the university of minnesota , duluth entry in the senseval-4 sense induction task .
senseclusters is a freely available system that identifies similar contexts in text .
pcfg parsing features were generated on the output of the berkeley parser , trained over an english and a spanish treebank .
pcfg parsing features were generated on the output of the berkeley parser , with the default grammars based on an english and a german treebank .
the corpus has attracted people both inside and outside the nlp community .
it has attracted the attention of people both inside and outside the nlp community .
phrase-based models have been widely used in practical machine translation systems due to their effectiveness , simplicity , and applicability .
besides phrase-based machine translation systems , syntax-based systems have become widely used because of their ability to handle non-local reordering .
for all submissions , we used the phrase-based variant of the moses decoder .
we used moses with the default configuration for phrase-based translation .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
we represent terms using pre-trained glove wikipedia 6b word embeddings .
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .
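GloVe vectors such as those referenced throughout this section ship as plain text, one word and its vector per line. A minimal loader sketch; the file name matches the Stanford release but the local path is assumed.

```python
import numpy as np

def load_glove(path: str) -> dict[str, np.ndarray]:
    """Read GloVe's text format: token followed by its vector components."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

glove = load_glove("glove.840B.300d.txt")  # 840B-token Common Crawl release
print(glove["the"].shape)  # (300,)
```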
sentence completion is a challenging semantic modelling problem .
sentence completion is a challenging semantic modeling task in which models must choose the most appropriate word from a given set to complete a sentence .
the srilm toolkit was used to build the 5-gram language model .
the model was built using the srilm toolkit with backoff and good-turing smoothing .
we use pre-trained 50-dimensional word embeddings vector from glove .
our word embeddings are initialized with 100-dimensional glove word embeddings .
in this paper , we have demonstrated the viability of a regression approach to learning .
this paper further analyzes aspects of learning that impact performance .
chen et al used long short-term memory to capture long-term dependencies .
chen et al used lstm to capture long distance dependencies .
the task of semantic textual similarity measures the degree of semantic equivalence between two sentences .
the semantic textual similarity task examines semantic similarity at a sentence-level .