sentence1 : string ( lengths 16 to 446 )
sentence2 : string ( lengths 14 to 436 )
socher et al assign a vector and a matrix to each word for the purpose of semantic composition , and build recursive neural networks along constituency trees .
socher et al build rnn on constituency trees of sentences , and apply the model to relation recognition task .
in this paper , we learn the semantic contribution of characters to a word by exploiting the similarity between a word and its component characters .
in this paper , we exploit the internal structure in chinese words by learning the semantic contribution of internal characters to the word .
sentence compression is the task of compressing long , verbose sentences .
sentence compression can be seen as sentence-level summarization .
event extraction is a particularly challenging information extraction task , which intends to identify and classify event triggers and arguments from raw text .
thus , event extraction is a difficult task and requires substantial training data .
we use the average glove embedding as the sentence embedding .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
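The averaging scheme described in this pair can be sketched as follows. This is a minimal illustration, not the authors' code: the tiny 3-dimensional `glove` dict stands in for real 300-dimensional pre-trained GloVe vectors loaded from a `glove.*.txt` file.

```python
# Sketch: a sentence embedding as the average of per-word GloVe vectors.
# The toy 3-d `glove` dict below is a stand-in for real pre-trained vectors.

def sentence_embedding(tokens, glove, dim=3):
    """Average the vectors of in-vocabulary tokens; zeros if none match."""
    vecs = [glove[t] for t in tokens if t in glove]
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

glove = {
    "the": [0.1, 0.2, 0.3],
    "cat": [0.4, 0.0, -0.2],
    "sat": [-0.1, 0.6, 0.1],
}
emb = sentence_embedding(["the", "cat", "sat"], glove)
```

Out-of-vocabulary tokens are simply skipped here; real systems sometimes map them to a learned unknown-word vector instead.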
we then extract humor anchors in sentences via a simple and effective method .
furthermore , we develop a simple and effective method to extract anchors that enable humor in a sentence .
we represent each word by a vector with length 300 .
the dimension of glove word vectors is set as 300 .
system incorporating a phrase-based error model significantly outperforms its baseline systems .
results show that the system using the phrase-based error model significantly outperforms its baseline systems .
dhingra et al proposed an end-to-end differentiable kb-infobot for efficient information access .
li et al and dhingra et al also proposed end-to-end task-oriented dialog models that can be trained with hybrid supervised learning and rl .
we propose a joint model for answer sentence ranking and answer extraction .
we present a joint model for the important qa tasks of answer sentence ranking and answer extraction .
we use the stanford named entity recognizer to extract named entities from the texts .
we use the stanford named entity recognizer for this purpose .
semantic role labeling ( srl ) is the process of producing such a markup .
semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence .
in fact , the rule-based system of raghunathan et al exhibited the top score in the recent conll evaluation .
this capability is very desirable as shown by the success of the rule-based deterministic approach of raghunathan et al in the conll shared task 2011 .
word alignment is the task of identifying word correspondences between parallel sentence pairs .
word alignment is a key component in most statistical machine translation systems .
one solution is to consider only the normal-form derivation , which is the route taken in hockenmaier and steedman .
we use the baseline model of hockenmaier and steedman , which is a simple generative model that is equivalent to an unlexicalized pcfg .
we participated only in the task 2a in which the gold standard disorder mentions were given .
we participated only in the disorder attribute detection task 2a .
results show substantial improvements in spearman correlation scores over the baseline models provided by task 1 organizers , ranging from 0.03 to 0.23 .
the performance of the system for all subtasks in both languages shows substantial improvements in spearman correlation scores over the baseline models provided by task 1 organizers , ranging from 0.03 to 0.23 .
the word embeddings are initialized with 100-dimensional vectors pre-trained by the cbow model .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
on several data conditions , we show that our method outperforms the baseline and results in up to 8.5 % improvement .
on several data conditions , we show that our method outperforms the baseline and results in up to 8.5 % improvement in the f1-score .
we think the natural language and speech processing technology will be useful for the efficient production of tv programs with closed captions .
we propose that natural language and speech processing techniques should be used for efficient closed caption production of tv programs .
we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
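As a minimal sketch of the kind of model these SRILM setups estimate, the toy code below trains a bigram model with interpolated Kneser-Ney smoothing. This is an illustration only: real experiments run SRILM's `ngram-count` on large corpora, and the discount value here is a conventional choice, not one taken from the source.

```python
# Toy bigram language model with interpolated Kneser-Ney smoothing.
from collections import Counter

def train_kn_bigram(sentences, discount=0.75):
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    cont = Counter(w for _, w in bigrams)        # distinct left contexts per word
    followers = Counter(u for u, _ in bigrams)   # distinct continuations per history
    n_types = len(bigrams)                       # total distinct bigram types

    def prob(w, u):
        p_cont = cont[w] / n_types               # continuation probability of w
        if unigrams[u] == 0:
            return p_cont                        # unseen history: back off fully
        disc = max(bigrams[(u, w)] - discount, 0) / unigrams[u]
        lam = discount * followers[u] / unigrams[u]   # leftover mass
        return disc + lam * p_cont
    return prob

lm = train_kn_bigram([["a", "b", "a"], ["a", "b", "b"]])
```

A quick sanity check is that, for a given history, the probabilities over all observed continuations sum to one.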
coreference resolution is the task of partitioning a set of mentions ( i.e. , person , organization and location ) into entities .
coreference resolution is a well known clustering task in natural language processing .
we also use glove vectors to initialize the word embedding matrix in the caption embedding module .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications .
word embeddings have proved useful in downstream nlp tasks such as part of speech tagging , named entity recognition , and machine translation .
a 5-gram language model was built using srilm on the target side of the corresponding training corpus .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext .
we also used word2vec to generate dense word vectors for all word types in our learning corpus .
misra et al use a latent dirichlet allocation topic model to find coherent segment boundaries .
brody and lapata extend the latent dirichlet allocation model to combine evidence from different types of contexts .
in this paper , we focus on class-based models of selectional preferences .
in this paper , we evaluated five models for the acquisition of selectional preferences .
pang et al observed that the top 2633 unigrams are better features than unigrams or adjectives for sentiment classification of a document .
pang et al proved that unigrams and bigrams , adjectives and part of speech tags are important features for a machine learning based sentiment classifier .
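As an illustration of the unigram and bigram features mentioned in this pair, a hypothetical extractor might collect n-gram counts like this (the function name and joining convention are assumptions for the sketch):

```python
# Sketch: collect unigram and bigram counts as sentiment-classifier features.
from collections import Counter

def ngram_features(tokens):
    feats = Counter(tokens)                                       # unigrams
    feats.update("_".join(bg) for bg in zip(tokens, tokens[1:]))  # bigrams
    return feats

feats = ngram_features(["not", "a", "good", "movie"])
```

Bigrams such as "not_a" let a classifier see local negation context that unigrams alone miss.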
however , only a few techniques to learn finite-state transducers for machine translation purposes can be found .
nowadays , only a few techniques exist for inferring finite-state transducers .
relation extraction is the task of detecting and classifying relationships between two entities from text .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
we show that this method achieves state of the art performance .
we have shown that state of the art performance can be achieved by using this approach .
the standard minimum error rate training algorithm was used for tuning .
parameters were tuned using minimum error rate training .
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .
coreference resolution is a well known clustering task in natural language processing .
stephens et al propose 17 classes targeted to relations between genes .
stephens et al propose 17 very specific classes targeting relations between genes .
however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing .
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community .
in the work of wang et al , a variant of attention-based lstm was proposed .
wang et al utilized attention-based lstm , which takes into account aspect information during attention .
from the perspective of online language comprehension , processing difficulty is quantified by surprisal .
pcfg surprisal is a measure of incremental hierarchic syntactic processing .
the task of automatically assigning predefined meanings to words in contexts , known as word sense disambiguation , is a fundamental task in computational lexical semantics .
word sense disambiguation , the task of automatically assigning predefined meanings to words occurring in context , is a fundamental task in computational lexical semantics .
we use phrase-based and hierarchical mt systems as implemented by koehn et al for our experiments .
we have implemented a hierarchical phrase-based smt model similar to chiang .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
pcfg surprisal is a measure of incremental hierarchic syntactic processing .
from the perspective of online language comprehension , processing difficulty is quantified by surprisal .
luong et al segment words using morfessor , and use recursive neural networks to build word embeddings from morph embeddings .
luong et al created a hierarchical language model that uses rnn to combine morphemes of a word to obtain a word representation .
rewrite rules are used in many areas of natural language and speech processing , including syntax .
context-dependent rewrite rules are used in many areas of natural language and speech processing .
keyphrases also offers a programming framework for developing new extraction algorithms .
alchemyapi offers a web service for keyword extraction .
we demonstrate superagent as an add-on extension to mainstream web browsers such as microsoft edge and google chrome .
we demonstrate superagent as an add-on extension to mainstream web browsers and show its usefulness to user ’ s online shopping experience .
yao et al attempted to improve the specificity with the reinforcement learning framework by using the averaged idf score of the words in the response as a reward .
yao et al diversified the response by a loss function in which words with high inverse document frequency values are preferred .
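The inverse document frequency scores behind both variants can be sketched as below. The toy documents are invented for illustration; real systems compute IDF over a large corpus.

```python
# Sketch: inverse document frequency, used to reward specific (high-IDF)
# words in generated responses.
import math

def idf_scores(documents):
    n = len(documents)
    df = {}
    for doc in documents:
        for w in set(doc):            # count each word once per document
            df[w] = df.get(w, 0) + 1
    return {w: math.log(n / c) for w, c in df.items()}

docs = [["i", "see"], ["i", "agree"], ["tickets", "are", "refundable"]]
idf = idf_scores(docs)
```

Common words such as "i" get low scores, while rare, content-bearing words score high, which is what makes averaged IDF a usable specificity reward.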
semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
automatic detection of semantic roles has received a lot of attention lately .
copious work has been done lately on semantic roles .
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .
word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context .
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
we apply a pretrained glove word embedding .
we use pre-trained embeddings from glove .
parameter optimisation is done by mini-batch stochastic gradient descent where back-propagation is performed using the adadelta update rule .
training is done using stochastic gradient descent over mini-batches with the adadelta update rule .
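A minimal sketch of the Adadelta update rule itself, applied to a 1-d quadratic loss f(x) = x². The rho and eps values follow common defaults and the step count is illustrative; this is not the authors' training code.

```python
# Sketch of the Adadelta update rule on f(x) = x^2.

def adadelta_minimize(grad_fn, x, steps=300, rho=0.95, eps=1e-6):
    eg2 = 0.0   # running average of squared gradients
    edx2 = 0.0  # running average of squared updates
    for _ in range(steps):
        g = grad_fn(x)
        eg2 = rho * eg2 + (1 - rho) * g * g
        dx = -((edx2 + eps) ** 0.5 / (eg2 + eps) ** 0.5) * g
        edx2 = rho * edx2 + (1 - rho) * dx * dx
        x += dx
    return x

x_final = adadelta_minimize(lambda x: 2 * x, 5.0)  # gradient of x^2 is 2x
```

Unlike plain SGD, there is no global learning rate: the step size is scaled by the ratio of the two running averages.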
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .
we preprocessed all the corpora used with scripts from the moses toolkit .
we tokenized , cleaned , and truecased our data using the standard tools from the moses toolkit .
soricut and echihabi explore pseudo-references and document-aware features for document-level ranking , using bleu as quality label .
soricut and echihabi propose document-level features to predict document-level quality for ranking purposes , using bleu as the quality label .
annotation was conducted on a modified version of the brat web-based annotation tool .
the annotation was performed manually using the brat annotation tool .
negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition ( cite-p-18-3-8 ) .
although negation is a very relevant and complex semantic aspect of language , current proposals to annotate meaning either dismiss negation or only treat it in a partial manner .
experimental results show substantial improvements of the acm in comparison with classical cluster models and word n-gram models .
results show approximately 6-10 % cer reduction of the acms in comparison with the word trigram models , even when the acms are slightly smaller .
we use the stanford parser to extract a set of dependencies from each comment .
we use the stanford parser to generate a dg for each sentence .
in ( 2 ) , however , it seems clear from context that we are dealing with an unpleasant person for whom laugh entails bitter laugh .
in ( 2 ) , however , it seems clear from context that we are dealing with an unpleasant person for whom laugh entails bitter laugh .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences .
secondly , a knowledge-based criterion is used to supervise the hierarchical splitting of these semantic-related tags .
by introducing a knowledge-based criterion , we decide whether these new tags should be split into subcategories from a semantic perspective .
curran and moens have demonstrated that dramatically increasing the volume of raw input text used to extract context information significantly improves the quality of extracted synonyms .
curran and moens found that dramatically increasing the volume of raw input data for distributional similarity tasks increases the accuracy of synonyms extracted .
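The distributional method this pair refers to can be sketched as follows: each word is represented by a vector of context counts, and candidate synonyms are ranked by cosine similarity. The corpus here is a toy; the effect Curran and Moens measured comes from scaling the raw input text.

```python
# Sketch: context-count vectors compared by cosine similarity.
from collections import Counter

def context_vectors(sentences, window=1):
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    if not a or not b:
        return 0.0
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb)

sents = [["the", "cat", "sleeps"], ["the", "dog", "sleeps"], ["a", "cat", "eats"]]
vecs = context_vectors(sents)
```

Here "cat" and "dog" share contexts ("the", "sleeps") and so come out similar, which is the signal synonym extraction exploits.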
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .
we use the srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of the training corpus .
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we conjecture and empirically show that entailment graphs exhibit a “ tree-like ” property , i.e. , that they can be reduced into a structure similar to a directed forest .
we first identify that entailment graphs exhibit a “ tree-like ” property and are very similar to a novel type of graph termed forest-reducible graph .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
we employ srilm toolkit to linearly interpolate the target side of the training corpus with the wmt english corpus , optimizing towards the mt tuning set .
sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of a review .
modified kneser-ney trigram models are trained using srilm on the chinese portion of the training data .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
in this paper we present lobbyback , a system to reconstruct the “ dark corpora ” comprised of model legislation .
in this paper we present lobbyback , a system that reverse engineers model legislation from observed text .
automatically acquired lexicons with subcategorization information have already proved accurate and useful enough for some purposes .
at present , automatically acquired verb lexicons with scf information have already proved accurate and useful enough for some nlp purposes ( cite-p-8-3-5 , cite-p-8-3-3 ) .
a tri-gram language model is estimated using the srilm toolkit .
a 4-gram language model is trained with the srilm toolkit .
we introduce a new clustering method called hierarchical graph factorization clustering ( hgfc ) .
we introduce then a new method called hierarchical graph factorization clustering ( hgfc ) ( cite-p-17-5-8 ) .
in machine translation and text summarization , results are automatically evaluated based on sentence comparison .
in text summarization and machine translation , summary comparison based on sentence similarity has been applied for automatic evaluation .
we implemented the different aes models using scikit-learn .
for all classifiers , we used the scikit-learn implementation .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .
we convert the question into a sequence of learned word embeddings by looking up the pre-trained vectors , such as glove .
for the classification task , we use pre-trained glove embedding vectors as lexical features .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we used the sri language modeling toolkit to calculate the log probability and two measures of perplexity .
we have created the first publicly-available corpus of gold standard negative deceptive opinion spam , containing 400 reviews of 20 chicago hotels , which we have used to compare the deception detection capabilities of untrained human judges .
following the framework of cite-p-12-1-12 , we use amazon ’ s mechanical turk service to produce the first publicly available 1 dataset of negative deceptive opinion spam , containing 400 gold standard deceptive negative reviews of 20 popular chicago hotels .
in this paper , we use the term uncertain information .
in this paper , we develop attention mechanisms for uncertainty detection .
as is shown in , the japanese orthography is highly irregular , which contributes to a substantial number of out-of-vocabulary words in the machine translation output .
in addition , the highly irregular japanese orthography as is analyzed in poses a challenge for machine translation tasks .
our approach to this subtask is based on the sieves proposed by lee et al .
most of these sieves are relaxed versions of the ones proposed by lee et al .
crowdsourcing is the use of the mass collaboration of internet passersby for large enterprises on the world wide web such as wikipedia and survey companies .
crowdsourcing is a scalable and inexpensive data collection method , but collecting high quality data efficiently requires thoughtful orchestration of crowdsourcing jobs .
krishnakumaran and zhu use the isa relation in wordnet for metaphor recognition .
krishnakumaran and zhu use wordnet knowledge to differentiate between metaphors and literal usage .
as mentioned earlier , our approach was motivated by karttunen 's implementation .
as mentioned earlier , our approach was motivated by karttunen 's implementation as described in karttunen 1984 .
a 4-gram language model is trained on the monolingual data by the srilm toolkit .
the language model is trained with the sri lm toolkit , on all the available french data without the ted data .
in addition , we automatically rescale models so that they have physically plausible sizes and orient them so that they have a consistent up and front direction .
in addition , we assume the models have been scaled to physically plausible sizes and oriented with consistent up and front direction .
in all cases , we used the implementations from the scikit-learn machine learning library .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
relation extraction is the task of finding semantic relations between two entities from text .
our model by construction is similar to the approach based on the ising spin model described in .
under this setting , we compare our method to the spin model described in .
ji et al introduced an extra latent variable to a hierarchical rnn model to represent discourse relation .
ji et al proposed a latent variable rnn for modeling discourse relations between sentences .
the interpretation of events and entities is highly contextually dependent .
the interpretation of event descriptions is highly contextually dependent .
we set all feature weights by optimizing bleu directly using minimum error rate training on the tuning part of the development set .
we used minimum error rate training to tune the feature weights for maximum bleu on the development set .
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
given the model parameters and a sentence math-w-2-16-0-10 , determine the most probable translation of math-w-2-16-0-18 .
given the parameters of ibm model 3 , and a sentence pair math-w-5-1-0-21 , compute the probability math-w-5-1-0-30 .
here we investigate the benefits of displaying the discourse structure information .
in this paper we explore the utility of the navigation map , a graphical representation of the discourse structure .
in this paper , we introduce a large chinese short text summarization dataset constructed from the chinese microblogging website sina weibo , which is released to the public .
in this paper , we take one step back and focus on constructing lcsts , the large-scale chinese short text summarization dataset by utilizing the naturally annotated web resources on sina weibo .