columns : sentence1 ( string , lengths 16 – 446 ) ; sentence2 ( string , lengths 14 – 436 )
word2vec is a language modeling technique that maps words from a vocabulary to continuous vectors .
word2vec is a method to obtain distributed representations for words by using neural networks with one hidden layer .
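as a minimal sketch of the idea , assuming the gensim library ( not named above ) as a stand-in implementation ; the toy corpus and hyperparameters are illustrative only :

    # train a tiny cbow word2vec model and look up one word vector
    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]  # toy corpus
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=0)  # sg=0 : cbow
    vec = model.wv["cat"]  # a 100-dimensional continuous vector for "cat"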
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
blei et al proposed lda as a general bayesian framework and gave a variational model for learning topics from data .
lau et al leverage a common framework to address sense induction and disambiguation based on topic models .
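a minimal sketch of learning topics with a variational lda implementation , using gensim as an assumed stand-in for the original code ; the toy documents and topic count are illustrative :

    # build a bag-of-words corpus and fit lda with variational inference
    from gensim import corpora
    from gensim.models import LdaModel

    texts = [["apple", "fruit", "sweet"], ["car", "engine", "wheel"]]
    dictionary = corpora.Dictionary(texts)
    bow_corpus = [dictionary.doc2bow(t) for t in texts]
    lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary)
    print(lda.print_topics())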
ceylan and kim compare a number of methods for identifying the language of search engine queries of 2 to 3 words .
ceylan and kim compared a number of methods for identifying the language of search engine queries of 2 to 3 words .
in this paper , we propose a machine learning algorithm for shallow semantic parsing .
we have described an algorithm which significantly improves the state-of-the-art in shallow semantic parsing .
human syntactic processing shows many signs of taking place within a general-purpose short-term memory .
it is an interesting possibility that human syntactic processing may occur entirely within a general-purpose short-term memory .
we use pre-trained glove vectors for the initialization of word embeddings .
word embeddings are initialized from glove 100-dimensional pre-trained embeddings .
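a minimal sketch of such an initialization , assuming the standard glove text format ; the file path and the task vocabulary are hypothetical , and out-of-vocabulary words keep a random fallback vector :

    # fill an embedding matrix with pre-trained glove vectors where available
    import numpy as np

    dim = 100
    vocab = {"the": 0, "cat": 1}  # hypothetical task vocabulary
    emb = np.random.uniform(-0.05, 0.05, (len(vocab), dim))  # random fallback
    with open("glove.6B.100d.txt", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in vocab:
                emb[vocab[word]] = np.asarray(vec, dtype=np.float32)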
we focus on training classifiers with weakly and strongly labeled data , as well as semi-supervised learning .
more importantly , semi-supervised learning on large amount of unlabeled data effectively increases the classification accuracy .
the system used a tri-gram language model built with the sri toolkit using the modified kneser-ney interpolated smoothing technique .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
we used the state-of-the-art phrase-based smt toolkit moses with default options , except for the distortion limit .
we used the moses machine translation decoder , using the default features and decoding settings .
in this paper , we propose a hierarchical attention model to identify the right warrant .
in this paper , we propose a hierarchical attention model to select the supporting warrant for the argument .
twitter is a medium where people post real-time messages to discuss different topics and express their sentiments .
twitter is a social platform which contains rich textual content .
phrase based systems rely on a lexicalized distortion model and the target language model to produce output words in the correct order .
the phrase-based translation systems rely on language model and lexicalized reordering model to capture lexical dependencies that span phrase boundaries .
for pos tagging and syntactic parsing , we use the stanford nlp toolkit .
we use the stanford corenlp shift-reduce parsers for english , german , and french .
first , we introduce sentiment lexicon features , which effectively improve classification .
we investigate an effective way to use sentiment lexicon features .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .
in this paper , we explore spelling errors as a source of information for detecting the native language .
in this work , we investigate the usefulness of spelling errors for the native language identification task .
lexical simplification is the task of identifying and replacing complex words ( cws ) in a text to improve the overall understandability and readability .
lexical simplification is a subtask of the more general text simplification task which attempts at reducing the cognitive complexity of a text so that it can be ( better ) understood by a larger audience .
the systems were tuned using a small extracted parallel dataset with minimum error rate training and then tested with different test sets .
the weights of the different feature functions were optimised by means of minimum error rate training .
this allows us to extract paraphrasal templates from sentences that are not , by themselves , paraphrases , and to avoid using a comparable corpus .
looking for paraphrases among templates , instead of among sentences , allows us to avoid using an aligned corpus .
barzilay and mckeown propose a text-to-text generation technique for synthesizing common information across documents using sentence fusion .
barzilay and mckeown proposed an idea called sentence fusion that integrates information in overlapping sentences to produce a nonoverlapping summary sentence .
we present an unsupervised model for coreference resolution .
we presented an unsupervised graph-based model for coreference resolution .
previous work has shown that parsers typically perform poorly outside of their training domain .
previous works have shown that , in general , parser performance degrades when applied to out-of-domain sentences .
for the classification task , we use pre-trained glove embedding vectors as lexical features .
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training .
the 5-gram target language model was trained using kenlm .
the un-pre-marked japanese corpus is used to train a language model using kenlm .
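a minimal sketch of runtime scoring with kenlm's python bindings ; the arpa file name is an assumption , and the model itself is trained offline :

    # load a pre-built language model and score a sentence
    import kenlm

    lm = kenlm.Model("5gram.arpa")
    print(lm.score("this is a test", bos=True, eos=True))  # log10 probability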
this baseline uses pre-trained word embeddings using word2vec cbow and fasttext .
the neural embeddings were created using the word2vec software .
the output of our experiments was evaluated using two metrics : bleu and lexical accuracy .
we evaluated the proposed method using four evaluation measures : bleu , nist , wer , and per .
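a minimal sketch of computing corpus-level bleu , with sacrebleu as an assumed implementation ; the hypothesis and reference strings are toy examples :

    # corpus bleu over one hypothesis with one reference stream
    import sacrebleu

    hyps = ["the cat sat on the mat"]
    refs = [["the cat sat on a mat"]]  # one inner list per reference stream
    print(sacrebleu.corpus_bleu(hyps, refs).score)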
the influential work of grosz and sidner provides a helpful starting point for understanding our approach .
we believe the seminal work on attentions , intentions and the structure of discourse by grosz and sidner is best suited for the inference of social goals .
we introduce the notion of “ frame relatedness ” , i.e . relatedness among prototypical situations .
in this paper we introduce the notion of “ frame relatedness ” , i.e . relatedness among prototypical situations as represented in the framenet database .
minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
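a toy , single-feature illustration of the idea behind minimum error rate training : choose the feature weight that maximizes bleu on a development set . the grid search below is a stand-in for och's exact line search , and the n-best lists are hypothetical :

    # rerank n-best lists under each candidate weight and keep the best bleu
    import sacrebleu

    def rerank(weight, nbest):
        # pick the highest-scoring hypothesis from each n-best list
        return [max(cands, key=lambda c: weight * c[1])[0] for cands in nbest]

    nbest = [[("the cat sat on the mat", -1.0), ("cat the sat mat on the", -0.5)]]
    refs = [["the cat sat on the mat"]]
    weights = [w / 10.0 for w in range(-10, 11)]
    best_w = max(weights, key=lambda w: sacrebleu.corpus_bleu(rerank(w, nbest), refs).score)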
word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 ) .
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the weight parameter λ is tuned by a minimum error-rate training algorithm .
monolingual parallel corpora have also been used as a source of paraphrases .
monolingual comparable corpora are also useful sources of paraphrases .
medlock and briscoe also used single words as input features in order to classify sentences from scientific articles in biomedical domain as speculative or non-speculative .
medlock and briscoe used single words as input features in order to classify sentences from biological articles as speculative or non-speculative based on semi-automatically collected training examples .
it is well known that improved parsing performance can be achieved by leveraging the alternative perspectives provided by several parsing models .
it has recently been shown that different nlp models can be effectively combined using dual decomposition .
the language model was trained using srilm toolkit .
we trained a 5-grams language model by the srilm toolkit .
we use pre-trained 50-dimensional glove vectors for the initialization of word embeddings .
we use pre-trained glove vectors for the initialization of word embeddings .
we have also shown that these preferences are useful for disambiguating polysemous verbs within their local contexts of occurrence .
we additionally show that these preferences are useful as priors for a verb sense disambiguator .
sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) .
sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .
davidov and rappoport proposed a method that detects function words by their high frequency , and utilizes these words for the discovery of symmetric patterns .
davidov and rappoport describe an algorithm for unsupervised discovery of word categories and evaluate it on russian and english corpora .
for every grammar element , we can find which part of the sentence is difficult to read .
from the detected items and their readability levels , we can identify which part of the sentence is difficult to read .
we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .
for all models , we use the 300-dimensional glove word embeddings .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we trained a 3-gram language model on the spanish side using srilm .
semantic parsing is the task of converting natural language utterances into formal representations of their meaning .
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations .
we consider the domain adversarial training network on the user factor adaptation task .
to this end , we use a domain adversarial training module to prevent the classifier from distinguishing between domains .
topic signatures are weighted topical vectors that are associated with senses or concepts .
in summarization , topic signatures are a set of terms indicative of a topic .
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc .
named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text .
alternative approximations are presented , which differ in index size and the strictness of the phrase-matching constraints .
four approximations were presented , which differ in size and the strictness of phrase-matching constraints .
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing .
we then perform mert which optimizes parameter settings using the bleu metric , while a 5-gram language model is derived with kneser-ney smoothing trained using srilm .
on the web , a lot of research effort is spent on aggregating the results of nlp tools .
recently , research and commercial communities have spent efforts to publish nlp services on the web .
we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
in this paper , we introduce our system for the shared task of irony detection in english tweets , a part of the 2018 semeval .
in this paper we introduce our system for the task of irony detection in english tweets , a part of semeval 2018 .
one of the first to automatically induce selectional preferences from corpora was resnik .
the idea of inducing selectional preferences from corpora was introduced by resnik .
in this work , we have proposed a discriminative model for unsupervised morphological analysis .
in this work , we make use of semantic information to help morphological analysis .
we used srilm to build a 4-gram language model with kneser-ney discounting .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
fader et al presented a qa system that maps questions onto simple queries against open ie extractions , by learning paraphrases from a large monolingual parallel corpus , and performing a single paraphrasing step .
fader et al recently presented a scalable approach to learning an open domain qa system , where ontological mismatches are resolved with learned paraphrases .
recurrent neural network architectures have proven to be well suited for many natural language generation tasks .
different types of architectures such as feedforward neural networks and recurrent neural networks have since been used for language modeling .
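a minimal sketch of a recurrent ( lstm ) language model , written in pytorch as an assumed framework ; vocabulary size and dimensions are illustrative :

    # embed tokens, run an lstm, and project back to next-token logits
    import torch
    import torch.nn as nn

    class LstmLm(nn.Module):
        def __init__(self, vocab=1000, dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab)

        def forward(self, tokens):  # tokens : (batch, seq)
            hidden, _ = self.rnn(self.emb(tokens))
            return self.out(hidden)  # next-token logits

    logits = LstmLm()(torch.randint(0, 1000, (2, 10)))  # shape (2, 10, 1000)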
choi et al explore oh extraction using crfs with several manually defined linguistic features and automatically learnt surface patterns .
choi et al examine opinion holder extraction using crfs with several manually defined linguistic features and automatically learnt surface patterns .
grammar rules were extracted from europarl using the collins parser for syntax on the english side .
the target-syntax system is based on english parses from the collins parser .
zeng et al proposed a cnn integrating position embeddings to make up for the cnn's inability to capture contextual information .
zeng et al developed a deep convolutional neural network to extract lexical and sentence level features , which are concatenated and fed into the softmax classifier .
relation extraction is a challenging task in natural language processing .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
we adopt the maximum entropy model as our learning method .
for learning coreference decisions , we used a maximum entropy model .
the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit .
it yields state-of-the-art performance for a majority of languages .
moreover , it yields state-of-the-art performance for a majority of languages .
the support vector machine based machine learning approach is discriminative and makes use of both positive and negative examples to learn the distinction between the two classes .
the svm is based on a discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes .
text simplification ( ts ) is the task of modifying an original text into a simpler version of it .
text simplification ( ts ) is a monolingual text-to-text transformation task where an original ( complex ) text is transformed into a target ( simpler ) text .
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
we use the cube pruning method to approximately intersect the translation forest with the language model .
we use cube-pruning to approximately intersect with non-local features , such as n-gram language models .
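a minimal sketch of the lazy best-first enumeration at the heart of cube pruning : pop the best ( i , j ) combination of two sorted score lists from a heap , and only then push its neighbours . a real decoder rescores each popped item with non-local features such as an n-gram language model ; the scores here are toy values :

    # lazily enumerate the top-k sums of two decreasing score lists
    import heapq

    def cube_top_k(a, b, k):
        heap = [(-(a[0] + b[0]), 0, 0)]
        seen, out = {(0, 0)}, []
        while heap and len(out) < k:
            neg, i, j = heapq.heappop(heap)
            out.append(-neg)
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < len(a) and nj < len(b) and (ni, nj) not in seen:
                    seen.add((ni, nj))
                    heapq.heappush(heap, (-(a[ni] + b[nj]), ni, nj))
        return out

    print(cube_top_k([5, 3, 1], [4, 2, 0], 4))  # [9, 7, 7, 5]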
with experiments on many relations from two separate kbs , we show that our methods significantly outperform prior work on kb inference .
with experiments on many relations from two separate knowledge bases , we have shown that our methods significantly outperform prior work on knowledge base inference .
arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 ) .
arabic is a morphologically rich language that is much more challenging to work with , mainly due to its significantly larger vocabulary .
we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .
we set all feature weights using minimum error rate training , and we optimize their number on the development dataset .
our mt system is phrase-based and is developed using the moses statistical machine translation toolkit .
our baseline system is a popular phrase-based smt system , moses , with 5-gram srilm language model , tuned with minimum error training .
we showed that this approach reliably helps performance on both iwslt and nist chinese-english test sets , yielding consistent gains on all eight of the most commonly used automatic evaluation metrics .
this approach reliably improves performance on both iwslt and nist chinese-english test sets , producing consistent gains on all eight of the most commonly used automated evaluation metrics .
with the user and product attention , our model can take account of the global user preference and product characteristics .
in contrast , our model puts forward the idea of user-product attention by utilizing the global user preference and product characteristics .
the grammar is based on the standard hpsg analysis of english .
the grammar design is based on the standard hpsg analysis of english .
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .
our model builds on word2vec , a neural network based language model that learns word embeddings by maximizing the probability of raw text .
without using any labeled data , experiments on a chinese data set from four product domains show that the three-component framework is feasible .
without using any additional labeled data , experiments on a chinese data set from four product domains show that the proposed framework outperforms other previous work .
stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely .
stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it .
the natural language toolkit is a suite of program modules , data sets and tutorials supporting research and teaching in computational linguistics and natural language processing .
the natural language toolkit is a suite of program modules , data sets , tutorials and exercises , covering symbolic and statistical natural language processing .
for instance , mihalcea et al studied pmi-ir , lsa , and six wordnet-based measures on the text similarity task .
for instance , mihalcea et al compare two corpus-based and six knowledge-based measures on the task of text similarity computation .
we used a phrase-based smt model as implemented in the moses toolkit .
for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses .
recently , a new pre-trained model bert obtains new state-of-the-art results on a variety of natural language processing tasks .
pre-trained language models such as bert have been demonstrated to achieve state of the art performance on a range of language understanding tasks .
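a minimal sketch of obtaining contextual representations from a pre-trained bert , assuming the huggingface transformers library ( and pytorch tensors ) :

    # tokenize a sentence and take bert's final hidden states
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")
    inputs = tok("relation extraction is a crucial task", return_tensors="pt")
    hidden = bert(**inputs).last_hidden_state  # shape (1, seq_len, 768)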
we use logistic regression as the per-class binary classifier , implemented using liblinear .
we use a multi-class logistic regression classifier , and concatenate multiple features into a single vector .
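a minimal sketch of per-class binary logistic regression with the liblinear solver in scikit-learn ; the toy features and labels are illustrative :

    # one-vs-rest wrapping turns the binary classifier into a multi-class one
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    X = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]  # toy feature vectors
    y = ["pos", "neg", "neu"]
    clf = OneVsRestClassifier(LogisticRegression(solver="liblinear"))
    clf.fit(X, y)
    print(clf.predict([[0.2, 0.8]]))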
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .
we employ srilm toolkit to linearly interpolate the target side of the training corpus with the wmt english corpus , optimizing towards the mt tuning set .
we successfully apply the attention scheme to detect word senses and learn representations according to contexts .
here we employ the target word embedding as an attention to select the most appropriate senses to make up context word embeddings .
we utilize the mate parser to generate pseudo trees .
we use the tree parsing model proposed in prior work to score the converted trees .
for improving shift-reduce parsing , we propose a novel neural model to predict the constituent hierarchy related to each word before parsing .
in particular , we build a bidirectional lstm model , which leverages full sentence information to predict the hierarchy of constituents that each word starts and ends .
features in a relatively high-dimensional feature space may suffer from the data sparseness problem and thus exhibit less discriminative power on unseen data .
features in such a space may suffer from the data sparseness problem and thus have less discriminative power on unseen data .
for the semeval-2014 shared task , we found that improvements over the baseline are possible for all classes except “ conflict ” .
we showed that the performance for this task can be improved by using linguistically motivated features for all classes except conflict .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
we use the opensource moses toolkit to build a phrase-based smt system .
word alignment was performed using the berkeley cross-em aligner .
word alignment is produced by the berkeley aligner .
cite-p-17-3-2 proposed a recursive neural network designed to model the subtrees , and a cnn .
cite-p-17-3-15 proposed a convolutional neural network with position embeddings .
the tagging was performed using the stanford corenlp software and the robust accurate statistical parsing system , respectively .
extraction of pos tags was performed using the postaggerannotator from the stanford corenlp suite .
named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval .
named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text .
in this paper , we present an approach for the unsupervised knowledge extraction for taxonomies of concepts .
in this study , we consider the challenging problem of deriving taxonomies of a set of concepts under a specific domain of interest .
in this paper , we propose a novel method to obtain word representation by training blstm-rnn model .
our work demonstrates an alternative way to improve the blstm-rnn's performance by learning useful word representations .
text segmentation is the task of splitting text into segments by placing boundaries within it .
text segmentation is the task of automatically segmenting texts into parts .
for the classifiers we use the scikit-learn machine learning toolkit .
we use a set of 318 english function words from the scikit-learn package .