the proposed metrics are robust to a variety of training conditions , such as the data volume and domain .
furthermore , the proposed metrics are robust to various training conditions , such as the data size and domain .
for the document embedding , we use a doc2vec implementation that downsamples higher-frequency words for the composition .
with the english gigaword corpus , we use the skip-gram model as implemented in word2vec to induce embeddings .
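as a hedged illustration of this kind of setup , a minimal gensim sketch follows ; the corpus file and hyperparameters are assumptions for illustration , not details from the cited work .

# skip-gram training sketch with gensim (assumed corpus and hyperparameters)
from gensim.models import Word2Vec

sentences = [line.split() for line in open("gigaword_sample.txt")]  # hypothetical file
model = Word2Vec(
    sentences,
    vector_size=300,  # embedding dimensionality (assumed)
    sg=1,             # 1 = skip-gram, 0 = cbow
    window=5,
    min_count=5,
    sample=1e-3,      # downsamples higher-frequency words, as mentioned above
)
model.wv.save_word2vec_format("embeddings.txt")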
for english , there is no significant dependency treebank so we followed most previous work in using dependency trees automatically derived from constituent trees in the large penn treebank wsj corpus .
there exists no large-scale dependency treebank for english , and we thus had to construct a dependency-annotated corpus automatically from the penn treebank .
we trained word embeddings for this dataset using word2vec on around 10m documents of clinical records .
we trained the embedding vectors with the word2vec tool on the large unlabeled corpus of clinical texts provided by the task organizers .
while wen et al.'s dataset is more than twice as large as ours , it is less diverse both in terms of input and in terms of text .
we show that while wen et al.'s dataset is more than twice as large as ours , it is less diverse both in terms of input and in terms of text .
such that math-w-3-1-2-54 , define math-w-3-1-2-59 .
recall that a derivation takes the form math-w-8-1-1-7 .
in this paper , we have provided a new perspective to predict the cqa answer quality .
in this paper , we address the problem for predicting cqa answer quality as a classification task .
in this paper , we develop declarative rules which govern the translation of natural language description of these concepts .
in this paper , we introduce a framework for incorporating declarative knowledge in word problem solving .
jansen et al describe answer reranking experiments on ya using a diverse range of lexical , syntactic and discourse features .
jansen et al report that answer reranking benefits from lexical semantic models , and describe experiments using skipavg embeddings pretrained using the english gigaword corpus .
we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses .
we compare the model against the moses phrase-based translation system , applied to phoneme sequences .
in section 6 we describe our machine learning approach and show results on pos tagging .
in section 6 we describe our machine learning approach and show results on pos tagging code-switched text .
we used kappa statistics to evaluate the annotations made by the annotators in the second phase .
we evaluated annotation reliability by using the kappa statistic .
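as a hedged sketch , agreement of this kind can be computed with scikit-learn's kappa implementation ; the labels below are made up for illustration .

# cohen's kappa between two annotators (toy labels)
from sklearn.metrics import cohen_kappa_score

annotator_a = ["pos", "neg", "pos", "neu", "pos"]
annotator_b = ["pos", "neg", "neu", "neu", "pos"]
print(cohen_kappa_score(annotator_a, annotator_b))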
therefore , we employ negative sampling and adam to optimize the overall objective function .
note that we employ negative sampling to transform the objective .
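for reference , one common formulation of the skip-gram negative sampling objective ( not necessarily the exact variant used in these works ) is :

\log \sigma\!\left(\mathbf{v}'_o{}^{\top} \mathbf{v}_c\right) + \sum_{i=1}^{k} \mathbb{E}_{n_i \sim P_n(w)}\!\left[\log \sigma\!\left(-\mathbf{v}'_{n_i}{}^{\top} \mathbf{v}_c\right)\right]

where \mathbf{v}_c is the input vector , \mathbf{v}'_o the output vector of the observed word , and the k negative words n_i are drawn from a noise distribution P_n(w) .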
statistical significance of difference from the baseline bleu score was measured by using paired bootstrap re-sampling .
we performed paired bootstrap sampling to test the significance in bleu score differences .
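a minimal sketch of paired bootstrap resampling follows , assuming per-segment scores that can be aggregated by summing ; corpus-level bleu instead requires recomputing n-gram statistics over each resample , so this is a simplification .

# paired bootstrap resampling sketch over per-segment scores
import random

def paired_bootstrap(scores_a, scores_b, trials=1000):
    # returns the fraction of resampled test sets on which system a beats system b
    n = len(scores_a)
    wins = 0
    for _ in range(trials):
        idx = [random.randrange(n) for _ in range(n)]  # resample test items with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / trials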
we used the implementation of random forest in scikit-learn as the classifier .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
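as a hedged sketch , both classifiers are available in scikit-learn ; the toy features and labels below are illustrative only .

# random forest and logistic regression in scikit-learn (toy data)
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X_train, y_train = [[0.1, 1.0], [0.9, 0.0], [0.2, 0.9], [0.8, 0.1]], [0, 1, 0, 1]
rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
lr = LogisticRegression().fit(X_train, y_train)
print(rf.predict([[0.2, 0.8]]), lr.predict([[0.2, 0.8]]))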
we pre-train the 200-dimensional word embeddings on each dataset with skip-gram .
all word vectors are trained on the skipgram architecture .
we trained svm models with rbf kernel using scikit-learn .
we trained an svm with rbf kernel using scikit-learn .
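a minimal scikit-learn sketch of such an svm follows ; C and gamma are library defaults , not values reported in these works .

# svm with rbf kernel (toy data; hyperparameters are scikit-learn defaults)
from sklearn.svm import SVC

X, y = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]], [0, 0, 1, 1]
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict([[0.9, 0.1]]))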
in the supervised phase , sentiment polarity labels of documents are used to guide bswe learning .
through the supervised learning phase , math-w-8-10-0-6 is optimized by maximizing sentiment polarity probability .
weller et al use noun class information as tree labels in syntactic smt to model selectional preferences of prepositions .
weller et al propose using noun class information to model selectional preferences of prepositions in a string-to-tree translation system .
birke and sarkar proposed the trope finder system to recognize verbs with non-literal meaning using word sense disambiguation and clustering .
birke and sarkar present a sentence clustering approach for non-literal language recognition implemented in the trofi system .
the induction of selectional preferences from corpus data was pioneered by resnik .
one of the first approaches to the automatic induction of selectional preferences from corpora was the one by resnik .
for language modeling , we computed 5-gram models using irstlm and queried the model with kenlm .
after standard preprocessing of the data , we train a 3-gram language model using kenlm .
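as a hedged sketch , a trained arpa model can be queried from python with the kenlm bindings ( the model itself is built beforehand with kenlm's lmplz tool ; the file name below is hypothetical ) .

# querying a kenlm language model from python
import kenlm

lm = kenlm.Model("3gram.arpa")  # hypothetical arpa file produced by lmplz
print(lm.score("this is a test sentence", bos=True, eos=True))  # log10 probability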
we use the distance based logistic triplet loss which gave better results than a contrastive loss .
we use the distance based logistic triplet loss , which vo and hays report exhibits better performance in image similarity tasks .
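one common way to write a distance-based logistic triplet loss ( possibly differing in detail from vo and hays' exact formulation ) is :

L(a, p, n) = \log\left(1 + e^{\,d(a,p) - d(a,n)}\right)

where a , p , n are anchor , positive and negative embeddings and d is a distance such as squared euclidean distance ; the loss shrinks as the positive pair becomes closer than the negative pair .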
the penn discourse treebank is the largest available corpus of annotations for discourse relations , covering one million words of the wall street journal .
the penn discourse treebank is the largest manually annotated corpus of discourse relations on top of one million word tokens from the wall street journal .
named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .
named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature .
we use word2vec as the vector representation of the words in tweets .
in this run , we use a sentence vector derived from word embeddings obtained from word2vec .
we used the moses decoder , with default settings , to obtain the translations .
we obtained a phrase table out of this data using the moses toolkit .
one is the bilexical dependency model and the other is the generative model .
one is a bilexical model , which is a kind of discriminative model , and the other is a generative model .
hierarchical machine translation extends the phrase-based model by allowing the use of non-contiguous phrase pairs .
a hierarchical phrase-based translation model reorganizes phrases into hierarchical ones by reducing sub-phrases to variables .
recent work has focused on a much larger set of fine grained labels .
recent work has focused on a much larger set of fine-grained types .
coreference resolution is the task of grouping mentions to entities .
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .
the encoder and decoder are two-layer lstms with 500-dimensional hidden states and 500-dimensional word embeddings .
the decoder and encoder word embeddings are of size 500 ; the encoder uses a bidirectional lstm layer with 1k units to encode the source side .
for dependency grammar induction , smith and eisner favored short attachments using a fixed-weight feature whose weight was optionally annealed during learning .
smith and eisner propose structural annealing , in which a strong bias for local dependency attachments is enforced early in learning , and then gradually relaxed .
we represent each citation as a feature set in a support vector machine framework and use n-grams of length 1 to 3 as well as dependency triplets as features .
we represent each citation as a feature set in a support vector machine framework which has been shown to produce good results for sentiment classification .
phrase-based models excel at capturing local reordering phenomena and memorizing multi-word translations .
phrase-based translation systems rely on the language model and the lexicalized reordering model to capture lexical dependencies that span phrase boundaries .
disfluency detection is the task of recognizing non-fluent word sequences in spoken language transcripts ( cite-p-25-3-15 , cite-p-25-3-10 , cite-p-25-3-12 ) .
disfluency detection is the task of detecting these infelicities in spoken language transcripts .
4-gram language models are trained over the target side of the training data , using srilm with modified kneser-ney discounting .
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .
a typical approach for sentiment classification is to use supervised machine learning algorithms with bag-of-words as features .
a typical approach for sentiment classification is to use supervised machine learning algorithms with bag-of-words as features , which is widely used in topic-based text classification .
faruqui et al introduce a graph-based retrofitting method where they post-process learned vectors with respect to semantic relationships extracted from additional lexical resources .
faruqui et al demonstrated that embeddings learned without supervision can be retro-fitted to better conform to some semantic lexicon .
finkel and manning modeled the task of named entity recognition together with parsing .
finkel and manning show how to model parsing and named entity recognition together .
we conceptualized the induction problem as one of detecting alternate linkings and finding their canonical syntactic form .
we formulate the role induction problem as one of detecting alternations and finding a canonical syntactic form for them .
a simile is a figure of speech comparing two essentially unlike things .
a simile is a figure of speech comparing two fundamentally different things .
case-insensitive bleu4 was used as the evaluation metric .
case-insensitive nist bleu was used to measure translation performance .
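as a hedged sketch , case-insensitive corpus bleu can be computed with sacrebleu by lowercasing before matching ; the strings below are illustrative .

# case-insensitive corpus bleu with sacrebleu (toy strings)
import sacrebleu

hyps = ["the cat sat on the mat"]
refs = [["The cat sat on the mat ."]]  # one reference stream
print(sacrebleu.corpus_bleu(hyps, refs, lowercase=True).score)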
we present a novel interactive visualisation that we have developed for displaying collaborations .
we also present a novel visualisation interface for browsing collaborations .
we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit .
we used the moses toolkit for performing statistical machine translation .
we have presented a technique for creating a* estimates for inference .
we present a novel method for creating a* estimates for structured search problems .
we considered one layer and used the adam optimizer for parameter optimization .
for optimization , we used adam with default parameters .
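a minimal pytorch sketch of adam with its default hyperparameters follows ; the model is a stand-in for illustration .

# one adam update with default hyperparameters in pytorch
import torch

model = torch.nn.Linear(10, 2)                    # stand-in model
optimizer = torch.optim.Adam(model.parameters())  # defaults: lr=1e-3, betas=(0.9, 0.999), eps=1e-8
loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()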
luong and manning ( 2016 ) propose a hybrid architecture for nmt that translates mostly at the word level and consults the character components for rare words when necessary .
luong and manning proposed a hybrid scheme that consults character-level information whenever the model encounters an oov word .
for our parsing experiments , we use the berkeley parser .
we adopt the berkeley parser to train our sub-models .
in the parse tree , strong evidence about either aspect of the model should positively impact the other aspect .
because a named entity should correspond to a node in the parse tree , strong evidence about either aspect of the model should positively impact the other aspect .
we use kendall's τ as a performance measure for evaluating the output of information-ordering components .
kendall's τ can be easily used to evaluate the output of automatic systems , irrespective of the domain or application at hand .
we use the skll and scikit-learn toolkits .
for all classifiers , we used the scikit-learn implementation .
grammatical information for the sentential context is obtained using the dependency relation output of the stanford parser .
sentences are passed through the stanford dependency parser to identify the dependency relations .
due to the imbalanced nature of the training data , we specifically adopted a two-step classifier to deal with subtask a .
since the training data is imbalanced , we specifically designed a two-step classifier to address subtask a .
and the results demonstrate the effectiveness of the proposed model .
moreover , the results show the robustness of the proposed model .
we extend the seq2seq framework to conduct template reranking and template-aware summary generation .
then , we extend the seq2seq framework to jointly conduct template reranking and template-aware summary generation ( rewriting ) .
in our experiments , we used the srilm toolkit to build a 5-gram language model using the ldc arabic gigaword corpus .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
the srilm toolkit was used to build this language model .
the srilm toolkit is used to build these language models .
active learning is a framework that makes it possible to efficiently train statistical models by selecting informative examples from a pool of unlabeled data .
active learning is a promising way for sentiment classification to reduce the annotation cost .
a total of 42 systems were submitted from 21 distinct teams .
a total of 42 systems were submitted to the task .
in this demo , we introduce need4tweet , a twitterbot implementing a combined system for nee and ned in tweets .
in this demo paper , we present need4tweet , a twitterbot for nee and ned in tweets .
for this task , we use a deep learning method to obtain the final predicted answer .
to address this machine comprehension task , we utilized rule-based methods and a deep learning method .
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) .
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles .
where math-w-8-3-0-1 is a fresh nonterminal symbol , the characteristic string math-w-8-3-0-12 is the string obtained from math-w-8-3-0-22 .
for math-w-2-6-2-13 , we write math-w-2-6-2-21 to denote the interval math-w-2-6-2-30 , and use [ i ] as a shorthand for math-w-2-6-2-51 .
on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .
in particular , we use an rnn based on the long short-term memory unit , designed to avoid vanishing gradients and to remember long-distance dependencies in the input sequence .
we consider both long short-term memory networks and gated recurrent unit networks , two variants of rnns that use gating to mitigate vanishing gradients .
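as a hedged illustration of the two gated variants , a pytorch sketch follows ; the tensor sizes are arbitrary .

# lstm and gru side by side in pytorch (arbitrary sizes)
import torch

x = torch.randn(8, 15, 100)  # (batch, sequence length, embedding size)
lstm = torch.nn.LSTM(input_size=100, hidden_size=64, batch_first=True)
gru = torch.nn.GRU(input_size=100, hidden_size=64, batch_first=True)
lstm_out, (h_n, c_n) = lstm(x)  # lstm keeps a separate cell state
gru_out, h_gru = gru(x)         # gru folds gating into a single hidden state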
cattoni et al apply statistical language models to da classification .
cattoni et al also apply statistical language models to da classification .
we first used a variant of the lesk algorithm , which is based on exact word match .
we used lesk as the similarity measure in our lesk-based algorithm .
to extract the features of the rule selection model , we parse the english part of our training data using the berkeley parser .
for samt grammar extraction , we parsed the english training data using the berkeley parser with the provided treebank-trained grammar .
we experiment with word2vec and glove for estimating similarity of words .
we use pre-trained glove vector for initialization of word embeddings .
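a minimal sketch of initializing an embedding matrix from pre-trained glove vectors follows ; the vocabulary and file path are assumptions for illustration .

# building an embedding matrix from a glove text file (assumed path and toy vocab)
import numpy as np

vocab = {"the": 0, "cat": 1}  # toy vocabulary: word -> row index
emb = np.random.uniform(-0.05, 0.05, (len(vocab), 300))  # random init for words missing from glove
with open("glove.840B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        if parts[0] in vocab:
            emb[vocab[parts[0]]] = np.asarray(parts[1:], dtype=np.float32)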
we use the logistic regression classifier as implemented in the skll package , which is based on scikit-learn , with f1 optimization .
we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .
he et al propose maximum entropy models which combine rich context information for selecting translation rules during decoding .
besides , he et al built a maximum entropy model which combines rich context information for selecting translation rules during decoding .
according to the metrics of semeval 2018 , our system gets final scores of 0.636 , 0.531 , 0.731 , 0.708 , and 0.408 in terms of pearson correlation .
according to semeval 2018's metrics , our model runs got final scores of 0.636 , 0.531 , 0.731 , 0.708 , and 0.408 in terms of pearson correlation on the 5 subtasks , respectively .
novel metaphors are marked by their unusualness in a given context .
metaphorical instances tend to have personal topics .
in this paper , we have proposed a deep belief network based approach to model the semantic relevance for the question answering pairs .
to solve the first problem , we present a deep belief network ( dbn ) to model the semantic relevance between questions and their answers .
named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding .
named entity recognition ( ner ) is a challenging learning problem .
we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .
to encode the original sentences we used word2vec embeddings pre-trained on google news .
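as a hedged sketch , such pre-trained vectors can be loaded with gensim ; the local file path is an assumption .

# loading pre-trained word2vec vectors with gensim (assumed local path)
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
print(wv["protein"].shape)  # (300,)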
nuclearity in rhetorical structure theory is explained in terms of relative importance of text spans .
rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text .
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
we use case-insensitive bleu as evaluation metric .
for evaluation metric , we used bleu at the character level .
this is due to the underspecified representation we are using .
this is largely due to the underspecified representation we are using .
we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
we formalize the problem of reference page selection .
we propose an automatic method that can select reference pages .
for sampling nodes , non-interactive active learning algorithms exclude expert annotators ’ human labels from the protocol .
note that , unlike active learning used in the nlp community , non-interactive active learning algorithms exclude expert annotators ’ human labels from the protocol .
therefore , we used bleu and rouge as automatic evaluation measures .
additionally , we used bleu , a very popular machine translation evaluation metric , as a feature .
part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information .
part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications .
shen et al describe the result of filtering rules by insisting that target-side rules are well-formed dependency trees .
shen et al proposed a target dependency language model for smt to employ target-side structured information .
abstract meaning representations are a graph-based representation of the semantics of sentences .
abstract meaning representation is a compact , readable , whole-sentence semantic annotation .
cite-p-17-3-16 tackled this issue by allowing the number to be dynamically adjusted for each word .
cite-p-17-5-4 modified the skip-gram model in order to learn multiple embeddings for each word type .
in tmhmm , tmhmms and tmhmmss , the number of " topics " in the latent states is a hyperparameter .
in tmhmm , tmhmms and tmhmmss , the number of " topics " in the latent states and a dialogue is a hyperparameter .
the parameter weights are optimized with minimum error rate training .
the model parameters are trained using minimum error-rate training .
if the anaphor is a pronoun , the cache is searched for a plausible referent .
the anaphor is a definite noun phrase and the referent is in focus .
we use the english penn treebank to evaluate our model implementations and yamada and matsumoto head rules are used to extract dependency trees .
we extract dependency structures from the penn treebank using the head rules of yamada and matsumoto .
predicate models such as framenet are core resources in most advanced nlp tasks , such as question answering , textual entailment or information extraction .
predicate models such as framenet , verbnet or propbank are core resources in most advanced nlp tasks , such as question answering , textual entailment or information extraction .
wordnet is a large lexical database of english , where open class words are grouped into concepts represented by synonyms that are linked to each other by semantic relations such as hyponymy and meronymy .
wordnet is a large semantic lexicon database of english words , where nouns , verbs , adjectives and adverbs are grouped into sets of cognitive synonyms .
the bleu score measures the precision of n-grams with respect to a reference translation , with a penalty for overly short sentences .
this score measures the precision of unigrams , bigrams , trigrams and four-grams with respect to a reference translation , with a penalty for overly short sentences .
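for reference , the standard corpus-level bleu with uniform weights and brevity penalty is :

\mathrm{BLEU} = \mathrm{BP} \cdot \exp\left(\sum_{n=1}^{4} w_n \log p_n\right), \qquad \mathrm{BP} = \begin{cases} 1 & c > r \\ e^{\,1 - r/c} & c \le r \end{cases}

where p_n is the modified n-gram precision , w_n = 1/4 , c is the total candidate length and r the effective reference length .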
we use randomization test to calculate statistical significance .
for assessing significance , we apply the approximate randomization test .
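a minimal sketch of the approximate randomization test follows , again assuming per-item scores that can be summed ; corpus-level metrics would need their sufficient statistics swapped instead .

# approximate randomization test over paired per-item scores
import random

def approx_randomization(scores_a, scores_b, trials=10000):
    # p-value for the observed difference between the two systems
    observed = abs(sum(scores_a) - sum(scores_b))
    count = 0
    for _ in range(trials):
        sa = sb = 0.0
        for a, b in zip(scores_a, scores_b):
            if random.random() < 0.5:  # randomly swap the paired outputs
                a, b = b, a
            sa += a
            sb += b
        if abs(sa - sb) >= observed:
            count += 1
    return (count + 1) / (trials + 1)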
we used the moses decoder , with default settings , to obtain the translations .
for all submissions , we used the phrase-based variant of the moses decoder .
specifically , we generalise the model of cohn and lapata to our abstractive task .
our work builds on the model developed by cohn and lapata .
we consider a phrase-based translation model and a hierarchical translation model .
the disadvantage of word-to-word translation is overcome by phrase-based translation and log-linear model combination .