sentence1 : string , lengths 16 to 446 characters
sentence2 : string , lengths 14 to 436 characters
we trained a 4-gram language model on this data with kneser-ney discounting using srilm .
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
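Several pairs in this list describe training n-gram language models with SRILM and Kneser-Ney discounting. Below is a minimal sketch of how such a model might be built from Python by shelling out to SRILM's `ngram-count` tool; the corpus and output file names are placeholders, and the flags assume a standard SRILM installation rather than any cited system's exact setup.

```python
import subprocess

# Train a 4-gram LM with (modified) Kneser-Ney discounting using SRILM's
# ngram-count tool, assumed to be installed and on the PATH.
# "corpus.txt" and "lm.arpa" are placeholder file names.
subprocess.run(
    [
        "ngram-count",
        "-order", "4",          # 4-gram model
        "-kndiscount",          # Kneser-Ney discounting
        "-interpolate",         # interpolated estimates
        "-text", "corpus.txt",  # tokenized training text, one sentence per line
        "-lm", "lm.arpa",       # output model in ARPA format
    ],
    check=True,
)
```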
we use the word2vec tool to train monolingual vectors , and the cca-based tool for projecting word vectors .
we use a cws-oriented model modified from the skip-gram model to derive word embeddings .
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures .
before querying , a corpus is automatically terminologically analysed by the atract system , which performs terminology recognition .
before querying , a corpus is subjected to automatic terminological analysis and the results are annotated in xml .
entity linking ( el ) is a central task in information extraction — given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase ) .
entity linking ( el ) is the task of mapping mentions of an entity in text to the corresponding entity in knowledge graph ( kg ) ( cite-p-16-3-6 , cite-p-16-1-11 , cite-p-16-1-7 ) .
the srilm toolkit is used to build the character-level language model for generating the lm features in nsw detection system .
uedin has used the srilm toolkit to train the language model and relies on kenlm for language model scoring during decoding .
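The pair above mentions KenLM for language model scoring during decoding. A minimal sketch with the `kenlm` Python bindings follows, assuming a trained model file named `lm.arpa` already exists; this is illustrative, not the uedin pipeline.

```python
import kenlm  # Python bindings for KenLM (pip-installable)

# Load a previously trained language model (placeholder file name).
model = kenlm.Model("lm.arpa")

# Total log10 probability of a tokenized sentence
# (sentence boundary markers are added by default).
print(model.score("this is a test sentence"))

# Per-word perplexity of the same sentence.
print(model.perplexity("this is a test sentence"))
```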
we take full advantage of questions ' textual descriptions to address the data sparseness and cold-start problems .
by incorporating textual information , rcm can effectively deal with data sparseness problem .
we presented a supervised classification algorithm for metonymy recognition , which exploits the similarity between examples of conventional metonymy and operates on semantic classes .
we present a supervised machine learning algorithm for metonymy resolution , which exploits the similarity between examples of conventional metonymy .
for the experiment reported in section 5 , we use one of the largest , multi-lingual , freely available aligned corpora , europarl .
for experiments reported in this paper , we use one of the largest , multi-lingual , freely available aligned corpora , europarl .
luong et al created a hierarchical language model that uses rnn to combine morphemes of a word to obtain a word representation .
luong et al learn word representations based on morphemes that are obtained from an external morphological segmentation system .
the number of frames and the number of roles must be predefined , which is the most limiting property of the algorithm .
the most limiting property of the algorithm is that the number of frames and roles must be predefined .
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .
semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation .
pennington et al incorporated aggregated global word co-occurrence statistics from the corpus when inducing word embeddings .
pennington et al show that the word embeddings produced by the model achieve state-of-the-art performance in the word analogy task .
this result contradicts yamashita , who states that scrambling is unrelated to information structure .
yamashita even insists that heaviness is more important for scrambling than referentiality is .
riloff et al identify sarcasm that arises from the contrast between a positive sentiment referring to a negative situation .
riloff et al investigate sarcasm where the writer holds a positive sentiment toward a negative situation .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities .
the information extraction experiments rely on the wapiti tool that implements linear chain crf .
the model is trained with a gradient descent algorithm using the wapiti toolkit .
in this paper , we use self-training to generalize the lexicon of a combinatory categorial grammar ( ccg ) .
in this paper , we show that semi-supervised viterbi-em can be used to extend the lexicon of a generative ccg parser .
lexical selection is a very important task in statistical machine translation ( smt ) .
lexical selection is a significant problem for wide-coverage machine translation : depending on the context , a given source language word can often be translated into different target language words .
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence .
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) .
we describe our proposed methods of active learning for japanese dependency parsing , along with other existing methods .
we investigate active learning methods for japanese dependency parsing .
mihalcea et al used various text based similarity measures , including wordnet and corpus based similarity methods , to determine if two phrases are paraphrases .
mihalcea et al proposed a method to measure the semantic similarity of words or short texts , considering both corpus-based and knowledge-based information .
maximum entropy modeling is one of the best techniques for natural language processing .
maximum entropy classification is a technique which has proven effective in a number of natural language processing applications .
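Maximum entropy classification, as mentioned above, is equivalent to multinomial logistic regression over features. A minimal sketch with scikit-learn on bag-of-words counts; the toy texts and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (made up for illustration).
texts = ["great movie", "terrible plot", "wonderful acting", "boring and slow"]
labels = ["pos", "neg", "pos", "neg"]

# Multinomial logistic regression over bag-of-words counts is
# a maximum entropy classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["slow but wonderful"]))
```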
by applying these ideas to japanese why-qa , we improved precision by 4.4 % against all the questions in our test set over the current state-of-the-art system for japanese .
as a final result , we improved the precision by 4.4 % against all the questions in our test set over the current state-of-the-art system of japanese why-qa ( cite-p-19-1-19 ) .
word2vec and glove models are a popular choice for word embeddings , representing words by vectors for downstream natural language processing .
there are various methods , such as word2vec and global vectors for word representation , which create a distributed representation of words .
for the n-gram lm , we use the srilm toolkit to train a 4-gram lm on the xinhua portion of the gigaword corpus .
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .
in order to limit the size of the vocabulary of the nmt models , we further segmented tokens in the parallel data into sub-word units via byte pair encoding using 8k operations for both languages .
in order to limit the size of the vocabulary of the nmt models , we segmented tokens in the parallel data into sub-word units via byte pair encoding using 30k operations .
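The pair above segments tokens into sub-word units with byte pair encoding (8k or 30k merge operations). Below is a minimal, self-contained sketch of the merge-learning step in the spirit of the BPE algorithm; it is not the subword-nmt implementation, and the toy vocabulary and merge count are purely illustrative.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a {space-separated word: freq} vocab."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), w): f for w, f in vocab.items()}

# Toy corpus: words split into characters with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}

num_merges = 10  # real systems use e.g. 8k or 30k merge operations
for _ in range(num_merges):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print("merged", best)
```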
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .
rtms at semeval-2014 contain results from sts , semantic relatedness and entailment , and cross-level semantic similarity tasks .
the system has been applied to semeval-2014 task 3 , cross-level semantic similarity .
the language model is trained and applied with the srilm toolkit .
all the language models are built with the sri language modeling toolkit .
we use pre-trained vectors from glove for word-level embeddings .
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .
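Several pairs initialize word embeddings with pre-trained GloVe vectors (e.g. the 300d vectors trained on 840B tokens). A minimal sketch of loading such a text file into an embedding matrix; the file name and vocabulary below are placeholders.

```python
import numpy as np

def load_glove(path, vocab, dim=300):
    """Build an embedding matrix for `vocab` from a GloVe text file.

    Words missing from the GloVe file keep a random initialization.
    """
    rng = np.random.default_rng(0)
    emb = rng.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    word2idx = {w: i for i, w in enumerate(vocab)}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in word2idx and len(values) == dim:
                emb[word2idx[word]] = np.asarray(values, dtype="float32")
    return emb

# Placeholder vocabulary and file name for illustration.
embeddings = load_glove("glove.840B.300d.txt", ["the", "cat", "sat"], dim=300)
```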
in this paper , we first study the semantic representation of words in brain activity .
this study utilized word embeddings to investigate the semantic representations in brain activity as measured by fmri .
the language model was trained using srilm toolkit .
a 4-gram language model is trained on the monolingual data by srilm toolkit .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit .
hu and liu applied frequent itemset mining to identify product features without supervision , and considered adjectives collocated with feature words as opinion words .
hu and liu extracted features based on association rule mining algorithms in conjunction with frequency to extract main product characteristics .
we also report the results using bleu and ter metrics .
we compute the interannotator agreement in terms of the bleu score .
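The pair above reports BLEU (and TER) scores. A minimal sketch of corpus-level BLEU with the sacrebleu package; the hypothesis and reference strings are invented for illustration.

```python
import sacrebleu

# Made-up system outputs and (single) reference translations.
hypotheses = ["the cat sat on the mat", "there is a dog in the garden"]
references = [["the cat is sitting on the mat", "a dog is in the garden"]]

# corpus_bleu expects a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```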
in this paper , we develop a supervised learning algorithm that corrects triangulated word translation probabilities by relying on word translation distributions .
in this paper , we develop a supervised learning technique that improves noisy phrase translation scores obtained by phrase table triangulation .
we first obtain word representations using the popular skip-gram model with negative sampling introduced by mikolov et al and implemented in the gensim package .
we extract continuous vector representations for concepts using the continuous log-linear skipgram model of mikolov et al , trained on the 100m word british national corpus .
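The pair above derives word vectors with the skip-gram model with negative sampling, as implemented in the gensim package. A minimal sketch with a toy corpus; the dimensionality, window and epoch settings are illustrative rather than those of any cited system.

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; a real setup would stream millions of sentences.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=1 selects the skip-gram architecture; negative=5 enables negative
# sampling with 5 noise words per positive example.
model = Word2Vec(sentences, vector_size=100, window=5, sg=1, negative=5,
                 min_count=1, epochs=50, seed=1)

print(model.wv.most_similar("cat", topn=3))
```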
srl is the task of identifying arguments for a certain predicate and labelling them .
srl is the process by which predicates and their arguments are identified and their roles are defined in a sentence .
we solve this sequence tagging problem using the mallet implementation of conditional random fields .
we formulated disorder mention identification as a sequence labeling problem at token level and used conditional random fields .
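The pair above casts mention detection as token-level sequence labeling with conditional random fields (MALLET in the original). A minimal sketch using the sklearn-crfsuite package instead, with invented token features and BIO labels.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Very small hand-crafted feature set for token i of a sentence."""
    word = sent[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Made-up training data: one sentence with BIO disorder-mention labels.
sents = [["patient", "denies", "chest", "pain", "today"]]
labels = [["O", "O", "B-DISORDER", "I-DISORDER", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)

print(crf.predict(X))
```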
twitter is a famous social media platform capable of spreading breaking news ; thus , most rumour-related research uses the twitter feed as a basis for research .
twitter is a microblogging service that has 313 million monthly active users .
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
the matrix is written in the tdl formalism , which is interpreted by the lkb parser , generator , and grammar development environment .
the type hierarchy is organised by a typed feature structure hierarchy , and can be read by the lkb system and the pet parser .
the word embeddings are initialized by pre-trained glove embeddings .
we obtain pre-trained tweet word embeddings using glove .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
information extraction ( ie ) is a task of identifying " facts " ( entities , relations and events ) within unstructured documents , and converting them into structured representations ( e.g. , databases ) .
information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) .
they also pose unique challenges for traditional computational systems .
polysynthetic languages pose unique challenges for traditional computational systems .
in our model , we use negative sampling discussed in to speed up the computation .
therefore , we employ negative sampling and adam to optimize the overall objective function .
recognizing temporal order among events is a challenging task .
temporal event order recognition is a challenging task .
in the remainder of this paper , we suggest two related approaches for tagging causal constructions .
with this work , we have demonstrated the viability of two approaches to tagging causal constructions .
in recent years , vector space models ( vsms ) have proved successful in solving various nlp tasks , including named entity recognition , part-of-speech tagging , parsing , and semantic role labeling .
in recent years , neural word embeddings have proved very effective in improving various nlp tasks ( e.g . part-of-speech tagging , chunking , named entity recognition and semantic role labeling ) ( cite-p-22-1-2 ) .
mitchell and lapata presented a framework for representing the meaning of phrases and sentences in vector space .
mitchell and lapata investigated a variety of compositional operators to combine word vectors into phrasal representations .
to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
text regression problem : given a piece of text , predict a real-world continuous quantity associated with the text 's meaning .
we consider a text regression problem : given a piece of text , predict a r-valued quantity associated with that text .
word sense disambiguation ( wsd ) is a key enabling technology .
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .
kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes .
kalchbrenner et al ( 2014 ) propose a cnn framework with multiple convolution layers , with latent , dense and low-dimensional word embeddings as inputs .
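The pair above refers to CNN sentence models with dynamic k-max pooling. A minimal sketch of the k-max pooling operation itself in PyTorch; the tensor shapes are illustrative and this is not the full DCNN architecture.

```python
import torch

def k_max_pooling(x, k, dim=-1):
    """Keep the k largest activations along `dim`, preserving their order."""
    top_idx = x.topk(k, dim=dim).indices.sort(dim=dim).values
    return x.gather(dim, top_idx)

# Toy feature maps: batch of 2, 4 channels, sequence length 9.
feats = torch.randn(2, 4, 9)
pooled = k_max_pooling(feats, k=3)   # -> shape (2, 4, 3)
print(pooled.shape)
```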
for example , pitler and nenkova also use text length , sentence-to-sentence transitions , word overlap and pronoun occurrences as features for predicting readability .
for example , pitler and nenkova use entity grids , syntactic features , discourse relations , vocabulary , and length features .
this is because chinese is a pro-drop language ( cite-p-21-3-1 ) that allows the subject to be dropped in more contexts than english does .
chinese is a language without natural word delimiters .
prettenhofer and stein use correspondence learning algorithm to learn a map between the source language and the target language .
prettenhofer and stein use the structural correspondence learning algorithm to learn a map between the source language and the target language .
following koo et al , we used the mxpost tagger trained on the full training data to provide part-of-speech tags for the development and the test set , and we used 10-way jackknifing to generate tags for the training set .
following the work of , we used the mxpost tagger trained on training data to provide part-of-speech tags for the development and the test set , and we used 10-way jackknifing to generate tags for the training set .
jackendoff and others have proposed that lexical rules be interpreted as redundancy statements which abbreviate the statement of the lexicon but which are not applied generatively .
jackendoff and others have proposed that lexical rules be interpreted as redundancy statements that abbreviate the statement of the lexicon but that are not applied generatively .
dagan and itai proposed an approach to wsd using monolingual corpora , a bilingual lexicon and a parser for the source language .
dagan and itai proposed a method for choosing target words using mono-lingual corpora .
in the literature , there is some discussion on the benefit of lemmatization for information extraction .
in the literature , there is some discussion on the benefit of lemmatization for question answering .
the experiments presented in this paper are carried out with the moses toolkit , a state-of-the-art open-source phrasebased smt system .
the experiments of the phrase-based smt systems are carried out using the open source moses toolkit .
for decoding , we used the state-of-the-art phrasebased smt toolkit moses with default options , except for the distortion limit .
for our experiments we used the moses phrasebased smt toolkit with default settings and features , including the five features from the translation table , and kb-mira tuning .
it is possible to compute the moore-penrose pseudoinverse using the svd in the following way .
it is possible to compute the moore-penrose pseudoinverse using the svd in the following way .
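The pair above states that the Moore-Penrose pseudoinverse can be computed from the SVD. A minimal numpy sketch of that computation, with a tolerance for near-zero singular values.

```python
import numpy as np

def pinv_via_svd(A, rcond=1e-15):
    """Moore-Penrose pseudoinverse A+ = V diag(1/s) U^T from the thin SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only singular values above a relative tolerance; zero out the rest.
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))  # True
```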
due to its low memory footprint and efficient training time , it can be realistically applied for on-demand adaptation of big systems .
due to its small memory footprint and short training time it can be realistically applied to adapt large , general domain systems in order to improve their performance on specific domains .
in our data sample , semantic language classification appears to be almost perfectly correlated with genealogical relationships between languages .
we find that , in our sample of languages , lexical semantic spaces largely coincide with genealogical relations .
we use the pool-based approach to active learning , because it is a natural fit for domain adaptation .
we follow and use the uncertainty sampling strategy in our active learning setting .
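The pair above combines pool-based active learning with uncertainty sampling. A minimal sketch of a few selection rounds with scikit-learn; the data, classifier and query batch size are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder pool: 200 examples, 20 of which start off labeled.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
labeled = np.arange(20)
pool = np.arange(20, 200)

for round_ in range(3):
    # Train on the currently labeled set.
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

    # Uncertainty sampling: pick the pool items whose most probable
    # class has the lowest confidence.
    confidence = clf.predict_proba(X[pool]).max(axis=1)
    query = pool[np.argsort(confidence)[:10]]      # 10 queries per round

    # "Annotate" the queried items (here we simply reveal the true labels).
    labeled = np.concatenate([labeled, query])
    pool = np.setdiff1d(pool, query)
    print(f"round {round_}: {len(labeled)} labeled examples")
```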
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .
the n-gram language models are trained using the srilm toolkit or similar software developed at hut .
coreference resolution is the task of determining which mentions in a text refer to the same entity .
coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) .
an example would be customer suggestions about improvement in a commercial entity .
targeted suggestions were mainly the ones which suggest improvements in a commercial entity .
we provide an extensive evaluation with different classifiers and evaluation setups , and suggest a suitable evaluation setup for the task .
we performed an extensive evaluation with different classifiers and evaluation setups , and suggest the out-of-domain evaluation as the most suitable for the task .
takamura et al also have reported a method for extracting polarity of words .
takamura et al proposed using spin models for extracting semantic orientation of words .
we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the rouge evaluation metric .
in this work we developed an extractive summarization model which is globally trained by optimizing the rouge evaluation metric .
we used the sri language modeling toolkit with kneser-kney smoothing .
we used the srilm toolkit to generate the scores with no smoothing .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the feature weights are tuned with minimum error-rate training to optimise the character error rate of the output .
indeed , bert improved the state-of-the-art for a range of nlp benchmarks by a significant margin .
bert outperforms previous state-of-the-art models in the eleven nlp tasks in the glue benchmark by a significant margin .
tuning was performed by minimum error rate training .
the standard minimum error rate training algorithm was used for tuning .
as a second major contribution , we introduce a new set of features to capture aspects of participant behavior .
we introduced a new set of features which describe the structure of the dialog .
text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 ) .
since text categorization is a task based on predefined categories , we know the categories for classifying documents .
the rule-based classifier of uchiyama et al incorporates syntactic information about japanese compound verbs , a type of mwe composed of two verbs .
the rule-based classifier of uchiyama , baldwin , and ishizaki incorporates syntactic information about japanese compound verbs , a type of mwe composed of two verbs .
this paper describes a system for navigating large collections of information about cultural heritage .
this paper describes a system that allows users to explore large cultural heritage collections .
ravi and knight ( 2011b ) have shown that one can use decipherment to learn a full translation model from non-parallel data .
ravi and knight ( 2011b ) have shown that , even for larger vocabulary sizes , it is possible to learn a full translation model from non-parallel data .
we explore the differences between language models compiled from texts originally written in the target language ( o ) and language models compiled from translated texts .
we investigate the differences between language models compiled from original target-language texts and those compiled from texts manually translated to the target language .
in this and our other n-gram models , we used kneser-ney smoothing .
we choose modified kneser-ney as the smoothing algorithm when learning the n-gram model .
several authors investigate neural network models that learn a vector of latent variables to represent each word .
bengio et al have proposed a neural network based model for vector representation of words .
given a set of texts , the texts are sorted by the comparator .
by applying this comparator , a set of texts is sorted .
to the best of our knowledge , ours is the first work to use phonetic feature vectors for transliteration .
to the best of our knowledge , ours is the first unsupervised approach for substring-based transliteration .
entity disambiguation is the task of linking an extracted mention to a specific definition or instance of an entity in a knowledge base .
entity disambiguation is the task of linking entity mentions in unstructured text to the corresponding entities in a knowledge base .
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context .
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .
as word vectors the authors use word2vec embeddings trained with the skip-gram model .
for the cluster-based method , we use word2vec , which provides word vectors trained on the google news corpus .
eisner proposed a generative model for dependency parsing .
eisner proposed probabilistic models of dependency parsing .
composition models yield reductions in perplexity when combined with a standard n-gram model over the n-gram model alone .
the composition models yield reductions in perplexity when combined with a standard n-gram language model over the n-gram model alone .
for example , chung and gildea reported preliminary work that has shown a positive impact of automatic empty element detection on statistical machine translation .
for example , chung and gildea have proved that automatic empty category detection has a positive impact on machine translation .
then , a small set of cue-phrase-based patterns were utilized to collect a large number of discourse instances .
then , an unsupervised method was proposed starting from a small set of cue-phrase-based patterns to mine high quality common ssrs for each discourse relation .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
we use the glove vectors of 300 dimension to represent the input words .
most prominently , they have been used for word sense disambiguation , noun learning and recently , amr parsing and generation .
most prominently , it has been used for wsd , noun learning and amr parsing and generation .