sentence1 : string ( lengths 16 - 446 )
sentence2 : string ( lengths 14 - 436 )
we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .
the development set is used to optimize feature weights using the minimum-error-rate algorithm .
unlike algorithms such as perceptron and stochastic gradient descent , our method keeps track of dual variables .
unlike our dcd algorithm , the extragradient method requires the learning rate to be specified .
motivated by the idea of addressing wce as a sequence labeling task , we employ the conditional random fields model and the corresponding wapiti toolkit to train our classifier .
motivated by the idea of addressing the wce problem as a sequence labeling process , we employ conditional random fields for our model training , with the wapiti toolkit .
our 5-gram language model is trained by the sri language modeling toolkit .
our 5-gram language model was trained by srilm toolkit .
in the first pass , the general information is extracted by segmenting the entire resume into consecutive blocks .
in the first pass , a resume is segmented into consecutive blocks attached with labels indicating the information types .
in this line of work , the focus is mainly on article content analysis , as a way to detect new potential translations , rather than link analysis .
in this line of work , the focus is mainly on article content analysis , as a way to detect new potential translations , rather than link analysis as done in our work .
the evaluation metric is case-sensitive bleu-4 .
translation quality is evaluated by case-insensitive bleu-4 metric .
second , we introduce the nus corpus of learner english ( nucle ) , a fully annotated one million words corpus of learner english .
we have introduced the nus corpus of learner english ( nucle ) , a fully annotated corpus of learner text .
in this work , we focus on training task-oriented dialogue systems through user interactions .
in this work , we present a hybrid learning method for training task-oriented dialogue systems through online user interactions .
later , miwa and bansal have implemented an end-to-end neural network to construct a context representation for joint entity and relation extraction .
based on tai et al , miwa and bansal introduced a tree lstm model that can handle different types of children .
information extraction ( ie ) is the task of extracting factual assertions from text .
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .
the pipeline consisted of punctuation normalization , tokenization , and truecasing using the standard moses scripts .
all corpora were preprocessed using the standard moses scripts to perform normalization , tokenization , and truecasing .
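The Moses preprocessing pipeline mentioned in the pair above can be wired together roughly as follows; this is a minimal sketch assuming a local checkout of the mosesdecoder scripts and a pre-trained truecasing model (paths and file names are hypothetical).

```python
# Minimal sketch of the Moses preprocessing pipeline described above.
# Assumes the mosesdecoder repository is available at MOSES (hypothetical path),
# that the input is English, and that a truecasing model was already trained
# with train-truecaser.perl.
import subprocess

MOSES = "/path/to/mosesdecoder/scripts"  # hypothetical install location

def preprocess(in_path, out_path, lang="en", truecase_model="truecase-model.en"):
    with open(in_path, "rb") as fin, open(out_path, "wb") as fout:
        norm = subprocess.Popen(
            ["perl", f"{MOSES}/tokenizer/normalize-punctuation.perl", "-l", lang],
            stdin=fin, stdout=subprocess.PIPE)
        tok = subprocess.Popen(
            ["perl", f"{MOSES}/tokenizer/tokenizer.perl", "-l", lang],
            stdin=norm.stdout, stdout=subprocess.PIPE)
        subprocess.run(
            ["perl", f"{MOSES}/recaser/truecase.perl", "--model", truecase_model],
            stdin=tok.stdout, stdout=fout, check=True)
```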
we use the pre-trained glove vectors to initialize word embeddings .
we use 300-dimensional word embeddings from glove to initialize the model .
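A small sketch of how pre-trained GloVe vectors might be loaded to initialize an embedding layer, as described above; the file name, vocabulary mapping, and random fallback for unseen words are assumptions.

```python
# Sketch of building an embedding matrix from pre-trained GloVe vectors.
# Assumes a plain-text GloVe file (e.g. glove.6B.300d.txt) and a vocab dict
# mapping word -> row index; out-of-vocabulary words keep a random init.
import numpy as np

def load_glove_matrix(glove_path, vocab, dim=300, seed=0):
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in vocab and len(vec) == dim:
                emb[vocab[word]] = np.asarray(vec, dtype="float32")
    return emb  # use this array to initialize the model's embedding layer
```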
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
for the automatic evaluation , the two most popular and widely used metrics , bleu and nist , were used .
two popular evaluation metrics nist and bleu were chosen for automatic evaluation .
sentence compression is the task of compressing long , verbose sentences into short , concise ones .
sentence compression is a task of creating a short grammatical sentence by removing extraneous words or phrases from an original sentence while preserving its meaning .
the corresponding weight is trained through the minimum error rate training method .
the feature weights λi are trained in concert with the lm weight via minimum error rate training .
particle swarm optimization ( pso ) is a meta-heuristic intelligent technique inspired by social behavior of the swarm .
particle swarm optimization is an evolutionary technique , inspired by the social behavior of birds .
in this paper , we overview recent advances in taxonomy learning .
in this paper , we present a survey on taxonomy learning from text corpora .
in this paper , we use a specific implementation of neural machine translation .
in this paper , we proposed a way to extend the size of the target vocabulary for neural machine translation .
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus .
we use the word2vec framework in the gensim implementation to generate the embedding spaces .
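A minimal gensim sketch of the word2vec setup in the pair above (100-dimensional skip-gram with negative sampling); it assumes gensim >= 4.0 and uses toy sentences as placeholders for the domain corpus.

```python
# Minimal gensim sketch of a skip-gram word2vec model with negative sampling;
# the toy sentences stand in for the tokenized domain corpus.
from gensim.models import Word2Vec

sentences = [["we", "train", "domain", "specific", "embeddings"],
             ["word2vec", "skip", "gram", "with", "negative", "sampling"]]

model = Word2Vec(
    sentences,
    vector_size=100,   # embedding dimensionality
    sg=1,              # 1 = skip-gram, 0 = CBOW
    negative=5,        # negative sampling
    window=5,
    min_count=1,
    workers=4,
)
vector = model.wv["embeddings"]  # 100-dimensional vector for a word
```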
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
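The SRILM recipe referenced repeatedly above (a 5-gram model with modified Kneser-Ney smoothing) boils down to one ngram-count call; the sketch below assumes the SRILM binaries are on PATH and uses hypothetical file names.

```python
# Hedged sketch of building a 5-gram LM with SRILM's ngram-count; assumes the
# SRILM binaries are installed and train.txt is the tokenized training corpus.
import subprocess

subprocess.run(
    ["ngram-count",
     "-order", "5",            # 5-gram model
     "-kndiscount",            # modified Kneser-Ney discounting
     "-interpolate",
     "-text", "train.txt",     # monolingual training data
     "-lm", "lm.5gram.arpa"],  # output ARPA language model
    check=True,
)
```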
we used the implementation of random forest in scikit-learn as the classifier .
we use a random forest classifier , as implemented in scikit-learn .
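A minimal scikit-learn sketch matching the random forest classifier mentioned above; the toy feature matrix and hyperparameters are illustrative assumptions.

```python
# Minimal scikit-learn random forest sketch; feature extraction is abstracted
# away, and the toy feature vectors and hyperparameters are assumptions.
from sklearn.ensemble import RandomForestClassifier

X_train = [[0.2, 1.0], [0.9, 0.1], [0.4, 0.8]]   # toy feature vectors
y_train = [0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[0.3, 0.9]]))
```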
we also propose a method to adapt embeddings for each classification task .
we also proposed a novel method ( i.e. , adaptation ) to add discriminative information to such embeddings .
phrase-based translation models are widely used in statistical machine translation .
machine translation systems typically require large , sentence-aligned bilingual texts to learn good translation models .
in both cases , we computed the word embeddings using the word2vec implementation of gensim .
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model .
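Loading publicly released word2vec vectors, as in the pair above, is typically done with gensim's KeyedVectors; the Google News file name below is an assumption about which public model is meant.

```python
# Sketch of loading publicly released word2vec vectors with gensim; the file
# name refers to the commonly distributed Google News CBOW model and is an
# assumption here.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(kv["translation"].shape)                 # (300,)
print(kv.most_similar("translation", topn=3))  # nearest neighbours
```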
the fully compacted grammar produces lower parsing performance than the extracted grammar , but a 58 % reduction ( without loss ) can still be achieved by using linguistic compaction , and a 69 % reduction yields a gain in recall but a loss in precision .
combined with simple thresholding , this method can be used to give a 58 % reduction in grammar size without significant change in parsing performance , and can produce a 69 % reduction with some gain in recall , but a loss in precision .
we have achieved a first evaluation based on the methodology defined in .
we chose to take up the evaluation method proposed in .
phrase-based smt segments a bilingual sentence pair into phrases that are continuous sequences of words or discontinuous sequences of words .
the phrase-based model segments a bilingual sentence pair into phrases that are continuous sequences of words .
bahdanau et al extend the vanilla encoder-decoder nmt framework by adding a small feed-forward neural network which learns which word in the source sentence is relevant for predicting the next word in the target sequence .
bahdanau et al propose integrating an attention mechanism in the decoder , which is trained to determine on which portions of the source sentence to focus .
utilizing the phrase alignment information , we design a scoring scheme for the cross-language document summarization task .
in this paper we propose a phrase-based framework for the task of cross-language document summarization .
the results evaluated by bleu score are shown in table 2 .
automatic evaluation results in terms of bleu scores are provided in table 2 .
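Several pairs above report case-sensitive or case-insensitive BLEU-4; a modern stand-in for those scorers is sacreBLEU (the cited work may have used mteval or multi-bleu.perl instead), sketched here.

```python
# Hedged example of computing corpus-level BLEU with sacreBLEU as a stand-in
# for the BLEU-4 scorers used in the work cited above.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```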
kalchbrenner et al ( 2014 ) propose a cnn framework with multiple convolution layers , with latent , dense and low-dimensional word embeddings as inputs .
kalchbrenner et al proposed to extend cnns max-over-time pooling to k-max pooling for sentence modeling .
we use the word2vec tool to train monolingual vectors , and the cca-based tool for projecting word vectors .
we use skip-gram with negative sampling for obtaining the word embeddings .
however , most recent studies are concerned with a binary perspective over humor .
currently , studies are mainly concerned with the binary evaluation of humor , whether it is funny or not .
we use negative sampling to approximate softmax in the objective function .
we use a combination of negative sampling and hierarchical softmax via backpropagation .
sarcasm is defined as ‘ a cutting , often ironic remark intended to express contempt or ridicule ’ .
sarcasm , commonly defined as ‘ an ironical taunt used to express contempt ’ , is a challenging nlp problem due to its highly figurative nature .
script knowledge is a body of knowledge that describes a typical sequence of actions people do in a particular situation ( cite-p-7-1-6 ) .
script knowledge is a form of structured world knowledge that is useful in nlp applications for natural language understanding tasks ( e.g. , ambiguity resolution , rahman and ng , 2012 ) , as well as for psycholinguistic models of human language processing , which need to represent event knowledge to model human expectations ( cite-p-15-3-5 , cite-p-15-3-4 ) of upcoming referents and utterances .
identification of user intent also has important implications in building intelligent conversational qa systems .
in conversational systems , understanding user intent is the key to the success of the interaction .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
relation extraction is the task of detecting and characterizing semantic relations between entities from free text .
twitter is a very popular microblogging site .
twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products .
we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .
as a strong baseline , we trained the skip-gram model of mikolov et al using the publicly available word2vec 5 software .
ckbs like wikipedia and wiktionary , which have been applied in computational methods only recently , offer new possibilities to enhance information retrieval .
ckbs like wikipedia and wiktionary , which have been applied in computational methods only recently , offer new possibilities to tackle this problem .
section 2 briefly overviews the related work on both zero anaphora resolution and tree kernel-based anaphora resolution .
section 2 briefly describes the related work on both zero anaphora resolution and tree kernel-based anaphora resolution .
we use the svm implementation available in the liblinear package .
for implementation , we used the liblinear package with all of its default parameters .
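A hedged sketch of the liblinear-based SVM setup above, using scikit-learn's LinearSVC (which wraps liblinear) with default parameters; the TF-IDF features and toy data are assumptions.

```python
# Minimal sketch of a linear SVM classifier; scikit-learn's LinearSVC wraps
# liblinear, so it stands in here for the liblinear package mentioned above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_texts = ["great translation quality", "poor fluency and grammar"]
train_labels = [1, 0]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)

clf = LinearSVC()            # liblinear-backed linear SVM, default settings
clf.fit(X_train, train_labels)
print(clf.predict(vectorizer.transform(["good grammar"])))
```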
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community and has been used for many problems ranging from machine translation ( cite-p-12-1-4 ) to question answering ( zhou et al. , 2011a ) .
dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation .
in this paper , we have proposed methods for identifying appropriate segments and expressions automatically from the data .
in this paper , we propose a method to identify important segments of textual data for analysis from full transcripts of conversations .
in this paper , we present a method that automatically constructs a named entity ( ne ) tagged corpus from the web .
in this paper , we presented a method that automatically generates an ne tagged corpus using enormous web documents .
this study is intended to deal with the problem of extracting binary relations between entity pairs from wikipedia ’ s english version .
in this study , we address the problem of extracting relations between entities from wikipedia’s english articles .
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .
semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels .
word embeddings have been trained using the word2vec tool .
we use word embedding pre-trained on newswire with 300 dimensions from word2vec .
in this work , we explore the use of ecoc to enhance the performance of centroid classifier .
in this work , we examine the use of ecoc for improving centroid text classifier .
the smt systems used a kenlm 5-gram language model , trained on the mono-lingual data from wmt 2015 .
an english 5-gram language model is trained using kenlm on the gigaword corpus .
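A rough sketch of training and querying a 5-gram KenLM model as described above; it assumes the lmplz binary and the kenlm Python bindings are installed, and the corpus file name is hypothetical.

```python
# Hedged sketch of training and querying a 5-gram KenLM model; assumes the
# kenlm binaries (lmplz) and Python module are installed, and that
# gigaword.tok.txt is the tokenized monolingual corpus (hypothetical name).
import subprocess
import kenlm

with open("gigaword.tok.txt", "rb") as fin, open("lm.arpa", "wb") as fout:
    subprocess.run(["lmplz", "-o", "5"], stdin=fin, stdout=fout, check=True)

model = kenlm.Model("lm.arpa")
print(model.score("this is a test sentence", bos=True, eos=True))
```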
the dependencies were obtained using the mate parser .
the corpus was processed with the mate dependency parser .
this transformation at most cubes the grammar size , but we show empirically that the size increase is only quadratic .
this transformation at most doubles the grammar ’ s rank and cubes its size , but we show that in practice the size increase is only quadratic .
toutanova and moore improved this approach by extending the error model with phonetic similarities over words .
toutanova and moore improve the model by incorporating pronunciation information .
we propose a joint , generative semi-supervised hierarchical topic model , i.e. , semi-supervised hierarchical latent dirichlet allocation ( sshlda ) , to overcome the defects of hlda and hllda while combining their merits .
in this paper , we have proposed a semi-supervised hierarchical topic model , i.e. , sshlda , which aims to solve the drawbacks of hlda and hllda while combining their merits .
as there is no available public data in chinese , we annotate 25k chinese sentences manually .
as there is no available public data in chinese , we annotate 25k chinese sentences manually for training and testing .
grosz and sidner argue that such relations between intentions are a crucial part of intentional structure .
grosz and sidner claim that discourse segmentation is an important factor , though obviously not the only one , governing the use of referring expressions .
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
ritter and etzioni proposed a generative approach to use extended lda to model selectional preferences .
adapting lda for selectional preference modeling was suggested independently by ó séaghdha and ritter , mausam , and etzioni .
we used srilm to build a 4-gram language model with kneser-ney discounting .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
to avoid this problem , some recent studies exploit bootstrapping or unsupervised techniques .
to break this bottleneck , some recent studies exploit bootstrapping or unsupervised techniques .
in recent years , there has been increasing interest in improving the quality of smt systems over a wide range of linguistic phenomena , including coreference resolution and modality .
in recent years there has been increasing interest in improving the quality of smt systems over a wide range of linguistic phenomena , including coreference resolution and modality .
we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove .
specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings .
we apply this method to english part-of-speech tagging and japanese morphological analysis .
we apply this method to english pos tagging and japanese morphological analysis .
we used word2vec , a powerful continuous bag-of-words model , to train word vectors for word similarity .
we learn our word embeddings by using word2vec 3 on unlabeled review data .
language is a weaker source of supervision for colorization than user clicks .
language consists of much more than just content .
for the action-effect embedding model , we use pre-trained glove word embeddings as input to the lstm .
the statistics for these datasets are summarized in the experimental settings . we use glove vectors with 840b tokens as the pre-trained word embeddings .
lexical analogies also have applications in word sense disambiguation , information extraction , question-answering , and semantic relation classification .
lexical analogies occur frequently in text and are useful in various natural language processing tasks .
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
recovering the original word from the transliterated target is called back-transliteration .
in the literature ( cite-p-13-3-5 , cite-p-13-3-12 ) , translating romanized japanese or chinese names to chinese characters is also known as back-transliteration .
we used the scikit-learn implementation of svrs and the skll toolkit .
we employed the scikit-learn machine learning toolkit for training the classifier .
as noted in joachims , support vector machines are well suited for text categorisation .
it is well known that support vector machine methods are very suitable for this task .
we use the constrained decoding feature included in moses to this purpose .
we used the phrasebased translation system in moses 5 as a baseline smt system .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
the srilm toolkit was used for training the language models using kneser-ney smoothing .
we propose a scalable em-based method that automatically maps verb phrases to kb relations by using the mentions of the verb phrases with the relation instances .
in this paper , we introduce an em-based approach with argument type checking and ontological constraints to automatically map verb phrases to kb relations .
word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .
word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) .
we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
the evaluation metric for the overall translation quality was case-insensitive bleu4 .
the translation quality is evaluated by case-insensitive bleu-4 .
we use 300-dimensional word embeddings from glove to initialize the model .
this model first embeds the words using 300 dimensional word embeddings created using the glove method .
we used yamcha to detect named entities , and we trained it on the semeval full-text training sets .
we used yamcha , a multi-purpose chunking tool , to train our word segmentation models .
in order to do so , we perform traversals of the platforms and use already available tools to filter the urls .
in order to explore them , we introduce a traversal algorithm based on user pages .
to get the sub-fields of the community , we use latent dirichlet allocation to find topics and label them by hand .
in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts .
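A minimal gensim LDA sketch for finding sub-topics in a body of texts, as in the pair above; the number of topics and the toy documents are assumptions.

```python
# Minimal gensim LDA sketch; the toy documents and num_topics are assumptions,
# and the printed topics would be inspected and labeled by hand.
from gensim import corpora
from gensim.models import LdaModel

texts = [["machine", "translation", "bleu", "decoder"],
         ["parsing", "dependency", "treebank"],
         ["translation", "alignment", "phrase"]]

dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(doc) for doc in texts]

lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```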
temporal patterns and periodicities can be useful for text analysis tasks .
periodicities play an important role when analysing the temporal dimension of text .
in the first text , crime was metaphorically portrayed as a virus .
in the first text , crime was metaphorically portrayed as a virus and in the second as a beast .
in the future , we would like to apply a similar methodology to different text units , for example , sub-sentence units such as elementary discourse unit .
in the future , we would like to apply a similar methodology to different text units , for example , sub-sentence units such as elementary discourse unit and a larger corpus , for example , duc 2002 and duc 2003 .
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
all features used by pavlick et al for formality detection and by danescu et al for politeness detection have been included in our analysis for a comparison against baselines .
all features used by pavlick et al for formality detection and by danescu et al for politeness detection are included in our analysis .
we pre-train the word embeddings using word2vec .
the word embeddings are pre-trained using word2vec .
the recursive application of autoencoders was first introduced in pollack , whose recursive auto-associative memories learn vector representations over pre-specified recursive data structures .
using neural networks to process trees was first proposed by pollack in the recursive autoassociative memory model which was used for unsupervised learning .
all weights are initialized by the xavier method .
the non-embedding weights are initialized using xavier initialization .
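A short PyTorch sketch of Xavier (Glorot) initialization for non-embedding weights, as mentioned above; the layer shape is illustrative.

```python
# Xavier/Glorot initialization of a linear layer's weights in PyTorch;
# the layer dimensions are illustrative.
import torch.nn as nn

linear = nn.Linear(300, 128)
nn.init.xavier_uniform_(linear.weight)   # Xavier/Glorot uniform init
nn.init.zeros_(linear.bias)              # biases commonly set to zero
```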
the trigram language model is implemented in the srilm toolkit .
ngram features have been generated with the srilm toolkit .
in this paper , we introduce a goal-directed random walk algorithm to increase efficiency of mining .
in this paper , we propose a goal-directed random walk algorithm to resolve the above problems .
in this paper , we introduce a new model for detecting restart and repair disfluencies in spontaneous speech transcripts .
in this paper , we present a new model for disfluency detection from spontaneous speech transcripts .
this is an extension of the two words phrase similarity task defined in mitchell and lapata , and constructed according to similar guidelines .
this is an extension of the similarity task for compositional models developed by mitchell and lapata , and constructed according to the same guidelines .
liu and lane ( 2016a ) proposed an attention-based neural network model for joint intent detection and slot filling .
liu and lane propose an attention mechanism on the encoder-decoder model for joint intent classification and slot filling .
we apply byte-pair encoding with 30,000 merge operations on the english sentences .
in order to reduce the vocabulary size , we apply byte pair encoding .
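A hedged sketch of learning and applying byte-pair encoding with 30,000 merge operations using the subword-nmt tool (one common implementation; the cited work may use another); file names are assumptions.

```python
# Hedged sketch of BPE with the subword-nmt command-line tool; file names
# (train.en, codes.bpe, train.bpe.en) are assumptions.
import subprocess

# learn 30,000 BPE merge operations on the English side
with open("train.en", "rb") as fin, open("codes.bpe", "wb") as fout:
    subprocess.run(["subword-nmt", "learn-bpe", "-s", "30000"],
                   stdin=fin, stdout=fout, check=True)

# apply the learned codes to segment the corpus into subword units
with open("train.en", "rb") as fin, open("train.bpe.en", "wb") as fout:
    subprocess.run(["subword-nmt", "apply-bpe", "-c", "codes.bpe"],
                   stdin=fin, stdout=fout, check=True)
```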
for each emotional word , we create features based on the parse tree and its dependencies produced by the stanford parser .
we use the stanford dependency parser to parse the statement and identify the path connecting the content words in the parse tree .
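A hedged sketch of obtaining the dependency parse mentioned above; Stanza (from the Stanford NLP group) is used here as a stand-in for the Stanford parser, and the example sentence is illustrative.

```python
# Dependency parsing with Stanza as a stand-in for the Stanford parser;
# prints each word, its dependency relation, and its head.
import stanza

# stanza.download("en")  # one-time model download
nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse")

doc = nlp("crime is a virus that spreads through the city")
for sent in doc.sentences:
    for word in sent.words:
        head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(f"{word.text} --{word.deprel}--> {head}")
```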
the input to this network consists of pre-trained word embeddings extracted from the 300-dimensional fasttext embeddings .
this model uses multilingual word embeddings trained using fasttext and aligned using muse .