sentence1 ( string , lengths 16 to 446 characters )
sentence2 ( string , lengths 14 to 436 characters )
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers .
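several pairs above and below describe training n-gram language models with srilm ; the following is a minimal sketch , assuming srilm is installed with ngram-count on the path and using hypothetical file names , of how such a model might be built from python :

```python
# Sketch: training an n-gram LM with SRILM's ngram-count via subprocess.
# Paths ("corpus.txt", "lm.arpa") are hypothetical placeholders.
import subprocess

def train_srilm_lm(corpus_path, lm_path, order=4):
    """Train an interpolated modified Kneser-Ney n-gram LM with SRILM."""
    subprocess.run(
        [
            "ngram-count",
            "-order", str(order),   # n-gram order (4 or 5 in the setups above)
            "-text", corpus_path,   # training text, one sentence per line
            "-lm", lm_path,         # output model in ARPA format
            "-kndiscount",          # modified Kneser-Ney discounting
            "-interpolate",         # interpolate lower-order estimates
        ],
        check=True,
    )

train_srilm_lm("corpus.txt", "lm.arpa", order=4)
```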
to perform word alignment between languages l1 and l2 , we introduce a pivot language .
to perform word alignment between languages l1 and l2 , we introduce a third language l3 as the pivot language .
a critical analysis of unicode and a proposal of multicode can be found in mudawwar .
a critical analysis of unicode and a proposal of multicode can be found in mudawwar .
in section 3 and 4 , we explain the proposed method .
in section 3 and 4 , we explain the proposed method in detail .
with this strategy , we observe that while it does sometimes succeed in avoiding egregious errors involving vicious ambiguities , common parsing mistakes such as pp-attachment errors lead to unnecessarily sacrificing conciseness or fluency .
in inspecting the results of reranking with this strategy , we observe that while it does sometimes succeed in avoiding egregious errors involving vicious ambiguities , common parsing mistakes such as pp-attachment errors lead to unnecessarily sacrificing conciseness or fluency in order to avoid ambiguities that would be easily tolerated by human readers .
we used all post bodies in the unlabeled dataset to train a skip-gram model of 50 dimensions .
we used a freely-available pretrained model of 300 dimensions trained on approximately 100 billion words from news articles .
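as a hedged illustration of the skip-gram setup mentioned above , here is a minimal gensim sketch ; the toy corpus stands in for the unlabeled post bodies :

```python
# Sketch: training a 50-dimensional skip-gram model with gensim's Word2Vec.
# `sentences` (a list of token lists) is a toy stand-in for real data.
from gensim.models import Word2Vec

sentences = [["this", "is", "a", "post"], ["another", "post", "body"]]

model = Word2Vec(
    sentences=sentences,
    vector_size=50,   # embedding dimensionality, as in the 50-d setup above
    sg=1,             # 1 = skip-gram (0 would be CBOW)
    window=5,
    min_count=1,      # keep all tokens in this toy corpus
    workers=4,
)
vector = model.wv["post"]   # 50-dimensional numpy vector
```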
we use the moses toolkit to train our phrase-based smt models .
we implement the pbsmt system with the moses toolkit .
our experiments show that a discriminatively trained preference rank model is able to outperform alternative approaches designed for the same task .
experiments show that our learned preference ranking methods perform better than alternative solutions to the task of answer typing .
since sarcasm is a refined and indirect form of speech , its interpretation may be challenging for certain populations .
sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way .
in our future work , we will evaluate parsing performance on other learner corpora .
in addition , we explore methods to improve phrase structure parsing for learner english .
this tree kernel was slightly generalized by culotta and sorensen to compute similarity between two dependency trees .
culotta and sorensen ( 2004 ) extended this work to calculate kernels between augmented dependency trees .
we present a system that tracks contributions to a referential communication task in realistic situations , using an abductive interpretation .
we illustrate with a case study building maximum entropy models over abductive interpretations in a referential communication task .
we describe an intuitionistic method for dependency parsing , where a classifier is used to determine whether a pair of words forms a dependency edge .
meanwhile , we propose an intuitionistic model for dependency parsing , which uses a classifier to determine whether a pair of words form a dependency edge .
this paper presents a framework to extract positive meaning from negation .
this paper presents a framework to understand negation in positive terms .
we used the core corpus of the balanced corpus of contemporary written japanese for the experiments .
as the data set , we used the balanced corpus of contemporary written japanese .
lodhi et al ( 2002 ) first used string kernels with character level features for text categorization .
lodhi et al ( 2000 ) applied the string kernel to text classification .
structure , referential status and recency affect the variation between the writers .
this reveals that substantial individual variation between writers exists in terms of referential form .
wei et al show that instances may be labeled incorrectly due to the knowledge base being incomplete .
furthermore , xu et al correct false negative instances by using pseudo-relevance feedback to expand the original knowledge base .
a 5-gram language model was built using srilm on the target side of the corresponding training corpus .
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
in this paper , we compare our technique with the grammar checker of microsoft word03 and the alek method used by ets .
in addition , we compared our technique with two other methods of checking errors , microsoft word03 and the alek method .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .
we use glove word embeddings , which are 50-dimension word vectors trained on a large crawled corpus with 840 billion tokens .
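a minimal sketch of loading pretrained glove vectors from their plain-text release format ( one token followed by its values per line ) ; the file name below is a placeholder in the style of actual glove downloads :

```python
# Sketch: loading GloVe vectors into a token -> numpy array dict.
import numpy as np

def load_glove(path):
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

glove = load_glove("glove.6B.100d.txt")  # hypothetical local path
print(glove["the"].shape)                # (100,) for the 100-d release
```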
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language .
sentiment analysis is a collection of methods and algorithms used to infer and measure the affect expressed by a writer .
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 ) .
recurrent neural networks are at the core of many current approaches to sequence prediction in nlp .
it has been repeatedly demonstrated that recurrent neural networks are a good fit for sequence labeling tasks .
we implemented linear models with the scikit learn package .
for all classifiers , we used the scikit-learn implementation .
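as an illustration of the scikit-learn setups mentioned here and elsewhere in the section , a minimal linear text classifier on invented toy data :

```python
# Sketch: a linear text classifier with scikit-learn; data is illustrative.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great product", "terrible service", "really good", "very bad"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),      # bag-of-words tf-idf features
    ("model", LogisticRegression()),   # any sklearn linear model fits here
])
clf.fit(texts, labels)
print(clf.predict(["good service"]))   # predicted polarity label
```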
in this paper , we introduce a low-rank approximation based approach for learning joint embeddings of news stories and images .
finally , we propose a scalable low-rank approximation approach for learning joint embeddings of news stories and images .
heintz et al used lda topic modelling to identify sets of source and target domain vocabulary .
heintz et al and strzalkowski et al focused on modeling topical structure of text to identify metaphor .
we implement an in-domain language model using the sri language modeling toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
based on this observation , there have been some feature-based studies that construct rules to capture document-level information for improving sentence-level ed .
there have been some feature-based studies that construct rules to capture document-level information for improving sentence-level ed .
we use approximate randomization for significance testing .
to compute statistical significance , we use the approximate randomization test .
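for concreteness , a small implementation of the two-sided approximate randomization test over paired per-example scores ; the number of shuffles below is a common default , not necessarily the cited papers' setting :

```python
# Approximate randomization test: under H0 the two systems are
# interchangeable, so paired scores can be swapped at random.
import random

def approximate_randomization(scores_a, scores_b, trials=10000, seed=0):
    """Return a p-value for the observed difference in summed scores."""
    rng = random.Random(seed)
    observed = abs(sum(scores_a) - sum(scores_b))
    count = 0
    for _ in range(trials):
        sum_a = sum_b = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:   # randomly swap the paired scores
                a, b = b, a
            sum_a += a
            sum_b += b
        if abs(sum_a - sum_b) >= observed:
            count += 1
    return (count + 1) / (trials + 1)   # add-one smoothing on the p-value
```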
soricut and echihabi propose document-level features to predict document-level quality for ranking purposes , having bleu as quality label .
soricut and echihabi proposed document-aware features in order to rank machine translated documents .
cer et al explored regularization of mert to improve generalization on test sets .
the mert implementation uses the line search of cer et al to directly minimize corpus-level error .
second , we construct a set of non-redundant relation topics defined at multiple scales from the relation repository .
thirdly , we design a new kernel for relation detection by integrating the relation topics into the relation detector construction .
culotta et al showed that although tree kernels by themselves may not be effective for relation extraction , combining a tree kernel with a bag of words kernel showed promising results .
culotta and sorensen and zhou et al have shown that tree kernels combined with flat kernels are more effective for intra-sentential relation extraction than either kernel used alone .
we propose a semi-supervised approach based on minimum cut in a lexical relation graph .
we propose a semi-supervised minimum cut framework that makes use of both wordnet definitions and its relation structure .
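the minimum cut idea can be illustrated with networkx on a toy lexical graph ; the words , seed attachments , and capacities below are invented for illustration and are not the cited method's actual graph :

```python
# Toy sketch: two-class labeling via minimum cut. "POS"/"NEG" are
# source/sink seeds; capacities encode relation strength between words.
import networkx as nx

G = nx.DiGraph()
for u, v, cap in [
    ("POS", "good", 10), ("POS", "great", 10),   # seed edges from the source
    ("bad", "NEG", 10),                          # seed edge into the sink
    ("good", "great", 3), ("great", "good", 3),  # lexical relation edges
    ("great", "awful", 1), ("awful", "great", 1),
    ("awful", "bad", 3), ("bad", "awful", 3),
]:
    G.add_edge(u, v, capacity=cap)

cut_value, (pos_side, neg_side) = nx.minimum_cut(G, "POS", "NEG")
print(sorted(pos_side - {"POS"}))   # words the cut labels positive
```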
the first subtask is about computing the semantic similarity of words and compositional phrases of minimal length .
section 3 discusses the first subtask , which is about semantic similarity of words and compositional phrases .
prettenhofer and stein provided a cl-scl model based on structural correspondence learning for sentiment classification .
prettenhofer and stein used a multi-lingual dataset focused on full-document classification at the global polarity level .
these word vectors can capture semantic and lexical properties of words , even allowing some relationships to be captured algebraically .
the word embeddings can provide word vector representation that captures semantic and syntactic information of words .
we use the standard generative dependency model with valence .
the models we use are based on the generative dependency model with valence .
in the application section , we start by presenting an open-source software package for gp modelling .
in the application section , we start by presenting an open-source software package for gp modelling in python : gpy .
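a minimal gpy regression sketch on synthetic data , purely to illustrate the package named above :

```python
# Sketch: 1-d GP regression with GPy; the data is synthetic.
import numpy as np
import GPy

X = np.random.uniform(0.0, 10.0, (50, 1))
Y = np.sin(X) + np.random.randn(50, 1) * 0.1

kernel = GPy.kern.RBF(input_dim=1)             # squared-exponential covariance
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()                               # fit hyperparameters by ML-II
mean, var = model.predict(np.array([[5.0]]))   # posterior mean and variance
```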
for example , it has been shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence .
for example , it has been shown that adding human-provided emotional scaffolding to an automated reading tutor increases student persistence .
given its ubiquity , metaphorical language poses an important problem for natural language understanding .
the ubiquity of metaphor in our everyday communication makes it an important problem for natural language understanding .
the srilm toolkit was used to build the trigram mkn smoothed language model .
the srilm toolkit was used to create up to 5-gram language models using the mentioned resources .
recently , neural networks have been explored by researchers and applied to alleviate the feature sparsity problem and reduce heavy feature engineering .
recently , researchers have tended to explore neural network based approaches to reduce efforts of feature engineering .
the embedded word vectors are trained over large collections of text using variants of neural networks .
word embeddings can be learned by training a neural probabilistic language model or a unified neural network architecture for various nlp tasks .
we first train a word2vec model on fr-wikipedia to obtain non-contextual word vectors .
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .
we estimated unfiltered 5-gram language models using lmplz and loaded them with kenlm .
for language modeling , we computed 5-gram models using irstlm and queried the model with kenlm .
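a minimal sketch of querying a model with kenlm's python bindings ; the model path is hypothetical , e.g. built beforehand with `lmplz -o 5 < corpus.txt > lm.arpa` :

```python
# Sketch: scoring sentences with KenLM's Python bindings (pip package "kenlm").
import kenlm

model = kenlm.Model("lm.arpa")   # hypothetical path to an ARPA/binary model
# Total log10 probability of the sentence, with BOS/EOS markers added.
print(model.score("this is a test", bos=True, eos=True))
print(model.perplexity("this is a test"))
```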
by formulating deceptive opinion spam detection as a classification problem , existing work primarily focuses on extracting different types of features and applies offthe-shelf supervised classification algorithms to the problem .
previous work typically formulates deceptive opinion spam detection as a classification problem , and then presents different types of features to train supervised classification algorithms for the problem .
especially , for further analyses such as phrase alignment , word alignment and translation memory , high-precision alignment at sub-sentential levels would be very useful .
especially , for further analyses such as phrase alignment , word alignment and translation memory , high precision and quality alignment at sentence or sub-sentential levels would be very useful .
the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model .
the hierarchical phrase-based model has been widely adopted in statistical machine translation .
the arabic data was preprocessed using an hmm segmenter that splits off attached prepositional phrases , personal pronouns , and the future marker .
arabic text was preprocessed using an hmm segmenter that splits attached prepositional phrases , personal pronouns , and the future marker .
with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar ’ s averaged perceptron model .
with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar ’ s averaged perceptron model on both development and test data .
in this paper , we propose a general framework for summarization that extracts sentences from a document .
in this paper , we proposed a general framework for extractive summarization using document subjects .
in this paper , we presented techniques of text distortion that can significantly enhance the robustness of authorship attribution methods .
in this paper , we propose a novel method that is based on text distortion to compress topic-related information .
ontology alignment addresses this need by identifying the semantically equivalent concepts .
ontology alignment addresses this requirement by identifying semantically equivalent concepts in multiple ontologies .
resnik and smith employ the web as parallel corpora to provide bilingual sentences for translation models .
resnik and smith extract bilingual sentences from the web to create parallel corpora for machine translation .
sentiment analysis is a recent attempt to deal with evaluative aspects of text .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .
our implementation is based on the dynamic programming algorithm of zhang and shasha .
our scoring procedure uses the ted algorithm defined by zhang and shasha .
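the zhang-shasha tree edit distance is available in the zss package ; a toy sketch with invented trees :

```python
# Sketch: Zhang-Shasha tree edit distance via the "zss" package.
from zss import simple_distance, Node

tree_a = Node("S", [Node("NP"), Node("VP", [Node("V"), Node("NP")])])
tree_b = Node("S", [Node("NP"), Node("VP", [Node("V"), Node("PP")])])

# Number of node insertions/deletions/relabelings to turn tree_a into tree_b.
print(simple_distance(tree_a, tree_b))   # -> 1 (relabel NP -> PP)
```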
for nb and svm , we used their implementation available in scikit-learn .
we implement classification models using keras and scikit-learn .
yogatama and smith introduced the sentence regularizer , which uses patterns of word cooccurrence in the training data to define groups .
yogatama and smith used a linear combination of the sentence regularizer and the lasso to also encourage weights of irrelevant word types to go to zero .
on a standard benchmark data set , we achieve new state-of-the-art performance , reducing error in average f1 by 36 % , and word error rate by 78 % .
we obtain new state-of-the-art performance in extracting standard fields from research papers , with a significant error reduction by several metrics .
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .
word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context .
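as a concrete , if simplistic , instance of wsd , here is the classic lesk baseline shipped with nltk ( not the methods of the cited papers ) ; it requires the punkt and wordnet data packages :

```python
# Sketch: classic Lesk WSD baseline from NLTK.
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("I went to the bank to deposit money")
sense = lesk(context, "bank", pos="n")   # returns a WordNet Synset
print(sense, sense.definition())
```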
in qa , the task is to pick sentences that are most relevant to the question .
the task is to classify whether each sentence provides the answer to the query .
mikolov et al have published word2vec , a toolkit that provides different possibilities to estimate word embeddings .
the polyglot project ( mikolov et al ) developed an alternative solution for computing word embeddings , which significantly reduces the computational costs .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .
these corpora consist of a set of documents in two languages containing similar information .
such corpora consist of texts ( e.g . documents , abstracts , or sentences ) and annotations that associate structured information ( e.g . pos tags , named entities , shallow parses ) with extents of the texts .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
as mentioned in prior work , the metrics are desirable but flawed when a corrupted triple exists in the kg .
as mentioned in , the metrics are desirable but flawed when a corrupted triple exists in the kb .
takamura et al propose using spin models for extracting semantic orientation of words .
takamura et al proposed a method based on the spin models in physics for extracting semantic orientations of words .
we showed that our method outperforms previous approaches .
experimental results show that our method outperforms existing methods .
this is because chinese is a pro-drop language ( cite-p-21-3-1 ) that allows the subject to be dropped in more contexts than english does .
chinese is a language that does not have morphological tense markers that provide explicit grammaticalization of the temporal location of situations ( events or states ) .
we used data from the conll-x shared task on multilingual dependency parsing .
we used datasets distributed for the 2006 and 2007 conll shared tasks .
in this paper , we use the connection between tensor products and conjunctions to prove algebraic properties of feature functions .
in this paper , we are concerned about two generally well understood operators on feature functions : addition and conjunction .
the two models are complementary : combining the two modeling techniques yields the best known result on the one billion word benchmark .
combining the two modeling techniques yields the best known result on the benchmark which shows that the two models are complementary .
the english side of the parallel corpus is trained into a language model using srilm .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
the major guideline in this part of the evaluation was to compare our results with previous work having a similar goal .
the major guideline in this part of the evaluation was to compare our results with previous work without the possible bias of human evaluation .
corpora are prone to be biased by the correlation of authors with specific topics .
as a consequence , corpora of flawed articles based on these templates are biased towards particular topics .
bahdanau et al propose a neural translation model that learns vector representations for individual words as well as word sequences .
bahdanau et al proposed an attentional encoder-decoder architecture for machine translation .
the embeddings have been trained with word2vec on twitter data .
both files are concatenated and used to train word2vec .
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .
following mnih and hinton , the soul model combines the neural network approach with a class-based lm .
following mnih and hinton , the soul model combines the neural network approach with a class-based lm .
but different sets of grs are useful for different purposes .
a complication is that different sets of grs are useful for different purposes .
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit .
the target fourgram language model was built with the english part of training data using the sri language modeling toolkit .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
in this paper we present a test collection composed of real-life , research-level mathematical topics and associated relevance judgements procured from the online collaboration website mathoverflow .
in this paper , we present a test collection for mathematical information retrieval composed of real-life , research-level mathematical information needs .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
word segmentation is the foremost obligatory task in almost all the nlp applications where the initial phase requires tokenization of input into words .
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation .
in the gsm-based wireless system , for instance , a vad module is used for discontinuous transmission to save battery power .
in the gsm-based communication system , a vad scheme is used to lengthen the battery power through discontinuous transmission when speech-pause is detected .
we used the single layer long short-term memory networks to extract the features of each text .
to do this , we relied on a neural network with a long short-term memory layer , which is fed from the word embeddings .
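a minimal keras sketch of a single-lstm-layer text model fed from an embedding layer ; all sizes are illustrative rather than the papers' settings :

```python
# Sketch: LSTM text model fed from a trainable embedding layer (Keras).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, embed_dim, max_len = 10000, 100, 50

model = Sequential([
    Embedding(vocab_size, embed_dim),   # trainable word embedding lookup
    LSTM(128),                          # single LSTM layer as feature extractor
    Dense(1, activation="sigmoid"),     # binary classification head
])
model.build(input_shape=(None, max_len))
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```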
we used 4-gram language models , trained using kenlm .
language models were trained with the kenlm toolkit .
mikolov et al uses a continuous skip-gram model to learn a distributed vector representation that captures both syntactic and semantic word relationships .
mikolov et al and mikolov et al introduce efficient methods to directly learn high-quality word embeddings from large amounts of unstructured raw text .
tang et al used a document classification approach based on recurrent neural networks and showed an improvement on a sentiment classification task .
tang et al used cnn or lstm to learn sentence representation and encoded these semantic vectors in document representation by gated recurrent neural network .
we adopt a phrase-based smt framework , moses .
we make use of moses toolkit for this paradigm .
a statistical machine translation ( smt ) system is heavily dependent upon the amount of parallel sentences used in training .
for any statistical machine translation system , the size of the parallel corpus used for training is a major factor in its performance .
we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .
we use word embeddings of dimension 100 pretrained using word2vec on the training dataset .
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance .
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations .
the log-linear parameter weights are tuned with mert on a development set to produce the baseline system .
the log-linear feature weights are tuned with minimum error rate training on bleu .
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
our proposal consists of a voting system of three polarity classifiers which follow a lexicon-based approach .
the system is based on a voting strategy of three lexicon-based sentiment classifiers .
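a toy sketch of such a voting scheme , with tiny invented lexicons standing in for real sentiment lexicons :

```python
# Toy sketch: majority vote over three lexicon-based polarity scorers.
LEXICONS = [
    {"good": 1, "bad": -1},
    {"great": 1, "awful": -1},
    {"fine": 1, "terrible": -1},
]

def lexicon_vote(tokens):
    """Each lexicon votes +1/-1 by summed token scores; majority wins."""
    votes = []
    for lex in LEXICONS:
        score = sum(lex.get(tok, 0) for tok in tokens)
        votes.append(1 if score >= 0 else -1)
    return 1 if sum(votes) > 0 else -1

print(lexicon_vote("the food was good but the service was awful".split()))
```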
in this paper , we proposed an arbitrary slot filling method that directly deals with the posterior probability of slot values .
to reflect this observation , in this paper we explore the value-based formulation approach for arbitrary slot filling tasks .