sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
thus , we use the sri language modeling toolkit to train the in-domain 4-gram language model with interpolated modified kneser-ney discounting .
sequences of words which exhibit a cohesive relationship are called lexical chains .
lexical chains are used to link semanticallyrelated words and phrases .
experiment results demonstrate that our approach is achieving state-of-the-art performance .
experimental results demonstrate the effectiveness of our approach comparable with the state-of-the-art .
mann and yarowsky used semantic information extracted from documents referring to the target person in an hierarchical agglomerative clustering algorithm .
mann and yarowsky proposed a bottom-up agglomerative clustering algorithm based on extracting local biographical information as features .
in this paper , we present work on detecting intensities ( or degrees ) of emotion .
this paper examines the task of detecting intensity of emotion from text .
we used srilm to build a 4-gram language model with kneser-ney discounting .
we used the srilm software 4 to build langauge models as well as to calculate cross-entropy based features .
we use the word2vec tool with the skip-gram learning scheme .
we use word2vec tool for learning distributed word embeddings .
in the above paper , we have presented an algorithm for solving letter-substitution ciphers , with an eye towards discovering unknown encoding standards in electronic documents .
in this paper , we introduce an exact method for deciphering messages using a generalization of the viterbi algorithm .
li and liu extended the character-level mt model by incorporating the pronunciation information .
li and liu introduced a character-level two-step mt method for normalization .
for nmt , smt and neural system combination , we further design a smart strategy to simulate the real training data for neural system combination .
unlike previous works , we adapt multi-source nmt for system combination and design a good strategy to simulate the real training data for our neural system combination .
by using the output of the tagger , the lemmatizer can determine the correct root .
the lemmatizer uses the output of the tagger to disambiguate word forms with more than one possible lemma .
the language models were trained using srilm toolkit .
language models are built using the sri-lm toolkit .
liu et al used conditional random fields for sentence boundary and edit word detection .
liu et al used conditional random fields for sentence boundary and edited word detection .
in particular , open ie systems such as textrunner , reverb , ollie , and nell have tackled the task of compiling an open-domain knowledge base .
thus open domain information extraction systems such as reverb , textrunner and nell have received added attention in recent times .
kiperwasser and goldberg incorporated the bidirectional long short-term memory into both graph-and transition-based parsers .
kiperwasser and goldberg proposed a simple yet effective architecture to implement neural dependency parsers .
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem .
with human judgements , we source data from a dataset collected by the authors in ( cite-p-11-1-0 ) .
our research is conceptually similar to the work in ( cite-p-11-3-11 ) , which induces a “ human-likeness ” criteria .
however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing .
dependency parsing is the task of predicting the most probable dependency structure for a given sentence .
ibm models and the hidden markov model for word alignment are the most influential statistical word alignment models .
the most influential generative word alignment models are the ibm models 1-5 and the hmm model .
in sec . 5 , human judgment can result in inconsistent scoring .
however , as we demonstrate in sec . 5 , human judgment can result in inconsistent scoring .
we present a spatial knowledge representation that can be learned from 3d scenes .
we can also improve the representation used for spatial priors of objects in scenes .
we train the cbow model with default hyperparameters in word2vec .
we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump .
this paper reports about our systems in semeval2 japanese word sense disambiguation ( wsd ) task .
this paper reports about our three participating systems in semeval-2 japanese wsd task .
word embeddings are initialized with pretrained glove vectors 1 , and updated during the training .
word embeddings are initialized with pretrained glove vectors 2 , and updated during the training .
in this case the environment of a learning agent is one or more other agents that can also be learning .
in this case the environment of a learning agent is one or more other agents that can also be learning at the same time .
in this section , we will discuss the reordering constraints .
here , we will discuss two such constraints in detail .
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus .
we use the popular moses toolkit to build the smt system .
our smt system is a phrase-based system based on the moses smt toolkit .
experiments on chinese ¨c english and german ¨c english tasks show that our model is significantly better than the state-of-the-art hierarchical phrase-based ( hpb ) model and a recently improved dependency tree-to-string model .
experiments on chinese¨cenglish and german¨c english tasks show that our model is significantly better than the hierarchical phrase-based model and a recent dependency tree-to-string model ( dep2str ) in moses .
the parse trees for sentences in the test set were obtained using the stanford parser .
we used the stanford parser to generate the grammatical structure of sentences .
thurmair , 2009 ) summarized several different architectures of hybrid systems using smt and rbmt systems .
thurmair summarized several different architectures of hybrid systems using smt and rbmt systems .
medlock and briscoe , vincze et al , and farkas et al , .
light et al , medlock and briscoe , medlock , and szarvas , .
we trained a 3-gram language model on all the correct-side sentences using kenlm .
we trained a trigram model with the kenlm , again using all sentences from wikipedia .
we propose a new dataset for the task of abstractive summarization of a document into multiple sentences .
we also propose a new dataset for multi-sentence summarization and establish benchmark numbers on it .
the log-lineal combination weights were optimized using mert .
the weights for these features are optimized using mert .
it has been observed by and , however , that medical language shows less variation and complexity than general , newspaper-style language .
interestingly though , it has also been observed that medical language shows less variation and complexity than general , newspaper-style language , thus exhibiting typical properties of a sublanguage .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) .
graham et al , 1980 , of the well-known cocke-younger-kasami algorithm .
graham et al , 1980 , of the cocke-younger-kasami algorithm .
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .
we use srilm for training a trigram language model on the english side of the training corpus .
in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
we present a novel latent variable model for paraphrase identification , that specifically accommodates the very short context .
our latent-variable approach is capable of learning word-level paraphrase anchors given only sentence annotations .
twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments .
twitter is a microblogging site where people express themselves and react to content in real-time .
recently , methods inspired by neural language modeling received much attentions for representation learning .
recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling .
we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
we hence propose a distant supervision approach that acquires argumentative text segments automatically .
instead , we apply distant supervision to automatically acquire annotations .
previous work agrees that word n-grams are well-performing features for hate speech detection .
related work shows that character ngrams can be successfully applied to detect abusive language in english-language content .
besides , we also proposed a novel feature based on distributed word representations ( i . e . , word embeddings ) learned over a large raw corpus .
besides , we also proposed novel features based on distributed word representations , which were learned using deep learning paradigms .
some approaches use very abstract and linguistically rich representations and rules to derive surface forms of the words .
the traditional approach is to hand write rules to identify the morphological properties of words .
we initialize word embeddings with a pre-trained embedding matrix through glove 3 .
we obtain pre-trained tweet word embeddings using glove 3 .
as to the language model , we trained a separate 5-gram lm using the srilm toolkit with modified kneser-ney smoothing on each subcorpus 4 and then interpolated them according to the corpus used for tuning .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
and the induction algorithm , as well as full integration in decoding are needed to potentially result in substantial performance improvements .
further improvements in the original feature set and the induction algorithm , as well as full integration in decoding are needed to potentially result in substantial performance improvements .
our system uses a linear classification model trained with imitation learning .
to address this , we use searn , an iterative imitation learning algorithm .
metaphor is a frequently used figure of speech , reflecting common cognitive processes .
metaphor is a common linguistic tool in communication , making its detection in discourse a crucial task for natural language understanding .
to solve the traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture .
hochreiter and schmidhuber , 1997 ) proposed a long short-term memory network , which can be used for sequence processing tasks .
these models are an instance of conditional random fields and include overlapping features .
they are undirected graphical models trained to maximize a conditional probability .
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .
we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .
n can be done using minimum error rate training on a development set of input sentences and their reference translations .
the minimum error rate training procedure is used for tuning the model parameters of the translation system .
in this paper , we explore the application of multilingual learning to part-of-speech tagging .
we demonstrate the effectiveness of multilingual learning for unsupervised part-of-speech tagging .
sentiment analysis ( sa ) is the task of determining the sentiment of a given piece of text .
sentiment analysis ( sa ) is the task of analysing opinions , sentiments or emotions expressed towards entities such as products , services , organisations , issues , and the various attributes of these entities ( cite-p-9-3-3 ) .
the skip-gram model aims to find word representations that are useful for predicting the surrounding words in a sentence or document .
the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus .
in this paper , we present a coarse-to-fine model that uses features from the asr and smt systems .
we present a coarse-to-fine featurized model which acts as the interface between asr and smt systems .
semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences .
semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels .
then , one real challenge would be to manually recognize plentiful ground truth spam review data for model .
then , one weakness of previous work lies in the demand of manually recognizing a large amount of ground truth review spam data for model training .
wordnet is a knowledge base for english language semantics .
wordnet is a large lexical database of english words .
gabani et al used part-ofspeech language models to derive perplexity scores for transcripts of the speech of children with and without language impairment .
in a follow-up study on a larger group of children , gabani et al again used part-of-speech language models in an attempt to characterize the agrammaticality that is associated with language impairment .
for this score we use glove word embeddings and simple addition for composing multiword concept and relation names .
for the classification task , we use pre-trained glove embedding vectors as lexical features .
the corpus sentences were morphologically annotated and parsed using smor , marmot and the mate dependency parser .
the sentences were morphologically annotated and parsed using smor , marmot and the mate dependency parser .
for this reason , we first exploit indirect annotations of these distinctions in the form of certain types of discourse relations annotated in the penn discourse treebank .
to recognize explicit connectives , we construct a list of existing connectives labeled in the penn discourse treebank .
empty categories play a crucial role in the annotation framework of the hindi dependency treebank 1 .
empty categories play a crucial role in the annotation framework of the hindi dependency treebank .
word alignment is the process of identifying wordto-word links between parallel sentences .
word alignment is a key component of most endto-end statistical machine translation systems .
we use svm light to learn a linear-kernel classifier on pairwise examples in the training set .
we train a svm classifier with an rbf kernel for pairwise classification of temporal relations .
to capture the relation between words , kalchbrenner et al propose a novel cnn model with a dynamic k-max pooling .
kalchbrenner et al proposed to extend cnns max-over-time pooling to k-max pooling for sentence modeling .
kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence .
kalchbrenner et al propose a dynamic cnn model using a dynamic k-max pooling mechanism which is able to generate a feature graph which captures a variety of word relations .
abstractive summarization is the challenging nlg task of compressing and rewriting a document into a short , relevant , salient , and coherent summary .
abstractive summarization is the ultimate goal of document summarization research , but previously it is less investigated due to the immaturity of text generation techniques .
the pretrained word embeddings are from glove , and the word embedding dimension d w is 300 .
the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 .
for the simple discourse , dave created a file .
figure 1 : processing dave created a file .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .
in their studies , user satisfaction was measured as to whether intelligent assistants can accomplish predefined tasks .
in this study , engagement is considered as a sentiment as to whether users like intelligent assistants and feel like they want to use them continually .
we evaluate the performance of different translation models using both bleu and ter metrics .
we will show translation quality measured with the bleu score as a function of the phrase table size .
snow et al applied crowdsourcing to five nlp annotation tasks , but the settings of these tasks are very simple .
for annotation tasks , snow et al showed that crowdsourced annotations are similar to traditional annotations made by experts .
in recent years , machine learning techniques , in particular reinforcement learning , have been applied to the task of dialogue management .
recently there are some efforts in applying machine learning approaches to the acquisition of dialogue strategies .
following this , le and mikolov and kiros et al both proposed the concept of embedding entire paragraphs and documents into fixed length vectors .
le and mikolov , 2014 ) proposed the paragraph vector that learns fixed-length representations from variable-length pieces of texts .
we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing .
we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data .
we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors .
first , we train a vector space representations of words using word2vec on chinese wikipedia .
but i will argue that the new theory explains the opacity of indexicals while maintaining the advantages of a sentential theory of attitudes .
so if we are serious about a sentential theory of attitudes , it is important to be certain that such a theory can explain opaque indexicals .
text normalization is the task of transforming informal writing into its standard form in the language .
text normalization is a preprocessing step to restore non-standard words in text to their original ( canonical ) forms to make use in nlp applications or more broadly to understand the digitized text better ( cite-p-19-1-11 ) .
the translations are evaluated in terms of bleu score .
the translation systems were evaluated by bleu score .
we used moses , a phrase-based smt toolkit , for training the translation model .
our machine translation system is a phrase-based system using the moses toolkit .
choudhury et al proposed a hidden markov model based text normalization approach for sms texts and texting language .
choudhury et al developed a supervised hidden markov model based approach for normalizing short message service texts .
we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems .
we measure translation performance by the bleu and meteor scores with multiple translation references .
a well-known approach in distant supervision is mintz et al , which aligns freebase with wikipedia articles and extracts relations with logistic regression .
one of the first papers to introduce distant supervision was mintz et al , which aims at extracting relations between entities in wikipedia for the most frequent relations in freebase .
bagga and baldwin , 1998 ) proposed a cdc system to merge the wdc chains using the vector space model on the summary sentences .
bagga and baldwin , 1998 ) proposed a method using the vector space model to disambiguate references to a person , place , or event across multiple documents .
although this phenomenon is less prominent if state of the art smoothing of phrasetable probabilities is employed .
in some cases , an improvement in bleu is obtained at the same time although the effect is less pronounced if state-of-the-art phrasetable smoothing is employed .
for our approach , we rely on parsing with categorial combinatory grammar based on systemic functional theory .
our work is supported by automatic functional text analysis with combinatory categorial grammar using systemic functional theory .
we use the feature set presented in pil谩n et al designed for modeling linguistic complexity in input texts for l2 swedish learners .
we use the feature set that we described in pil谩n et al and for modeling linguistic complexity in l2 swedish texts .
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .
for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus .
semantic similarity is a well established research area of natural language processing , concerned with measuring the extent to which two linguistic items are similar ( cite-p-13-1-1 ) .
semantic similarity is a context dependent and dynamic phenomenon .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we use srilm train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .
goldwater et al explored a bigram model built upon a dirichlet process to discover contextual dependencies .
goldwater et al used hierarchical dirichlet processes to induce contextual word models .
twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research .
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them .