sentence1,sentence2
the phrase extraction heuristics were used to build the phrase-based smt systems .,belz and kow proposed another smt based nlg system which made use of the phrase-based smt model . we measure the translation quality with automatic metrics including bleu and ter .,"in order to measure translation quality , we use bleu 7 and ter scores ." we relied on the multinomial naive bayes classifier by mccallum and nigam .,note that we use the naive bayes multinomial classifier in weka for classification . we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .,"as for multiwords , we used the phrases from the pre-trained google news word2vec vectors , which were obtained using a simple statistical approach ." "liu et al developed a dependency-based neural network , in which a convolutional neural network has been used to capture features on the shortest path and a recursive neural network is designed to model subtrees .","liu et al proposed a recursive neural network designed to model the subtrees , and cnn to capture the most important features on the shortest dependency path ." the minimum error rate training was used to tune the feature weights .,parameters were tuned using minimum error rate training . "taxonomies that are the backbone of structured ontology knowledge have been found to be useful for many areas such as question answering , document clustering and textual entailment .","taxonomies , which serve as backbones for structured knowledge , are useful for many nlp applications such as question answering and document clustering ." relation extraction ( re ) is the task of extracting semantic relationships between entities in text .,relation extraction is the task of finding semantic relations between two entities from text . we chose to use support vector machines for our classifier .,we used support vector machines via the svmlight classifier . we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .,we pre-train the word embedding via word2vec on the whole dataset . "in this paper , we investigate the use of form-function mappings derived from human-human dialogues .",in this paper we presented the results of a corpus study of naturally occurring crs in task-oriented dialogue . we use a four-gram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm . the target-side 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .,a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification .,zeng et al use convolutional neural network for learning sentence-level features of contexts and obtain good performance even without using syntactic features . "we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .","we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn ."
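Several pairs above cite a multinomial naive Bayes text classifier (McCallum and Nigam; the Weka implementation). A minimal sketch of the same model in scikit-learn rather than Weka; the toy texts and labels are invented:

```python
# Multinomial naive Bayes over bag-of-words counts, a scikit-learn
# stand-in for the Weka classifier cited above; data is illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["great translation quality", "terrible garbled output"]  # toy data
train_labels = ["pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["garbled translation"]))
```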
negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition ( cite-p-18-3-8 ) .,"negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( horn and wansing , 2015 ) ." lei et al proposed to learn features by representing the cross-products of some primitive units with low-rank tensors for dependency parsing .,lei et al employ three-way tensors to obtain a low-dimensional input representation optimized for parsing performance . word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .,word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context . we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .,"for representing words , we used 100 dimensional pre-trained glove embeddings ." we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,we trained a 4-gram language model on this data with kneser-ney discounting using srilm . the model uses non-negative matrix factorization in order to find latent dimensions .,our model uses non-negative matrix factorization in order to find latent dimensions . "in the translation tasks , we used the moses phrase-based smt systems .",for training the translation model and for decoding we used the moses toolkit . "for the embeddings trained on stack overflow corpus , we use the word2vec implementation of gensim 8 toolkit .","in this run , we use a sentence vector derived from word embeddings obtained from word2vec ." traditional supervised learning methods heavily rely on large scale annotated data which is time and labor consuming .,traditional supervised re models heavily rely on abundant amounts of high-quality annotated data . "in this paper , we propose a new automatic evaluation method for machine translation using noun-phrase chunking .",results confirmed that our method using noun-phrase chunking is effective for automatic evaluation for machine translation . the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model .,the english side of the parallel corpus is trained into a language model using srilm . "on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .",we use 300-dimensional vectors that were trained and provided by word2vec tool using a part of the google news dataset 4 . bilingual lexicons serve as an indispensable source of knowledge for various cross-lingual tasks such as cross-lingual information retrieval or statistical machine translation .,bilingual dictionaries are an essential resource in many multilingual natural language processing tasks such as machine translation and cross-language information retrieval . we evaluated the reordering approach within the moses phrase-based smt system .,we use the moses toolkit to train our phrase-based smt models . our baseline is a phrase-based mt system trained using the moses toolkit .,our baseline system is an standard phrase-based smt system built with moses . grammar induction is the task of learning grammatical structure from plain text without human supervision .,grammar induction is the task of inducing high-level rules for application of grammars in spoken dialogue systems . 
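One pair above finds latent dimensions with non-negative matrix factorization. A hedged scikit-learn sketch; the random count matrix and the choice of two components are illustrative assumptions:

```python
# NMF factorizes a non-negative matrix X into W (document loadings on
# latent dimensions) and H (latent dimensions over terms).
import numpy as np
from sklearn.decomposition import NMF

X = np.random.rand(6, 8)                  # toy non-negative term-document matrix
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(X)                # shape (6, 2)
H = model.components_                     # shape (2, 8)
print(W.shape, H.shape)
```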
for our baseline we use the moses software to train a phrase based machine translation model .,"we use moses , a statistical machine translation system that allows training of translation models ." we use a pbsmt model built with the moses smt toolkit .,we use the popular moses toolkit to build the smt system . our system is built using the open-source moses toolkit with default settings .,we used a standard pbmt system built using moses toolkit . we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,we used the sri language modeling toolkit with kneser-ney smoothing . "the decoder and encoder word embeddings are of size 500 , the encoder uses a bidirectional lstm layer with 1k units to encode the source side .",the language model has an embedding size of 250 and two lstm layers with a hidden size of 1000 . conditional random fields are probabilistic models for labelling sequential data .,conditional random fields are undirected graphical models of a conditional distribution . table 1 shows the translation performance by bleu .,table 4 shows end-to-end translation bleu score results . "faruqui et al employ semantic relations of ppdb , wordnet , framenet to retrofit word embeddings for various prediction tasks .","for example , faruqui et al introduce knowledge in lexical resources into the models in word2vec ." negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition ( cite-p-18-3-8 ) .,"negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( cite-p-18-3-7 ) ." "in addition , we can use pre-trained neural word embeddings on large scale corpus for neural network initialization .","in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization ." for emd we used the stanford named entity recognizer .,we use the stanford named entity recognizer for this purpose . "for instance , bahdanau et al advocate the attention mechanism to dynamically generate a context vector of the whole source sentence for improving the performance of the nmt .","bahdanau et al propose integrating an attention mechanism in the decoder , which is trained to determine on which portions of the source sentence to focus ." "for the sick and msrvid experiments , we used 300-dimension glove word embeddings .",we used 100 dimensional glove embeddings for this purpose . "the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentence-initial words uncapitalized .","the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm ." table 5 shows the bleu and per scores obtained by each system .,"table 2 shows the blind test results using bleu-4 , meteor and ter ." "thus , we propose a new approach based on the expectation-maximization algorithm .",we learn the noise model parameters using an expectation-maximization approach . discourse segmentation is the first step in building a discourse parser .,discourse segmentation is the first step in building discourse parsers .
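Many pairs in this section train n-gram language models with (modified) Kneser-Ney smoothing in SRILM, which is a command-line toolkit. As a Python stand-in, a small NLTK sketch of a Kneser-Ney-interpolated trigram model over two toy sentences:

```python
# Kneser-Ney-interpolated trigram LM in NLTK, standing in for the SRILM
# command line cited above; the two training sentences are toy data.
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

sents = [["we", "use", "srilm"], ["we", "train", "a", "language", "model"]]
train, vocab = padded_everygram_pipeline(3, sents)

lm = KneserNeyInterpolated(order=3)
lm.fit(train, vocab)
print(lm.score("srilm", ["we", "use"]))   # P(srilm | we use)
```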
semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application .,"semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures ." "word representations , especially brown clustering , have been shown to improve the performance of ner system when added as a feature .",using word or phrase representations as extra features has been proven to be an effective and simple way to improve the predictive performance of an nlp system . hatzivassiloglou and mckeown proposed the first method for determining adjective polarities or orientations .,hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives . latent dirichlet allocation is a representative of topic models .,latent dirichlet allocation is one of the widely adopted generative models for topic modeling . the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . we used the google news pretrained word2vec word embeddings for our model .,we use the 300-dimensional skip-gram word embeddings built on the google-news corpus . we experiment with word2vec and glove for estimating similarity of words .,we obtained distributed word representations using word2vec 4 with skip-gram . "we automatically parse sentences with minipar , a broad-coverage dependency parser .","for this purpose , we use the minipar dependency parser ." "recently , gong and zhou also applied topic modeling to domain adaptation in smt .",gong et al and xiao et al introduce topic-based similarity models to improve smt system . the word embeddings are initialized by pre-trained glove embeddings 2 .,all han models and a subset of bca models are initialized with pretrained glove word embeddings 1 . a similar idea called ibm bleu score has proved successful in automatic machine translation evaluation .,bleu is widely used for automatic evaluation of machine translation systems . "in the nlp field , nn-based multi-task learning has been proven to be effective .",high quality word embeddings have been proven helpful in many nlp tasks . we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .,"we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting ." "we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .","for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus ." the 5-gram target language model was trained using kenlm .,an english 5-gram language model is trained using kenlm on the gigaword corpus . "with the consideration of user and product information , our model can significantly improve the performance of sentiment classification .","with the user and product attention , our model can take account of the global user preference and product characteristics in both word level and semantic level ."
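The skip-gram word2vec embeddings mentioned repeatedly above can be trained with gensim; this sketch assumes gensim >= 4 (which uses vector_size) and a two-sentence toy corpus:

```python
# Skip-gram (sg=1) word2vec training in gensim; 300 dimensions and a
# 5-word window mirror common settings in the pairs above but are
# assumptions here, as is the tiny corpus.
from gensim.models import Word2Vec

corpus = [["we", "train", "word", "embeddings"],
          ["embeddings", "capture", "semantic", "similarity"]]
model = Word2Vec(corpus, vector_size=300, window=5, sg=1, min_count=1)
print(model.wv.most_similar("embeddings", topn=2))
```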
"since segmentation is the first stage of discourse parsing , quality discourse segments are critical to building quality discourse representations ( cite-p-12-1-10 ) .","segmentation is the first step in a discourse parser , a system that constructs discourse trees from elementary discourse units ." "for the language model , we used srilm with modified kneser-ney smoothing .",we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . "meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens .",we use 300-dimensional word embeddings from glove to initialize the model . dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14 .,dependency parsing is the task of predicting the most probable dependency structure for a given sentence . empirical studies show that our model can significantly outperform the state-of-the-art response generation models .,"empirical results showed that our model can generate either general or specific responses , and significantly outperform state-of-the-art generation methods ." "language modeling is trained using kenlm using 5-grams , with modified kneser-ney smoothing .",unpruned language models were trained using lmplz which employs modified kneser-ney smoothing . we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser .,"we compute the syntactic features only for pairs of event mentions from the same sentence , using the stanford dependency parser ." "for example , wu et al identified aspects based on the features explored by dependency parser .","for opinion mining , wu et al also utilized a dependency structure based on mwus , although they restricted mwus with predefined relations ." the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .,we use srilm for training a trigram language model on the english side of the training corpus . we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus . mikolov et al proposed a computationally efficient method for learning distributed word representation such that words with similar meanings will map to similar vectors .,mikolov et al and mikolov et al introduce efficient methods to directly learn high-quality word embeddings from large amounts of unstructured raw text . we use word embeddings 3 as a cheap low-maintenance alternative for knowledge base construction .,"for feature building , we use word2vec pre-trained word embeddings ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm . word embeddings have proven to be effective models of semantic representation of words in various nlp tasks .,unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks . we used the statistical japanese dependency parser cabocha for parsing .,"for japanese , we produce rmrs from the dependency parser cabocha ." we primarily compared our model with conditional random fields .,our model is a first order linear chain conditional random field . 
roth and yih use ilp to deal with the joint inference problem of named entity and relation identification .,roth and yih also described a classification-based framework in which they jointly learn to identify named entities and relations . "we adopt adam for optimization , train for 20 epochs and pick the best epoch based on development set loss .",we train the models for 20 epochs using categorical cross-entropy loss and the adam optimization method . "in this study , we focus on improving the corpus-based method for cross-lingual sentiment classification of chinese product reviews .","in this paper , we propose to use the co-training approach to address the problem of cross-lingual sentiment classification ." "semantic parsing is then reduced to query graph generation , formulated as a staged search problem .","semantic parsing is reduced to query graph generation , formulated as a staged search problem ." "when a pun is a spoken utterance , two types of puns are commonly distinguished : homophonic puns , which exploit different meanings of the same word , and heterophonic puns , in which one or more words have similar but not identical pronunciations to some other word or phrase that is alluded to in the pun .",a pun is the exploitation of the various meanings of a word or words with phonetic similarity but different meanings . we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu .,"we use 4-gram language models in both tasks , and conduct minimum error-rate training to optimize feature weights on the dev set ." we use pre-trained glove embeddings to represent the words .,we use pre-trained word vectors from glove . the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain .,the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric . "tai et al introduced tree-lstm , a generalisation of lstms to tree-structured network topologies , e.g. , recursive neural networks .","tai et al , and le and zuidema extended sequential lstms to tree-structured lstms by adding branching factors ." "we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .","on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing ." the decoding weights were optimized with minimum error rate training .,the tuning step used minimum error rate training . we extract our paraphrase grammar from the french-english portion of the europarl corpus .,we build a french tagger based on english-french data from the europarl corpus . "for this purpose , we turn to the expectation maximization algorithm .",model fitting for our model is based on the expectation-maximization algorithm . we then follow published procedures to extract hierarchical phrases from the union of the directional word alignments .,we use the cube pruning method to approximately intersect the translation forest with the language model . "( collobert et al , 2011 ) used word embeddings for pos tagging , named entity recognition and semantic role labeling .","collobert et al used word embeddings as the input of various nlp tasks , including part-of-speech tagging , chunking , ner , and semantic role labeling ."
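The second pair above trains for 20 epochs with categorical cross-entropy and Adam. A PyTorch sketch of that recipe; the linear model and random data are placeholders, not the cited authors' setup:

```python
# Training loop: 20 epochs, cross-entropy loss, Adam optimizer.
import torch
import torch.nn as nn

X = torch.randn(32, 10)                  # toy features
y = torch.randint(0, 3, (32,))           # toy labels over 3 classes

model = nn.Linear(10, 3)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):                  # 20 epochs, as in the pair above
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```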
we solve this sequence tagging problem using the mallet implementation of conditional random fields .,"specifically , we adopt linear-chain conditional random fields as the method for sequence labeling ." "with the two alternative role annotations , we show that the propbank role set is more robust to the lack of verb-specific semantic information .","we observe that the propbank roles are more robust in all tested experimental conditions , i.e. , the performance decrease is more severe for verbnet ." we use case-insensitive bleu as evaluation metric .,all systems are evaluated using case-insensitive bleu . we used the implementation of random forest in scikit-learn as the classifier .,we used the scikit-learn implementation of svrs and the skll toolkit . "in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .",we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities . a spelling-based model that directly maps english letter sequences into arabic letters was developed by al-onaizan and knight .,"al-onaizan and knight present a hybrid model for arabic-to-english transliteration , which is a linear combination of phoneme-based and grapheme-based models ." a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . we used a caseless parsing model of the stanford parser for a dependency representation of the messages .,we use the collapsed tree formalism of the stanford dependency parser . "in this paper , we focus on semantic tagging based on a domain-specific ontology , a dictionary-thesaurus and the overlapping coefficient .","in this paper , we present a method for the semantic tagging of word chunks extracted from a written transcription of conversations ." "we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets .",we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence . semeval is the international workshop on semantic evaluation that has evolved from senseval .,semeval is a yearly event in which teams compete in natural language processing tasks . distributional semantic models induce large-scale vector-based lexical semantic representations from statistical patterns of word usage .,distributional semantic models represent the meanings of words by relying on their statistical distribution in text . the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,a 4-gram language model was trained on the monolingual data by the srilm toolkit . "we use svm-light-tk 5 , which enables the use of structural kernels .","we used svm-light-tk , which enables the use of the partial tree kernel ." "for word representation , we train the skip-gram word embedding on each dataset separately to initialize the word vectors .","to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses ." we conducted baseline experiments for phrase-based machine translation using the moses toolkit .,we trained a phrase-based smt engine to translate known words and phrases using the training tools available with moses .
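Case-insensitive BLEU, used as the evaluation metric in several pairs, can be computed with sacrebleu; the hypothesis and reference strings here are invented, and lowercase=True implements the case-insensitivity:

```python
# Corpus BLEU with sacrebleu; refs is a list of reference streams,
# each parallel to the list of hypotheses.
import sacrebleu

hyps = ["The cat sat on the mat"]
refs = [["the cat sat on a mat"]]

bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)
print(bleu.score)
```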
active learning is a general framework and does not depend on tasks or domains .,active learning is a promising way for sentiment classification to reduce the annotation cost . an annotation effort demonstrates implicit relations reveal as much as 30 % of meaning .,a manual annotation effort demonstrates implicit relations yield substantial additional meaning . our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .,"firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing ." the weights for the log-linear model are learned using the mert system .,the weights associated to feature functions are optimally combined using the minimum error rate training . we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .,we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing . we use the linear svm classifier from scikit-learn .,we implemented the different aes models using scikit-learn . "for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .",we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . we used max-f 1 training to train the feature weights .,we used minimum error rate training for tuning on the development set . relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,"relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization ." we obtained these scores by training a word2vec model on the wiki corpus .,we learn our word embeddings by using word2vec 3 on unlabeled review data . djuric et al use a paragraph2vec approach to classify language on user comments as abusive or clean .,"( djuric et al , 2015 ) used binary classification to detect hate speech ." text categorization is a crucial and well-proven method for organizing the collection of large scale documents .,text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 ) . we use 5-grams for all language models implemented using the srilm toolkit .,we train a trigram language model with the srilm toolkit . the evaluation metric is the case-insensitive bleu4 .,case-insensitive bleu-4 is our evaluation metric . one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .,"sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text ." the minimum error rate training was used to tune the feature weights .,the weights of the different feature functions were optimised by means of minimum error rate training . "on the input sentence , we propose two kinds of probabilistic parsing action models that can compute the entire dependency tree ’ s probability .",we propose two kinds of probabilistic models defined on parsing actions to compute the probability of entire sentence . we have measured the performance of the segmenters with the windowdiff metric .,we evaluate using the standard penalty metrics p k and windowdiff .
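One pair above uses the linear SVM classifier from scikit-learn. A minimal sketch with tf-idf features; the two training texts and binary labels are toy assumptions:

```python
# Linear SVM text classifier (LinearSVC) over tf-idf features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["offensive comment", "friendly reply"]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["friendly comment"]))
```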
"for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .",we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . "recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language .",mikolov et al proposed the word2vec method for learning continuous vector representations of words from large text datasets . translation performances are measured with case-insensitive bleu4 score .,translation quality is measured in truecase with bleu on the mt08 test sets . we apply the 3-phase learning procedure proposed by where we first create word embeddings based on the skip-gram model .,"with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings ." the smt systems were built using the moses toolkit .,the system was trained using the moses toolkit . for this step we used regular expressions and nltk to tokenize the text .,we used nltk wordnet synsets for obtaining the ambiguity of the word . "for the sick and msrvid experiments , we used 300-dimension glove word embeddings .","our first layer was a 200-dimensional embedding layer , using the glove twitter embeddings ." we pre-train the word embeddings using word2vec .,word2vec is an appropriate tool for this problem . we used the scikit-learn library the svm model .,we used the scikit-learn implementation of svrs and the skll toolkit . the corpus is automatically tagged and lemmatised by treetagger .,the web-derived ukwac is already tokenized and pos-tagged with the treetagger . the language models were 5-gram models with kneser-ney smoothing built using kenlm .,the language model is a 5-gram with interpolation and kneserney smoothing . stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given .,stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it . part-of-speech tagging is the process of assigning to a word the category that is most probable given the sentential context ( cite-p-4-1-2 ) .,part-of-speech tagging is the problem of determining the syntactic part of speech of an occurrence of a word in context . for language modeling we used the kenlm toolkit for standard n-gram modeling with an n-gram length of 5 .,we built a trigram language model with kneser-ney smoothing using kenlm toolkit . in our approach is to reduce the tasks of content selection ( “ what to say ” ) and surface realization ( “ how to say ” ) into a common parsing problem .,a key insight in our approach is to reduce content selection and surface realization into a common parsing problem . we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump .,we use the word2vec framework in the gensim implementation to generate the embedding spaces . "for all models , we use the 300-dimensional glove word embeddings .","for input representation , we used glove word embeddings ." we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers . 
the negated event is the event or entity whose absence the negation indicates or whose occurrence it denies .,the negated event is the property that is negated by the cue . "for feature building , we use word2vec pre-trained word embeddings .",we use 300 dimension word2vec word embeddings for the experiments . bleu is the most commonly used metric for mt evaluation .,bleu is used as a standard evaluation metric . these attributes were computed using stanford core nlp .,these features were extracted using stanford corenlp . hochreiter and schmidhuber proposed long short-term memories as the specific version of rnn designed to overcome the vanishing and exploding gradient problem .,lstms were introduced by hochreiter and schmidhuber in order to mitigate the vanishing gradient problem . we use a minibatch stochastic gradient descent algorithm together with the adam optimizer .,we use the adam optimizer for the gradient-based optimization . "for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .",we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context .,word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context . "for estimating the monolingual we , we use the cbow algorithm as implemented in the word2vec package using a 5-token window .","as monolingual baselines , we use the skip-gram and cbow methods of mikolov et al as implemented in the gensim package ." "in this baseline , we applied the word embedding trained by skipgram on wiki2014 .","in our experiment , word embeddings were 200-dimensional as used in , trained on gigaword with word2vec ." "we use the moses toolkit to create a statistical phrase-based machine translation model built on the best pre-processed data , as described above .","as a baseline system , we used the moses statistical machine translation package to build grapheme-based and phoneme-based translation systems , using a bigram language model ." the use of various synchronous grammar based formalisms has been a trend for statistical machine translation .,several recent syntax-based models for machine translation can be seen as instances of the general framework of synchronous grammars and tree transducers . the translation quality is evaluated by case-insensitive bleu-4 metric .,the translations are evaluated in terms of bleu score . we ran mt experiments using the moses phrase-based translation system .,our method involved using the machine translation software moses . information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .,information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents . we use the attention-based nmt model introduced by bahdanau et al as our text-only nmt baseline .,we use the attentive nmt model introduced by bahdanau et al as our text-only nmt baseline . "recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language .","more recently , mikolov et al propose two log-linear models , namely the skip-gram and cbow model , to efficiently induce word embeddings ."
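The LSTM of Hochreiter and Schmidhuber, referenced above, is available directly in PyTorch; a toy sketch with illustrative shapes only:

```python
# A single LSTM layer over a batch of embedded token sequences.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=200, hidden_size=100, batch_first=True)
x = torch.randn(4, 25, 200)          # 4 sequences, 25 tokens, 200-d embeddings
output, (h_n, c_n) = lstm(x)
print(output.shape)                  # torch.Size([4, 25, 100])
```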
the srilm toolkit was used for training the language models using kneser-ney smoothing .,the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing . "mikolov et al , 2013a , builds a translation matrix using linear regression that transforms the source language word vectors to the target language space .","( mikolov et al , 2013a ) proposes skip-gram and continuous bag-of-words models based on a single-layer network architecture ." "for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .",we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . "in this paper , we propose to use the von mises-fisher distribution .","in this work , we use vmf as the observational distribution ." we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing .,"we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option ." we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set .,"for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus ." we use the moses smt toolkit to test the augmented datasets .,our baseline is the smt toolkit moses run over letter strings rather than word strings . sentiment analysis is an nlp task that deals with extraction of opinion from a piece of text on a topic .,sentiment analysis is a research area in the field of natural language processing . text segmentation is the task of splitting text into segments by placing boundaries within it .,"text segmentation is the task of dividing text into segments , such that each segment is topically coherent , and cutoff points indicate a change of topic ( cite-p-15-1-8 , cite-p-15-3-4 , cite-p-15-1-3 ) ." models were built and interpolated using srilm with modified kneser-ney smoothing and the default pruning settings .,"language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 ." parameter optimization is performed with the diagonal variant of adagrad with minibatches .,"to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad ." semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles .,semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence . extensive experiments have leveraged word embeddings to find general semantic relations .,it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words . "sentiment analysis ( sa ) is a hot topic in the academic world , and also in the industry .","sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( cite-p-17-1-0 ) ." "to start with , we replace word types with corresponding neural language model representations estimated using the skip-gram model .","based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus ."
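The first pair above builds a translation matrix by linear regression, mapping source-language word vectors into the target-language space. A numpy least-squares sketch; the five seed pairs and the 4- and 3-dimensional spaces are toy assumptions:

```python
# Learn W minimizing ||XW - Y||^2 over seed translation pairs, then map
# new source vectors into the target space via X @ W.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))          # source-language vectors for seed pairs
Y = rng.normal(size=(5, 3))          # target-language vectors for the same pairs

W, *_ = np.linalg.lstsq(X, Y, rcond=None)
print((X @ W).shape)                 # mapped source vectors, (5, 3)
```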
"further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .",we use srilm for training a trigram language model on the english side of the training data . we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training .,we optimized each system separately using minimum error rate training . "specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings .",we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings . we measure the translation quality using a single reference bleu .,we evaluated the translation quality using the bleu-4 metric . the source of bilingual data used in the experiments is the europarl collection .,experiments were performed using the publicly available europarl corpora for the english-french language pair . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,"further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus ." "for sentences , we tokenize each sentence by stanford corenlp and use the 300-d word embeddings from glove to initialize the models .",we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings . "also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove .",we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence . coreference resolution is the task of determining which mentions in a text refer to the same entity .,"although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors ." the translations are evaluated in terms of bleu score .,table 1 shows the translation performance by bleu . we tag the source language with the stanford pos tagger .,"for feature extraction , we used the stanford pos tagger ." "we use the moses translation system , and we evaluate the quality of the automatically produced translations by using the bleu evaluation tool .","we use moses , a statistical machine translation system that allows training of translation models ." "in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context . the weights of the different feature functions were optimised by means of minimum error rate training .,the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set . we implement the weight tuning component according to the minimum error rate training method .,then we use the standard minimum error-rate training to tune the feature weights to maximize the system潞s bleu score . we parsed the corpus with rasp and with the stanford pcfg parser .,"we obtained parse trees using the stanford parser , and used jacana for word alignment ." 
we use a variant of the publicly available madamira tool for the arabic msa-egy pair .,we use a morphological analyzer for arabic called madamira . shen et al proposed a target dependency language model for smt to employ target-side structured information .,"shen et al proposed a string-to-dependency model , which restricted the target-side of a rule by dependency structures ." we measure the translation quality with automatic metrics including bleu and ter .,we evaluate the translation quality using the case-sensitive bleu-4 metric . "for evaluation , we used the case-insensitive bleu metric with a single reference .",we used the bleu score to evaluate the translation accuracy with and without the normalization . "semantic parsing is the task of translating natural language utterances to a formal meaning representation language ( cite-p-16-1-6 , cite-p-16-3-6 , cite-p-16-1-8 , cite-p-16-3-7 , cite-p-16-1-0 ) .","semantic parsing is the task of mapping natural language sentences into logical forms which can be executed on a knowledge base ( cite-p-18-5-13 , cite-p-18-5-14 , cite-p-18-3-6 , cite-p-18-5-8 , cite-p-18-3-15 , cite-p-18-3-9 ) ." this means in practice that the language model was trained using the srilm toolkit .,a 4-gram language model is trained by the srilm toolkit . "we used yamcha as a text chunker , which is based on support vector machine .","we used yamcha 1 , which is a general purpose svm-based chunker ." we used the google news pretrained word2vec word embeddings for our model .,we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset . we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization .,"in this task , we use the 300-dimensional 840b glove word embeddings ." nenkova et al found that high frequency word entrainment in dialogue is correlated with engagement and task success .,"nenkova et al found that entrainment on high-frequency words was correlated with naturalness , task success , and coordinated turn-taking behavior ." the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,the srilm language modelling toolkit was used with interpolated kneser-ney discounting . "we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .",our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing . these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .,language models of order 5 have been built and interpolated with srilm and kenlm . word alignment is a fundamental problem in statistical machine translation .,word alignment is a well-studied problem in natural language computing . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,our 5-gram language model is trained by the sri language modeling toolkit . similarity is the intrinsic ability of humans and some animals to balance commonalities and differences when comparing objects that are not identical .,similarity is a kind of association implying the presence of characteristics in common . our baseline is a phrase-based mt system trained using the moses toolkit .,the promt smt system is based on the moses open-source toolkit .
"for this reason , we used glove vectors to extract the vector representation of words .",we use the glove vectors of 300 dimension to represent the input words . we utilize a maximum entropy model to design the basic classifier used in active learning for wsd .,the integrated dialect classifier is a maximum entropy model that we train using the liblinear toolkit . we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .,we perform the mert training to tune the optimal feature weights on the development set . "shutova defined metaphor interpretation as a paraphrasing task , where literal paraphrases for metaphorical expressions are derived from corpus data using a set of statistical measures .",shutova defined metaphor interpretation as a paraphrasing task and presented a method for deriving literal paraphrases for metaphorical expressions from the bnc . we adopt two standard metrics rouge and bleu for evaluation .,we first use bleu score to perform automatic evaluation . we use the treebanks from the conll shared tasks on dependency parsing for evaluation .,the treebank data in our experiments are from the conll shared-tasks on dependency parsing . "an affective lexicon , wordnet-affect was used to identify words with emotional content in the text .",the wordnet-affect resource was employed for obtaining the affective terms . coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .,"coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts ." "although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .",coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set . all the feature weights were trained using our implementation of minimum error rate training .,the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training . "from this , we extract an old domain sense dictionary , using the moses mt framework .",we use the moses statistical mt toolkit to perform the translation . this approach relies on word embeddings for the computation of semantic relatedness with word2vec .,we use pre-trained word2vec word vectors and vector representations by tilk et al to obtain word-level similarity information . the character embeddings are computed using a method similar to word2vec .,all word vectors are trained on the skipgram architecture . the bleu metric was used for translation evaluation .,translation results are evaluated using the word-based bleu score . twitter is a social platform which contains rich textual content .,twitter is a microblogging service that has 313 million monthly active users 1 . our phrase-based mt system is trained by moses with standard parameters settings .,our machine translation system is a phrase-based system using the moses toolkit . the evaluation metric for the overall translation quality was case-insensitive bleu4 .,the translation quality is evaluated by caseinsensitive bleu-4 metric . 
we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .,we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit . we use srilm with its default parameters for this purpose .,we use the srilm toolkit to compute our language models . we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu .,we tune model weights using minimum error rate training on the wmt 2008 test data . unpruned language models were trained using lmplz which employs modified kneser-ney smoothing .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." the models were implemented using scikit-learn module .,all linear models were trained with the perceptron update rule . note that we use the naive bayes multinomial classifier in weka for classification .,"following , we use the naïve bayes model implemented in weka for candidate phrase selection ." "relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .","relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text ." a language model is a probability distribution over strings p ( s ) that attempts to reflect the frequency with which each string s occurs as a sentence in natural text .,the language model defined by the expression is named the conditional language model . we develop a semantic parser for this corpus .,we also develop a semantic parser for this corpus . "for hindi , dependency annotation is done using paninian framework .",dependency annotation for hindi is based on paninian framework for building the treebank . word embeddings have been trained using word2vec 4 tool .,the embeddings were trained over the english wikipedia using word2vec . "part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information .",part-of-speech ( pos ) tagging is a job to assign a proper pos tag to each linguistic unit such as word for a given sentence . we trained a 4-gram language model on this data with kneser-ney discounting using srilm .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . "motivated by the idea of addressing word confidence estimation problem as a sequence labeling process , we employ the conditional random fields for our model training , with wapiti toolkit .","motivated by the idea of addressing wce problem as a sequence labeling process , we employ the conditional random fields for our model training , with wapiti toolkit ." we then follow standard heuristics and filtering strategies to extract hierarchical phrases from the union of the directional word alignments .,we use the cube pruning method to approximately intersect the translation forest with the language model . semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence .,semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles .
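One pair above defines a language model as a probability distribution over strings p(s). The n-gram factorization behind the Kneser-Ney-smoothed models in this section, written out as the chain rule plus an (n-1)-word Markov assumption:

```latex
% Chain-rule decomposition and the n-gram approximation.
P(s) = \prod_{i=1}^{|s|} P(w_i \mid w_1, \dots, w_{i-1})
     \approx \prod_{i=1}^{|s|} P(w_i \mid w_{i-n+1}, \dots, w_{i-1})
```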
"to be the only parse , the reduction in ppl ¡ª relative to a 3-gram baseline .",the best model achieved an overall wer improvement of 10 % relative to the 3-gram baseline . we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .,we use srilm for training a trigram language model on the english side of the training corpus . "sentence compression is the task of producing a shorter form of a grammatical source sentence , so that the new form will still be grammatical and it will retain the most important information of the source .","sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ." we trained the statistical phrase-based systems using the moses toolkit with mert tuning .,we used the moses toolkit for performing statistical machine translation . "in order to present a comprehensive evaluation , we evaluated the accuracy of each model output using both bleu and chrf3 metrics .","we evaluated the proposed method using four evaluation measures , bleu , nist , wer , and per ." the translation quality is evaluated by case-insensitive bleu-4 metric .,the translation results are evaluated by caseinsensitive bleu-4 metric . "in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages .","we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training ." our hierarchical phrase-based system is similar to the one described in .,"our baseline system is re-implementation of hiero , a hierarchical phrase-based system ." "we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .","to train our neural algorithm , we apply word embeddings of a look-up from 100-d glove pre-trained on wikipedia and gigaword ." framenet is a lexicalsemantic resource manually built by fn experts .,framenet is a lexico-semantic resource focused on semantic frames . translation performance was measured by case-insensitive bleu .,the translation systems were evaluated by bleu score . "we also use editor score as an outcome variable for a linear regression classifier , which we evaluate using 10-fold cross-validation in scikit-learn .",we train and evaluate a l2-regularized logistic regression classifier with the liblin-ear solver as implemented in scikit-learn . "neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models .","neural networks , working on top of conventional n-gram models , have been introduced in as a potential means to improve conventional n-gram language models ." "for our experiments , we use a phrase-based translation system similar to moses .",our translation system is an in-house phrasebased system analogous to moses . this paper presents an unsupervised learning approach to building a non-english .,this paper presents an unsupervised learning approach to non-english stemming . 
"dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation .",dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages . we then implemented our model using moses toolkit with kenlm as the language model in 5-gram setting .,we used a phrase-based smt model as implemented in the moses toolkit . "in our experiments , we choose to use the published glove pre-trained word embeddings .",we use the glove pre-trained word embeddings for the vectors of the content words . "in this task , we used conditional random fields .",we use the mallet implementation of conditional random fields . "sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .","sentiment analysis is a growing research field , especially on web social networks ." we train attentional sequence-to-sequence models implemented in nematus .,our semantic parser is implemented as a neural sequence-to-sequence model with attention . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,we implement an in-domain language model using the sri language modeling toolkit . this section describes the classic hidden markov model based alignment model .,"in this section , we briefly review the hmm alignment model ." the language model was generated from the europarl corpus using the sri language modeling toolkit .,language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . word embedding provides an unique property to capture semantics and syntactic information of different words .,the word embeddings can provide word vector representation that captures semantic and syntactic information of words . we measure the translation quality with automatic metrics including bleu and ter .,"we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." "coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .","coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities ." coreference resolution is the process of linking multiple mentions that refer to the same entity .,coreference resolution is the process of linking together multiple expressions of a given entity . we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .,we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit . "in recent years , error mining techniques have been developed to help identify the most likely sources of parsing failure .","in recent years , error mining techniques have been developed to help identify the most likely sources of parsing failure ( cite-p-15-3-2 , cite-p-15-3-1 , cite-p-15-1-4 ) ." the matrix was then normalized with pointwise mutual information .,the matrix is weighted using positive pointwise mutual information . 
cohesion can be defined as a set of resources linking within a text that organize the text together ( cite-p-16-1-12 ) .,cohesion is a surface-level property of well-formed texts . "we use the svm implementation from scikit-learn , which in turn is based on libsvm .",we use a set of 318 english function words from the scikit-learn package . "we used moses , a phrase-based smt toolkit , for training the translation model .",we use the moses package to train a phrase-based machine translation model . "word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .",word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context . coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .,coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,the model was built using the srilm toolkit with backoff and kneser-ney smoothing . the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model .,"in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm ." "for the classification task , we use pre-trained glove embedding vectors as lexical features .",we use the glove vector representations to compute cosine similarity between two words . the translation models are included within a log-linear model which allows a weighted combination of features functions .,these models are usually regarded as features and combined with scaling factors to form a log-linear model . "the annotation scheme is derived from the universal stanford dependencies , the google universal part-of-speech tags and the interset interlingua for morphological tagsets .","the ud scheme is built on the google universal part-of-speech tagset , the interset interlingua of morphosyntactic features , and stanford dependencies ." for the classifiers we use the scikit-learn machine learning toolkit .,"we use a random forest classifier , as implemented in scikit-learn ." we report the mt performance using the original bleu metric .,we report case-sensitive bleu and ter as the mt evaluation metrics . the language models in this experiment were trigram models with good-turing smoothing built using srilm .,a 4-gram language model was trained on the monolingual data by the srilm toolkit . "relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .",relation extraction is the task of finding semantic relations between two entities from text . we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . 
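A couple of pairs above compute cosine similarity between pre-trained GloVe vectors. A minimal loader for GloVe's plain-text format plus cosine similarity; the file name glove.6B.50d.txt refers to the standard public release and is assumed to be present locally.

```python
# Load GloVe vectors from their plain-text format and compare two words.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")   # word followed by its components
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

glove = load_glove("glove.6B.50d.txt")         # assumed local copy of 50-d GloVe
print(cosine(glove["king"], glove["queen"]))
```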
we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings .,we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus . "kalchbrenner et al , 2014 ) proposes a cnn framework with multiple convolution layers , with latent , dense and low-dimensional word embeddings as inputs .",kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence . "the penn discourse treebank is the largest available corpus of annotations for discourse relations , covering one million words of the wall street journal .",the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses . "another corpus has been annotated for discourse phenomena in english , the penn discourse treebank .",major discourse annotated resources in english include the rst treebank and the penn discourse treebank . zens and ney showed that itg constraints allow a higher flexibility in word-ordering for longer sentences than the conventional ibm model .,zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english . semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences .,"semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis ." we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." results are reported using case-insensitive bleu with a single reference .,translation results are evaluated using the word-based bleu score . we use 300-dimensional word embeddings from glove to initialize the model .,we use pre-trained 50-dimensional word embeddings vector from glove . "sentiment classification is a useful technique for analyzing subjective information in a large number of texts , and many studies have been conducted ( cite-p-15-3-1 ) .","sentiment classification is a well studied problem ( cite-p-13-3-6 , cite-p-13-1-14 , cite-p-13-3-3 ) and in many domains users explicitly provide ratings for each aspect making automated means unnecessary ." feature weights were trained with minimum error-rate training on the news-test2008 development set using the dp beam search decoder and the mert implementation of the moses toolkit .,"the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration ." "using word2vec , we compute word embeddings for our text corpus .","in this run , we use a sentence vector derived from word embeddings obtained from word2vec ." "in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text . "we used moses , a state-of-the-art phrase-based smt model , in decoding .",we implemented our method in a phrase-based smt system . 
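The first pair above derives 100-dimensional skip-gram vectors with word2vec. A sketch using gensim's implementation (gensim >= 4.0 API assumed); the two-sentence corpus is a toy stand-in for the domain corpus the papers actually train on.

```python
# Train 100-dimensional skip-gram word embeddings with gensim's word2vec.
from gensim.models import Word2Vec

corpus = [["we", "train", "a", "language", "model"],
          ["we", "train", "word", "embeddings"]]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # embedding dimensionality
    window=5,          # context window size
    min_count=1,       # keep every word in this toy corpus
    sg=1,              # 1 = skip-gram, 0 = CBOW
)
print(model.wv["train"].shape)  # (100,)
```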
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,a 3-gram language model is trained on the target side of the training data by the srilm toolkits with modified kneser-ney smoothing . we implement logistic regression with scikit-learn and use the lbfgs solver .,we used the logistic regression implemented in the scikit-learn library with the default settings . twitter is a microblogging site where people express themselves and react to content in real-time .,"twitter is a communication platform which combines sms , instant messages and social networks ." "we follow ji and eisenstein , exploiting a transition-based framework for rst discourse parsing .","we follow previous studies , conducting experiments by using the rst discourse treebank ." unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks .,continuous representation of words and phrases are proven effective in many nlp tasks . sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed .,sentence compression is the task of generating a grammatical and shorter summary for a long sentence while preserving its most important information . system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set .,parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set . we use a support vector machine -based chunker yamcha for the chunking process .,"we used yamcha as a text chunker , which is based on support vector machine ." we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .,a knsmoothed 5-gram language model is trained on the target side of the parallel data with srilm . the statistical significance test is performed by the re-sampling approach .,the statistical significance test is performed using the re-sampling approach . discourse segmentation is the task of identifying coherent clusters of sentences and the points of transition between those groupings .,discourse segmentation is the first step in building a discourse parser . "in particular , collobert et al and turian et al learn word embeddings to improve the performance of in-domain pos tagging , named entity recognition , chunking and semantic role labelling .","collobert et al used word embeddings as the input of various nlp tasks , including part-of-speech tagging , chunking , ner , and semantic role labeling ." relation extraction is the task of detecting and classifying relationships between two entities from text .,"relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text ." top accuracy on the entire data set and on the semantic subset was reached by mikolov et al using a skip-gram predict model .,mikolov et al showed that the sg algorithm achieves better accuracies in tested cases . our system is built using the open-source moses toolkit with default settings .,we use a pbsmt model built with the moses smt toolkit . 
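The significance test by re-sampling mentioned above is typically the paired bootstrap of Koehn (2004). A minimal sketch over per-sentence metric scores; scoring per sentence is a simplifying assumption here, since corpus-level BLEU would re-aggregate n-gram counts within each replicate.

```python
# Paired bootstrap re-sampling: how often does system A beat system B
# when the test set is resampled with replacement?
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)       # resample the test set
        if scores_a[idx].mean() > scores_b[idx].mean():
            wins += 1
    return wins / n_samples                    # fraction of replicates A wins

sys_a = [0.31, 0.42, 0.28, 0.55, 0.37]         # toy per-sentence scores
sys_b = [0.30, 0.40, 0.25, 0.52, 0.36]
print(paired_bootstrap(sys_a, sys_b))
```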
the baseline lm was a regular n-gram lm with kneser-ney smoothing and interpolation by means of the srilm toolkit .,the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . note that we use the naive bayes multinomial classifier in weka for classification .,we use the weka toolkit and the derived features to train a naive-bayes classifier . "we then learn reranking weights using minimum error rate training on the development set for this combined list , using only these two features .",we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights . we trained word vectors with the two architectures included in the word2vec software .,we chose the skip-gram model provided by word2vec tool developed by for training word embeddings . coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .,"coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities ." named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding .,"named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on ." otero et al used wikipedia categories as the restriction to detect the equivalents within small-scale reliable candidates .,otero et al took advantage of the translation equivalents inserted in wikipedia by means of interlanguage links to extract similar articles . we also measure overall performance with uncased bleu .,we report the mt performance using the original bleu metric . "as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model .",we use a cws-oriented model modified from the skip-gram model to derive word embeddings . "socher et al , 2012 , uses a recursive neural network in relation extraction , and further use lstm .",socher et al present a compositional model based on a recursive neural network . "here , for textual representation of captions , we use fisher-encoded word2vec features .",we use word2vec as the vector representation of the words in tweets . coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .,"coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities ." we use corpus-level bleu score to quantitatively evaluate the generated paragraphs .,"we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." "moreover , throughout this paper we use the hierarchical phrase-based translation system , which is based on a synchronous contextfree grammar model .",the work described in this paper is based on the smt framework of hierarchical phrase-based translation .
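The pairs above use Weka's multinomial naive Bayes classifier; since most other classifiers in this collection are scikit-learn-based, here is the analogous scikit-learn pipeline as a stand-in (not the Weka code itself), on toy documents and labels.

```python
# Multinomial naive Bayes over bag-of-words counts, scikit-learn version.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["great translation quality", "poor translation output",
        "great fluent output", "poor disfluent quality"]
labels = [1, 0, 1, 0]                 # hypothetical binary classes

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["fluent translation"]))
```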
the pun is defined as “ a joke exploiting the different possible meanings of a word or the fact that there are words which sound alike but have different meanings ” ( cite-p-7-1-6 ) .,"a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect ." all of our parsing models are based on the transition-based dependency parsing paradigm .,the dependency parser we use is an implementation of a transition-based dependency parser . we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing . beam search decoding is effective with a small beam size .,fast decoding is achieved by using a novel multiple-beam search algorithm . "name tagging is a key task for language understanding , and provides input to several other tasks such as question answering , summarization , searching and recommendation .",name tagging is a critical early stage in many natural language processing pipelines . "we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively .","for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." the advent of the supervised method proposed by gildea and jurafsky has led to the creation of annotated corpora for semantic role labeling .,"there has been a substantial amount of work on automatic semantic role labeling , starting with the statistical model of gildea and jurafsky ." "we use three standard human judgements datasets -mc , rg and wordsim353 , composed of 30 , 65 , and 353 pairs of terms respectively .","specifically , we used wordsim353 , a benchmark dataset , consisting of relatedness judgments for 353 word pairs ." we extract the named entities from the web pages using the stanford named entity recognizer .,"we follow this practice here , and additionally detect person names at decode-time using the stanford named entity recognizer ." we use the popular word2vec 1 tool proposed by mikolov et al to extract the vector representations of words .,"to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus ." we used 300-dimensional pre-trained glove word embeddings .,we apply a pretrained glove word embedding on . we also measure overall performance with uncased bleu .,we first use bleu score to perform automatic evaluation . we evaluate the performance of different translation models using both bleu and ter metrics .,"we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .,the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique . we utilized pre-trained global vectors trained on tweets .,we use pre-trained word vectors from glove . "a pun is a form of wordplay in which one signifier ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , or phonological similarity to another signifier , for an intended humorous or rhetorical effect .","a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 ) ."
"these word vectors can be randomly initialized , or be pre-trained from text corpus with learning algorithms .",the embedded word vectors are trained over large collections of text using variants of neural networks . our approach to relation embedding is based on a variant of the glove word embedding model .,our word embeddings is initialized with 100-dimensional glove word embeddings . this task can be formulated as a topic modeling problem for which we chose to employ latent dirichlet allocation .,"to do so , we utilized the popular latent dirichlet allocation , topic modeling method ." all the weights of those features are tuned by using minimal error rate training .,all features were log-linearly combined and their weights were optimized by performing minimum error rate training . we show that ltag-based features improve on the best known set of features used in current srl .,our experimental results show that ltagbased features can help improve the performance of srl systems . we evaluated translation quality using uncased bleu and ter .,we evaluated the translation quality using the case-insensitive bleu-4 metric . a language model is a probability distribution over strings p ( s ) that attempts to reflect the frequency with which each string s occurs as a sentence in natural text .,"traditionally , a language model is a probabilistic model which assigns a probability value to a sentence or a sequence of words ." "we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn .",to train the models we use the default stochastic gradient descent classifier provided by scikit-learn . we used the pb smt system in moses 12 for je and kj translation tasks .,"for phrase-based smt translation , we used the moses decoder and its support training scripts ." the srilm toolkit was used to build the trigram mkn smoothed language model .,"in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit ." "in collobert et al , the authors proposed a unified cnn architecture to tackle various nlp problems traditionally handled with statistical approaches .","collobert et al employ a cnn-crf structure , which obtains competitive results to statistical models ." we use the moses package to train a phrase-based machine translation model .,we implemented our method in a phrase-based smt system . "similar to goldwater and griffiths and johnson , toutanova and johnson also use bayesian inference for pos tagging .",goldwater and griffiths employ a bayesian approach to pos tagging and use sparse dirichlet priors to minimize model size . we obtained both phrase structures and dependency relations for every sentence using the stanford parser .,"we obtained parse trees using the stanford parser , and used jacana for word alignment ." "the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit .",the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
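Latent Dirichlet Allocation, which the pairs above employ for topic modeling, in its scikit-learn form; the four toy documents and the choice of two topics are illustrative assumptions.

```python
# LDA topic modeling over a toy document collection with scikit-learn.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the parser uses a dependency grammar",
        "the language model uses n-gram counts",
        "dependency parsing with neural networks",
        "smoothing improves n-gram language models"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)     # per-document topic proportions
print(doc_topics.round(2))
```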
named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type .,"named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on ." the most widely used topic modeling approach is the latent dirichlet allocation which is based on latent semantic analysis and probabilistic latent semantic analysis .,popular topic modeling techniques include latent dirichlet allocation and probabilistic latent semantic analysis . target language models were trained on the english side of the training corpus using the srilm toolkit .,"the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized ." we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .,we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training . semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles .,semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .,"in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit ." we initialize these word embeddings with glove vectors .,we use the glove word vector representations of dimension 300 . relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .,relation extraction is the task of recognizing and extracting relations between entities or concepts in texts . lexical simplification is the task to find and substitute a complex word or phrase in a sentence with its simpler synonymous expression .,lexical simplification is a technique that substitutes a complex word or phrase in a sentence with a simpler synonym . "thus , zesch and gurevych used a semi-automatic process to create word pairs from domain-specific corpora .","thus , zesch and gurevych semi-automatically created word pairs from domain-specific corpora ." crowdsourcing is a promising new mechanism for collecting large volumes of annotated data at low cost .,crowdsourcing is a viable mechanism for creating training data for machine translation . for data preparation and processing we use scikit-learn .,we use the linear svm classifier from scikit-learn . distributed representations for words and sentences have been shown to significantly boost the performance of a nlp system .,previous work showed that word clusters derived from an unlabelled dataset can improve the performance of many nlp applications . the word embeddings are initialized with the publicly available word vectors trained through glove 5 and updated through back propagation .,word embeddings are initialized with glove 27b trained on tweets and are trainable parameters . "for the language model , we used srilm with modified kneser-ney smoothing .","we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing ." 
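Several pairs above train a linear SVM via scikit-learn or liblinear; scikit-learn's LinearSVC wraps the liblinear library, so one sketch covers both. The tf-idf features and toy relation labels are stand-ins for the cited setups.

```python
# Linear SVM text classifier (LinearSVC is backed by liblinear).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["question about location", "affiliation between persons",
        "where is the city located", "the person works for the company"]
labels = ["location", "affiliation", "location", "affiliation"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["which city is it located in"]))
```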
soricut and marcu use a standard bottomup chart parsing algorithm to determine the discourse structure of sentences .,soricut and marcu address the task of parsing discourse structures within the same sentence . we use word2vec tool for learning distributed word embeddings .,we use the cbow model for the bilingual word embedding learning . we use the skll and scikit-learn toolkits .,we implemented the different aes models using scikit-learn . hierarchical phrase-based translation has emerged as one of the dominant current approaches to statistical machine translation .,hierarchical phrase-based translation is one of the current promising approaches to statistical machine translation . we use five datasets from the conll-x shared task .,the data come from the conll-x and conll 2007 shared tasks . kulkarni et al used a synthetic task to evaluate how well diachronic distributional models can detect semantic shift .,kim et al and kulkarni et al computed the degree of meaning change by applying neural networks for word representation . the character embeddings are computed using a method similar to word2vec .,the model parameters of word embedding are initialized using word2vec . we implement classification models using keras and scikit-learn .,"we use a random forest classifier , as implemented in scikit-learn ." "however , ccg is a binary branching grammar , and as such , can not leave np structure underspecified .","ccg is a strongly lexicalized formalism , in which every word is associated with a syntactic category ( similar to an elementary syntactic structure ) indicating its subcategorization potential ." "visargue system offers the first web-based , interactive visual analytics approach of multi-party discourse data .",the visargue framework provides a novel visual analytics toolbox for exploratory and confirmatory analyses of multi-party discourse data . seki et al proposed a probabilistic model for zero pronoun detection and resolution that uses hand-crafted case frames .,seki et al proposed a probabilistic model for the sub-tasks of anaphoric identification and antecedent identification with the help of a verb dictionary . "discourse parsing is a challenging natural language processing ( nlp ) task that has utility for many other nlp tasks such as summarization , opinion mining , etc . ( cite-p-17-3-3 ) .",discourse parsing is a challenging task and is crucial for discourse analysis . we use the stanford pos tagger to obtain the lemmatized corpora for the parss task .,"to generate these trees , we employ the stanford pos tagger 8 and the stack version of the malt parser ." "for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .",we used kenlm with srilm to train a 5-gram language model based on all available target language training data . relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .,relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources . coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) .,coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . 
the dominant approach for domain adaptation is training on large-scale out-of-domain data and then fine-tuning on the in-domain data .,"the conventional domain adaptation method is fine tuning , in which an out-of-domain model is further trained on indomain data ." "of such an extension , we present a complete , correct , terminating extension of earley ' s algorithm that uses restriction .","in section 4 , we develop a correct , complete and terminating extension of earley 's algorithm for the patr-ii formalism using the restriction notion ." "for nb and svm , we used their implementation available in scikit-learn .",we implemented linear models with the scikit learn package . "the similarity-based model showed error rates down to 0 . 16 , far lower than both em-based clustering and resnik ’ s wordnet model .","in the evaluation , the similarity-model shows lower error rates than both resnik’s wordnet-based model and the em-based clustering model ." "distributional models use statistics of word cooccurrences to predict semantic similarity of words and phrases , based on the observation that semantically similar words occur in similar contexts .",distributional semantic models build on the distributional hypothesis which states that the meaning of a word can be modelled by observing the contexts in which it is used . a 4-gram language model is trained on the monolingual data by srilm toolkit .,a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . "semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols .","semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures ." we use the stanford corenlp for obtaining pos tags and parse trees from our data .,we employ the sentiment analyzer in stanford corenlp to do so . "part-of-speech tagging is a key process for various tasks such as information extraction , text-to-speech synthesis , word sense disambiguation and machine translation .",part-of-speech tagging is the problem of determining the syntactic part of speech of an occurrence of a word in context . we use the mallet implementation of conditional random fields .,we define a conditional random field for this task . "in particular , rush et al proposed an approach for the abstractive summarization of sentences combining a neural language model with a contextual encoder .",rush et al proposed a sentence summarization framework based on a neural attention model using a supervised sequence-to-sequence neural machine translation model . most previous approaches that address bilingual lexicon extraction from comparable corpora are based on the standard approach .,most previous works addressing the task of bilingual lexicon extraction from comparable corpora are based on the standard approach . performance is measured in terms of bleu and ter computed using the multeval script .,the mt performance is measured with the widely adopted bleu and ter metrics . sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) .,"sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review ."
"in movies that fail the test , women are in fact portrayed as less-central and less-important characters .","indeed , movies that fail the test tend to portray women as less-important and peripheral characters ." we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .,"for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." we will show translation quality measured with the bleu score as a function of the phrase table size .,we measure translation performance by the bleu and meteor scores with multiple translation references . "if the anaphor is a pronoun but no referent is found in the cache , it is then necessary to operatingsearch memory .","the anaphor is a definite noun phrase and the referent is in focus , that is ." "in this work , we use a nmt system featuring long short-term memory units -in both the encoder and decoder-and equipped with an attention mechanism .","with reference to this system , we implement a data-driven parser with a neural classifier based on long short-term memory ." "sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) .","sentiment classification is a task of predicting sentiment polarity of text , which has attracted considerable interest in the nlp field ." text segmentation is the task of splitting text into segments by placing boundaries within it .,text segmentation is the task of determining the positions at which topics change in a stream of text . bunescu and mooney show that using dependency trees to generate the input sequence to a model performs well in relation extraction tasks .,bunescu and mooney designed a kernel along the shortest dependency path between two entities by observing that the relation strongly relies on sdps . "for the language model we use the corpus of 60,000 simple english wikipedia articles 3 and build a 3-gram language model with kneser-ney smoothing trained with srilm .",we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit . we use conditional random fields for sequence labelling .,for parameter training we use conditional random fields as described in . "relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .",relation extraction is the task of finding semantic relations between two entities from text . sentence compression is a paraphrasing task where the goal is to generate sentences shorter than given while preserving the essential content .,"sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ( cite-p-15-3-1 ) ." jiang and zhai recently proposed an instance re-weighting framework to take domain shift into account .,jiang and zhai proposed an instance re-weighting framework that handles both the settings . we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .,we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit . 
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .,coreference resolution is a field in which major progress has been made in the last decade . "semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis .",semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . "table 1 presents the results from the automatic evaluation , in terms of bleu and nist test .",table 1 shows the performance for the test data measured by case sensitive bleu . xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model .,"moreover , xing et al incorporated topic words into seq2seq frameworks , where topic words are obtained from a pre-trained l-da model ." we used svm classifier that implements linearsvc from the scikit-learn library .,"in all cases , we used the implementations from the scikitlearn machine learning library ." "we trained a subword model using bpe with 29,500 merge operations .","then , we applied 32k bpe operations , learned jointly over the source and target languages ." "besides , we propose a hierarchical neural attention mechanism to capture the sentiment attention .","meanwhile , we propose a hierarchical attention mechanism for the bilingual lstm network ." "for all the systems we train , we build n-gram language model with modified kneserney smoothing using kenlm .",we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .,sentiment analysis is a research area in the field of natural language processing . dependency parsing is a topic that has engendered increasing interest in recent years .,dependency parsing is a basic technology for processing japanese and has been the subject of much research . "sentiment analysis ( sa ) is a fundamental problem aiming to allow machines to automatically extract subjectivity information from text ( cite-p-16-5-8 ) , whether at the sentence or the document level ( cite-p-16-3-3 ) .","sentiment analysis ( sa ) is the task of analysing opinions , sentiments or emotions expressed towards entities such as products , services , organisations , issues , and the various attributes of these entities ( cite-p-9-3-3 ) ." conditional random fields are undirected graphical models trained to maximize a conditional probability .,conditional random fields are undirected graphical models of a conditional distribution . the translation quality is evaluated by caseinsensitive bleu-4 metric .,translation performance was measured by case-insensitive bleu . all input and output must conform to the format of the conll-2012 shared task on coreference resolution .,it can process raw text and data conforming to the format of the conll-2012 shared task on coreference resolution . "for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus .","we employ srilm toolkit to linearly interpolate the target side of the training corpus with the wmt english corpus , optimizing towards the mt tuning set ." 
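The BPE pairs above learn merge operations (29,500 and 32k respectively). A minimal merge learner in the style of Sennrich et al.'s algorithm; real systems use tools such as subword-nmt, and the word-frequency table here is a toy stand-in.

```python
# Learn byte-pair-encoding merges from a word-frequency dictionary.
import collections

def learn_bpe(word_freqs, num_merges):
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = collections.Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq            # count adjacent symbol pairs
        if not pairs:
            break
        best = max(pairs, key=pairs.get)         # most frequent pair
        merges.append(best)
        new_vocab = {}
        for word, freq in vocab.items():         # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 10))
```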
the evaluation metric is the case-insensitive bleu4 .,all systems are evaluated using case-insensitive bleu . sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) .,sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text . "the language models were trained with kneser-ney backoff smoothing using the sri language modeling toolkit , .",we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . this result is opposed to yamashita stating that scrambling is unrelated to information structure .,"however , these conclusions contradict yamashita claiming that information structure is not crucial for scrambling ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,"firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing ." shen et al propose the well-formed dependency structure to filter the hierarchical rule table .,"shen et al proposed a string-to-dependency model , which restricted the target-side of a rule by dependency structures ." we use the multi-class logistic regression classifier from the liblinear package 2 for the prediction of edit scripts .,we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package . then we perform minimum error rate training on validation set to give different features corresponding reasonable weights .,then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score . "supertagging is the tagging process of assigning the correct elementary tree of ltag , or the correct supertag , to each word of an input sentence 1 .",supertagging is a widely used speedup technique for lexicalized grammar parsing . "for our experiments , we use 40,000 sentences from europarl for each language pair following the basic setup of tiedemann .","for our spanish experiments , we randomly sample 2 , 000 sentence pairs from the spanish-english europarl v5 parallel corpus ." relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .,relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text . we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,"we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm ." the target-side language models were estimated using the srilm toolkit .,a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . ambiguity is a common feature of weps and wsd .,ambiguity is a problem in any natural language processing system . the scaling factors are tuned with mert with bleu as optimization criterion on the development sets .,all the feature weights and the weight for each probability factor are tuned on the development set with minimumerror-rate training .
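Case-insensitive (uncased) BLEU, as in the first pair above, is commonly computed with sacrebleu; a sketch assuming sacrebleu's lowercase option, with toy hypothesis and reference strings.

```python
# Case-insensitive corpus BLEU with sacrebleu.
import sacrebleu

hypotheses = ["The cat sits on the mat .", "A dog barks ."]
references = [["the cat sat on the mat .", "a dog barked ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references, lowercase=True)
print(f"BLEU = {bleu.score:.2f}")
```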
relation extraction is the task of finding relationships between two entities from text .,relation extraction is the task of detecting and characterizing semantic relations between entities from free text . we use the simplified factual statement extractor model 3 of heilman and smith .,"also , we compare our system with the rulebased system proposed by heilman and smith ." "in the reranking stage , we propose an exact 1-best search algorithm .","in this paper , we propose a novel forest reranking algorithm for dependency parsing ." we use the scikit-learn toolkit as our underlying implementation .,we use scikitlearn as machine learning library . we used the sri language modeling toolkit to train lms on our training data for each ilr level .,we used the srilm toolkit to generate the scores with no smoothing . "for evaluation , we measured the end translation quality with case-sensitive bleu .",we evaluated the translation quality using the bleu-4 metric . "this architecture is similar to the cbow model of , where the center word is replaced by a label .","hence , this model is similar to the skip-gram model in word embedding ." we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,we used kenlm with srilm to train a 5-gram language model based on all available target language training data . a 4-gram language model was trained on the monolingual data by the srilm toolkit .,a tri-gram language model is estimated using the srilm toolkit . "( blitzer et al , 2007 ) use structural correspondence learning to adapt the vocabulary of the various domains .",blitzer et al used structural correspondence learning to train a classifier on source data with new features induced from target unlabeled data . named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining .,named entity recognition ( ner ) is a frequently needed technology in nlp applications . we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit .,we trained the statistical phrase-based systems using the moses toolkit with mert tuning . we perform pre-training using the skipgram nn architecture available in the word2vec tool .,our cdsm feature is based on word vectors derived using a skip-gram model . we apply bi-directional long shortterm memory networks to encode an input utterance into a vector .,we use a bidirectional long short-term memory rnn to encode a sentence . "for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus .",we trained the embedding vectors with the word2vec tool on the large unlabeled corpus of clinical texts provided by the task organizers . we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data .,we use the open-source moses toolkit to build a phrase-based smt system trained on mostly msa data obtained from several ldc corpora including some limited da data . "here too , we used the weka implementation of the naïve bayes model and the svmlight implementation of the svm .",we used the weka implementation of naïve bayes for this baseline nb system . we used the penn treebank wall street journal corpus .,we used the penn wall street journal treebank as training and test data .
"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context . the srilm toolkit was used to build this language model .,all language models were trained using the srilm toolkit . word segmentation is the first step prior to word alignment for building statistical machine translations ( smt ) on language pairs without explicit word boundaries such as chinese-english .,"word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace ." we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .,"meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens ." we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .,the input to net are the pre-trained glove word embeddings of 300d trained on 840b tokens . "in this paper , the proposed model improves the acquirement ability for oov translation through web mining and solves the translation pair .","in this paper , an oov translation model is established based on the combination pattern of web mining and translation ranking ." "as a classifier , we chose support vector machines .",like we used support vector machines via the classifier svmlight . "for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword .","in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus ." twitter is a well-known social network service that allows users to post short 140 character status update which is called “ tweet ” .,"twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research ." "to extract part-of-speech tags , phrase structure trees , and typed dependencies , we use the stanford parser on both train and test sets .","we rely on the stanford parser , a treebank-trained statistical parser , for tokenization , part-of-speech tagging , and phrase-structure parsing ." the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .,"for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit ." we develop translation models using the phrase-based moses smt system .,we use the opensource moses toolkit to build a phrase-based smt system . we train a linear support vector machine classifier using the efficient liblinear package .,we used the support vector machine implementation from the liblinear library on the test sets and report the results in table 4 . "in this paper , we introduce a discriminatively trained , globally normalized log-linear model of lexical translation .","we introduce a discriminatively trained , globally normalized , log-linear variant of the lexical translation models proposed by cite-p-17-1-6 ." this baseline uses pre-trained word embeddings using word2vec cbow and fasttext .,this model uses multilingual word embeddings trained using fasttext and aligned using muse . 
we use srilm for training a trigram language model on the english side of the training corpus .,we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus . "we follow cite-p-31-3-9 , use freebase as source of distant supervision , and employ wikipedia as source of unlabelled text — .",our work is inspired by cite-p-31-3-9 who also use freebase as distant supervision source . "coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .",coreference resolution is the task of identifying all mentions which refer to the same entity in a document . we use the stanford part-of-speech tagger and chunker to identify noun and verb phrases in the sentences .,we use stanford log-linear partof-speech tagger to produce pos tags for the english side . we use the stanford dependency parser with the collapsed representation so that preposition nodes become edges .,"for english , we convert the ptb constituency trees to dependencies using the stanford dependency framework ." "during the last few years , smt systems have evolved from the original word-based approach to phrase-based translation systems .",the well-known phrasebased translation model has significantly advanced the progress of smt by extending translation units from single words to phrases . word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .,"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." our system is based on the phrase-based part of the statistical machine translation system moses .,"to that end , we use the state-of-the-art phrase based statistical machine translation system moses ." "since the english treebanks are in constituency format , we used the stanfordconverter to convert the parse trees to dependencies and ignored the arc labels .","for english , we convert the ptb constituency trees to dependencies using the stanford dependency framework ." translation performances are measured with case-insensitive bleu4 score .,"the translation performance was measured using the bleu and the nist mt-eval metrics , and word error rate ." "as for ej translation , we use the stanford parser to obtain english abstraction trees .",we use the collapsed tree formalism of the stanford dependency parser . the composite kernel consists of an entity kernel and a convolution parse tree kernel .,the composite kernel consists of two individual kernels : an entity kernel that allows for entity-related features and a convolution parse tree kernel that models syntactic information of relation examples . we experimented using the standard phrase-based statistical machine translation system as implemented in the moses toolkit .,we used the dataset made available by the workshop on statistical machine translation to train a german-english phrase-based system using the moses toolkit in a standard setup . morpa is a morphological parser developed for use in the text-to-speech conversion system .,morpa is a fully implemented parser developed for use in a text-to-speech conversion system .
semantic role labeling ( srl ) is the process of producing such a markup .,semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence . "gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting .","further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus ." "for phrase-based smt translation , we used the moses decoder and its support training scripts .",we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality . "in this paper , we integrate the context and glosses of the target word into a unified framework .","in this paper , we focus on how to integrate glosses into a unified neural wsd system ." we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .,we use the word2vec cbow model with a window size of 5 and a minimum frequency of 5 to generate 200-dimensional vectors . "phrase-based approaches to statistical machine translation have recently achieved impressive results , leading to significant improvements in accuracy over the original ibm models .",phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model . "wikipedia , as it is a popular choice due to its large and ever expanding coverage and its ability to keep up with world events on a timely basis .",wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines — if they are able to leverage its power . "the weights for the language model and the grammar , are tuned towards bleu using mert .",maximum phrase length is set to 10 words and the parameters in the log-linear model are tuned by mert . our phrase-based mt system is trained by moses with standard parameters settings .,we implemented our method in a phrase-based smt system . "following lample et al , the character-based representation is computed with a bi-lstm whose parameters are defined by users .","following lample et al , the character-based representation is constructed with a bi-lstm ." these models were implemented using the package scikit-learn .,the experiments were conducted with the scikit-learn tool kit . discourse parsing is the process of discovering the latent relational structure of a long form piece of text and remains a significant open challenge .,discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units . we use liblinear logistic regression module to classify document-level embeddings .,we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package . the models are built using the sri language modeling toolkit .,a 4-grams language model is trained by the srilm toolkit . we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .,"then , we trained word embeddings using word2vec ." "transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word .",phonetic translation across these pairs is called transliteration . 
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .,relation extraction is the task of detecting and classifying relationships between two entities from text . lda is a probabilistic generative model that can be used to uncover the underlying semantic structure of a document collection .,lda is a generative model that learns a set of latent topics for a document collection . pang and lee attempted to improve the performance of an svm classifier by identifying and removing objective sentences from the texts .,pang and lee use a graph-based technique to identify and analyze only subjective parts of texts . "unfortunately , the non-projective parsing problem is known to be np-hard for all but the simplest models .",exact non-projective parsing with such a 2-order model is intractable . "previous work has focused on congressional debates , company-internal discussions , and debates in online forums .","previous works on stance detection have focused on congressional debates , company-internal discussions , and debates in online forums ." the feature weights λ m are tuned with minimum error rate training .,the model parameters are trained using minimum error-rate training . we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we use the moses phrase-based mt system with standard features .,we work with the phrase-based smt framework as the baseline system . we use the mallet implementation of a maximum entropy classifier to construct our models .,we use a standard maximum entropy classifier implemented as part of mallet . language models were trained with the kenlm toolkit .,a 5-gram language model built using kenlm was used for decoding . coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) .,"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." "for word embeddings , we consider word2vec and glove .",we initialize these word embeddings with glove vectors . "for word embeddings , we used popular pre-trained word vectors from glove .","we tried the models with glove and with randomly initialized , learnable word embeddings ." we trained linear-chain conditional random fields as the baseline .,we also used support vector machines and conditional random fields . we use case-sensitive bleu-4 to measure the quality of translation result .,we measure the translation quality with automatic metrics including bleu and ter . we used the implementation of random forest in scikitlearn as the classifier .,we used the svm implementation provided within scikit-learn . "in this paper , we present an approach that leverages structured knowledge contained in fdts .","in this paper , we have presented an fdt-based model training approach to smt ." "for sampling nodes , non-interactive active learning algorithms exclude expert annotators ’ human labels .",non-interactive algorithms do not use human labels during the learning process .
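The feature weights λ m tuned by minimum error rate training above are the scaling factors of the standard log-linear translation model in the style of Och and Ney (2002); MERT searches for the weights that maximize BLEU on a development set. For reference, a sketch of the model and tuning objective:

```latex
P(e \mid f) \;=\;
\frac{\exp\!\left(\sum_{m=1}^{M} \lambda_m h_m(e, f)\right)}
     {\sum_{e'} \exp\!\left(\sum_{m=1}^{M} \lambda_m h_m(e', f)\right)},
\qquad
\hat{\boldsymbol{\lambda}} \;=\;
\operatorname*{arg\,max}_{\boldsymbol{\lambda}}\;
\mathrm{BLEU}\!\left(\hat{e}_{\boldsymbol{\lambda}}(f_1^S),\, r_1^S\right)
```

Here the h_m are feature functions (translation model scores, language model, reordering, penalties) and the λ_m are their weights.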
"for wordnet , we employ the basictokenizer built in bert to tokenize text , and look up synsets for each word using nltk .",we split each document into sentences using the sentence tokenizer of the nltk toolkit . "for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .","for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus ." semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form .,semantic parsing is the problem of mapping natural language strings into meaning representations . the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,"furthermore , we train a 5-gram language model using the sri language toolkit ." barzilay and mckeown extracted both single-and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization .,"barzilay and mckeown extract paraphrases from a monolingual parallel corpus , containing multiple translations of the same source ." "lei et al also use low-rank tensor learning in the context of dependency parsing , where like in our case dependencies are represented by conjunctive feature spaces .",lei et al employ three-way tensors to obtain a low-dimensional input representation optimized for parsing performance . "in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score .",we evaluated translation quality based on the caseinsensitive automatic evaluation score bleu-4 . ccg is a linguistically motivated categorial formalism for modeling a wide range of language phenomena .,ccgs are a linguistically-motivated formalism for modeling a wide range of language phenomena . "sentence compression is the task of compressing long , verbose sentences into short , concise ones .","sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ( cite-p-15-3-1 ) ." "to optimize the system towards a maximal bleu or nist score , we use minimum error rate training as described in .",we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg . "for the hierarchical phrase-based model we used the default moses rule extraction settings , which are taken from chiang .",we extract translation rules from a hypergraph for the hierarchical phrase-based system . the log-linear feature weights are tuned with minimum error rate training on bleu .,the parameter weights are optimized with minimum error rate training . wikipedia is a massively multilingual resource that currently hosts 295 languages and contains naturally annotated markups 2 and rich informational structures through crowdsourcing for 35 million articles in 3 billion words .,"wikipedia is a large , multilingual , highly structured , multi-domain encyclopedia , providing an increasingly large wealth of knowledge ." we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . 
"in the work of mikolov et al , they introduced two new architectures for estimating continuous representations of words using log-linear models , called continuous bag-of-word and continuous skip-gram .","mikolov et al further proposed continuous bagof-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors ." the log-linear feature weights are tuned with minimum error rate training on bleu .,each translation model is tuned using mert to maximize bleu . "in this run , we use a sentence vector derived from word embeddings obtained from word2vec .",we present the text to the encoder as a sequence of word2vec word embeddings from a word2vec model trained on the hrwac corpus . "to measure the importance of the generated questions , we use lda to identify the important sub-topics from the given body of texts .","in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts ." lda is a topic model that generates topics based on word frequency from a set of documents .,plda is an extension of lda which is an unsupervised machine learning method that models topics of a document collection . "for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .","in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit ." we use the glove vectors of 300 dimension to represent the input words .,we initialize these word embeddings with glove vectors . davidov et al utilize hashtags and smileys to build a largescale annotated tweet dataset automatically .,davidov et al describe a technique that transforms hashtags and smileys in tweets into sentiments . "negation is a linguistic phenomenon present in all languages ( cite-p-12-3-6 , cite-p-12-1-5 ) .",negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition ( cite-p-18-3-8 ) . the embeddings were trained over the english wikipedia using word2vec .,the word embeddings are pre-trained by skip-gram . the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .,the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit . shallow semantic representations can prevent the sparseness of deep structural approaches and the weakness of cosine similarity based models .,"shallow semantic representations , bearing a more compact information , could prevent the sparseness of deep structural approaches ." "we used the annotation and features available for the training set , to train the attribute detectors using a linear svm classifier .",we trained one logistic regression classifier for each emotion class using the liblinear package . the srilm language modelling toolkit was used with interpolated kneser-ney discounting .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set .,the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training . translation quality is measured in truecase with bleu on the mt08 test sets .,the translation quality is evaluated by case-insensitive bleu-4 . 
we have shown that there are two distinct ways of representing the parses of a tag .,"in particular , we show that there are two distinct ways of representing the parse forest ." "coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .",coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set . we use word vectors produced by the cbow approach ( continuous bag-of-words ) .,"we use state-of-the-art word embedding methods , namely continuous bag of words and global vectors ." our 5-gram language model was trained by srilm toolkit .,a 4-grams language model is trained by the srilm toolkit . "convolutional neural networks are useful in many nlp tasks , such as language modeling , semantic role labeling and semantic parsing .","convolutional neural networks have obtained good results in text classification , which usually consist of convolutional and pooling layers ." next we consider the context-predicting vectors available as part of the word2vec 6 project .,"in this run , we use a sentence vector derived from word embeddings obtained from word2vec ." "a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect .","a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect ( cite-p-15-3-1 ) ." semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .,"semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding ." we use the scikit-learn toolkit as our underlying implementation .,we implemented linear models with the scikit learn package . automatic alignment can be performed using different algorithms such as em or hmm-based alignment .,automatic alignment can be performed using different algorithms such as the em algorithm or using an hmm aligner . faruqui et al proposed a related approach that performs a post-processing of word embeddings on the basis of lexical relations from the same resources .,faruqui et al apply post-processing steps to existing word embeddings in order to bring them more in accordance with semantic lexicons such as ppdb and framenet . we are using word embeddings trained on google news corpus for our experiments .,we make use of the recently published word embeddings trained on google news . "for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .","within this subpart of our ensemble model , we used a svm model from the scikit-learn library ." similarity is the intrinsic ability of humans and some animals to balance commonalities and differences when comparing objects that are not identical .,similarity is a fundamental concept in theories of knowledge and behavior . "arabic is a morphologically rich language , in which a word carries not only inflections but also clitics , such as pronouns , conjunctions , and prepositions .","morphologically , arabic is a non-concatenative language ." 
"sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .","sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." "sentiment classification is a hot research topic in natural language processing field , and has many applications in both academic and industrial areas ( cite-p-17-1-16 , cite-p-17-1-12 , cite-p-17-3-4 , cite-p-17-3-3 ) .","sentiment classification is a well studied problem ( cite-p-13-3-6 , cite-p-13-1-14 , cite-p-13-3-3 ) and in many domains users explicitly provide ratings for each aspect making automated means unnecessary ." text segmentation is the task of splitting text into segments by placing boundaries within it .,text segmentation can be defined as the automatic identification of boundaries between distinct textual units ( segments ) in a textual document . gao et al described a transformationbased converter to transfer a certain annotationstyle word segmentation result to another style .,gao et al do a pioneer work by describing a transformation-based converter to transfer a certain word segmentation result to another annotation guideline . "these word representations are used in various natural language processing tasks such as part-of-speech tagging , chunking , named entity recognition , and semantic role labeling .","importantly , word embeddings have been effectively used for several nlp tasks , such as named entity recognition , machine translation and part-of-speech tagging ." "the word vectors are learned using a skip-gram model with negative sampling , implemented in the word2vec toolkit .","these word embeddings are learned in advance using a continuous skip-gram model , or other continuous word representation learning methods ." the hierarchical phrase-based model is capable of capturing rich translation knowledge with the synchronous context-free grammar .,this model shows a significant improvement over the state-of-the-art hierarchical phrase-based system . we use the moses smt toolkit to test the augmented datasets .,we trained the statistical phrase-based systems using the moses toolkit with mert tuning . we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit . relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .,relation extraction is the task of finding semantic relations between two entities from text . we translated each german sentence using the moses statistical machine translation toolkit .,we used the moses toolkit to build an english-hindi statistical machine translation system . heilman et al extended this approach and worked towards retrieving relevant reading materials for language learners in the reap 3 project .,heilman et al continued using language modeling to predict readability for first and second language texts . word embeddings are low-dimensional vector representations of words such as word2vec that recently gained much attention in various semantic tasks .,word embedding approaches like word2vec or glove are powerful tools for the semantic analysis of natural language . 
for our baseline we use the moses software to train a phrase based machine translation model .,we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data . "for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .","for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." we use binary crossentropy loss and the adam optimizer for training the nil-detection models .,"additionally , we compile the model using the adamax optimizer ." a spelling-based model that directly maps english letter sequences into arabic letters was developed by al-onaizan and knight .,al-onaizan and knight proposed a spelling-based model which directly maps english letter sequences into arabic letter sequences . "we use the seq2seq attention architecture with 2 lstm layers for both encoder and decoder , and 512 hidden nodes in each layer .","our nmt is based on an encoderdecoder with attention design , using bidirectional lstm layers for encoding and unidirectional layers for decoding ." "twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events .","twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers ." event coreference resolution is the task of determining which event mentions in a text refer to the same real-world event .,"moreover , since event coreference resolution is a complex task that involves exploring a rich set of linguistic features , annotating a large corpus with event coreference information for a new language or domain of interest requires a substantial amount of manual effort ." we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,"for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." srilm toolkit is used to build these language models .,the srilm toolkit was used to build the 5-gram language model . we test the statistical significance of differences between various mt systems using the bootstrap resampling method .,"finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences ." the word2vec is among the most widely used word embedding models today .,the most commonly used word embeddings were word2vec and glove . we use the sentiment pipeline of stanford corenlp to obtain this feature .,we use stanford corenlp for pos tagging and lemmatization . an interesting implementation to get the word embeddings is the word2vec model which is used here .,we obtain word clusters from word2vec k-means word clustering tool . the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool .,the language model was a 5-gram model with kneser-ney smoothing trained on the monolingual news corpus with irstlm . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,a 5-gram language model with kneser-ney smoothing is trained using s-rilm on the target language . 
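The paired bootstrap significance test mentioned above (Koehn, 2004) is easy to sketch: resample the test set with replacement many times and count how often one system's corpus BLEU beats the other's. A minimal sketch using NLTK's BLEU implementation (my choice of implementation, not one the excerpts name); the two-sentence outputs are placeholders for real test sets:

```python
# Minimal sketch: paired bootstrap resampling for BLEU significance.
import random
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu(hyps, refs):
    smooth = SmoothingFunction().method1  # avoids zero counts on tiny data
    return corpus_bleu([[r.split()] for r in refs],
                       [h.split() for h in hyps],
                       smoothing_function=smooth)

def paired_bootstrap(sys_a, sys_b, refs, n_samples=1000, seed=0):
    """Return the fraction of resamples in which system A outscores B."""
    rng = random.Random(seed)
    idx = list(range(len(refs)))
    wins_a = 0
    for _ in range(n_samples):
        sample = [rng.choice(idx) for _ in idx]  # resample with replacement
        if bleu([sys_a[i] for i in sample], [refs[i] for i in sample]) > \
           bleu([sys_b[i] for i in sample], [refs[i] for i in sample]):
            wins_a += 1
    return wins_a / n_samples

sys_a = ["the cat sat on the mat .", "he reads a book ."]
sys_b = ["a cat sat the mat .", "he reading book ."]
refs  = ["the cat sat on the mat .", "he is reading a book ."]
print(paired_bootstrap(sys_a, sys_b, refs, n_samples=100))
```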
"for the classification task , we use pre-trained glove embedding vectors as lexical features .",our word embeddings is initialized with 100-dimensional glove word embeddings . our baseline system was a vanilla phrase-based system built with moses using default settings .,our phrase-based mt system is trained by moses with standard parameters settings . central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context .,central to our approach is a new type-based sampling algorithm for hierarchical pitman-yor models in which we track fractional table counts . word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .,word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context . "recent years have witnessed the success of various statistical machine translation models using different levels of linguistic knowledgephrase , hiero , and syntax-based .","recent efforts in statistical machine translation have seen promising improvements in output quality , especially the phrase-based models and syntax-based models ." we use an nmt-small model from the opennmt framework for the neural translation .,our neural machine translation systems are trained using a modified version of opennmt-py . we implement classification models using keras and scikit-learn .,"for all classifiers , we used the scikit-learn implementation ." kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence .,kalchbrenner et al show that a cnn for modeling sentences can achieve competitive results in polarity classification . "for each gradient step , the step size is calculated using adagrad .",weights are optimized by the gradient-based adagrad algorithm with a mini-batch . "semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .","semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) ." the smt weighting parameters were tuned by mert using the development data .,the decoding weights were optimized with minimum error rate training . "named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .",named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding . the inversion transduction grammar of wu is a type of context-free grammar for generating two languages synchronously .,inversion transduction grammar is a synchronous grammar for synchronous parsing of source and target language sentences . we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .,we use the glove vectors of 300 dimension to represent the input words . conditional random fields are undirected graphical models that are conditionally trained .,conditional random fields are undirected graphical models represented as factor graphs . 
relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text .,relation classification is the task of identifying the semantic relation present between a given pair of entities in a piece of text . "we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .","we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word ." this study is called morphological analysis .,our method of morphological analysis comprises a morpheme lexicon . we train our neural model with stochastic gradient descent and use adagrad to update the parameters .,we train the parameters of the stages separately using adagrad with the perceptron loss function . "transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word .",transliteration is often defined as phonetic translation ( cite-p-21-3-2 ) . in our experiments we use a publicly available implementation of conditional random fields .,"we use a conditional random field sequence model , which allows for globally optimal training and decoding ." socher et al later introduced the recursive neural network architecture for supervised learning tasks such as syntactic parsing and sentiment analysis .,"socher et al used recursive neural networks to model sentences for different tasks , including paraphrase detection and sentence classification ." one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .,sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer . we used the moses toolkit for performing statistical machine translation .,"for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses ." twitter is a popular microblogging service which provides real-time information on events happening across the world .,"twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers ." we further investigated the usefulness of using lexicons using a recurrent neural network with bidirectional long short-term memory .,"finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long-short term memory network ." the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,"the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime ." "collobert and weston and collobert et al employed a deep learning framework for multi-task learning including part-of-speech tagging , chunking , namedentity recognition , language modelling and semantic role-labeling .","collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling ." we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .,"we employ srilm toolkit to linearly interpolate the target side of the training corpus with the wmt english corpus , optimizing towards the mt tuning set ." 
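Several excerpts here cite "a publicly available implementation of conditional random fields" without naming one. A minimal sketch of linear-chain CRF sequence labeling with the sklearn-crfsuite package, one such publicly available implementation; the features and the four-token training example are toys:

```python
# Minimal sketch: linear-chain CRF tagging with sklearn-crfsuite.
import sklearn_crfsuite

def features(tokens, i):
    """Toy per-token feature dict: current word, position, previous word."""
    return {
        "word": tokens[i].lower(),
        "is_first": i == 0,
        "prev_word": tokens[i - 1].lower() if i > 0 else "<bos>",
    }

train_tokens = [["John", "lives", "in", "Paris"]]
train_labels = [["B-PER", "O", "O", "B-LOC"]]

X = [[features(sent, i) for i in range(len(sent))] for sent in train_tokens]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)
print(crf.predict(X))
```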
"negation is a linguistic phenomenon present in all languages ( cite-p-12-3-6 , cite-p-12-1-5 ) .","negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( cite-p-18-3-7 ) ." "moreover , xing et al incorporated topic words into seq2seq frameworks , where topic words are obtained from a pre-trained l-da model .",xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model . it is a sequence-tosequence neural system with attention .,this system is a basic encoderdecoder with an attention mechanism . table 4 shows the bleu scores of the output descriptions .,table 1 shows the performance for the test data measured by case sensitive bleu . luong and manning proposed a hybrid scheme that consults character-level information whenever the model encounters an oov word .,luong and manning also propose an hybrid word-character model to handle the rare word problem . our baseline system is an standard phrase-based smt system built with moses .,"in all submitted systems , we use the phrase-based moses decoder ." "language modeling is trained using kenlm using 5-grams , with modified kneser-ney smoothing .",language models used modified kneserney smoothing estimated using kenlm . neural machine translation has recently become the dominant approach to machine translation .,neural machine translation using sequence to sequence architectures has become the dominant approach to automatic machine translation . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .,relation extraction is the task of detecting and characterizing semantic relations between entities from free text . "unfortunately , wordnet is a fine-grained resource , which encodes possibly subtle sense distictions .",wordnet is a general english thesaurus which additionally covers biological terms . we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." the model parameters of word embedding are initialized using word2vec .,we use the word2vec framework in the gensim implementation to generate the embedding spaces . translation model has been extensively employed in question search and has been shown to outperform the traditional ir methods significantly .,previous work consistently reported that the wordbased translation models yielded better performance than the traditional methods for question retrieval . tanev and magnini proposed a weakly supervised method that requires as training data a list of terms without context for each class under consideration .,tanev and magnini proposed a weaklysupervised method that requires as training data a list of terms without context for each category under consideration . "we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .",we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora . 
it has previously been shown that word embeddings represent the contextualised lexical semantics of words .,it has been shown that word embeddings are able to capture to certain semantic and syntactic aspects of words . we use word embeddings of dimension 100 pretrained using word2vec on the training dataset .,we use 300 dimension word2vec word embeddings for the experiments . "taking the sequence of the word representation as input , our flat ner layer enables capturing context representation by a long short-term memory layer .","to do this , we relied on a neural network with a long short-term memory layer , which is fed from the word embeddings ." lexical simplification is a subtask of the more general text simplification task which attempts at reducing the cognitive complexity of a text so that it can be ( better ) understood by a larger audience .,lexical simplification is the task of modifying the lexical content of complex sentences in order to make them simpler . we used the svm light package with a linear kernel .,we exploit the svm-light-tk toolkit for kernel computation . this problem can be alleviated by long-short term memory units .,an effective solution for these problems is the long short-term memory architecture . we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing . "however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing .","dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation ." here we use stanford corenlp toolkit to deal with the co-reference problem .,"for part of speech tagging and dependency parsing of the text , we used the toolset from stanford corenlp ." we perform the mert training to tune the optimal feature weights on the development set .,we use minimal error rate training to maximize bleu on the complete development data . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit . we adopt pretrained embeddings for word forms with the provided training data by word2vec .,"in addition to that we use pre-trained embeddings , by training word2vec skip-gram model on wikipedia texts ." in our experiments we use a publicly available implementation of conditional random fields .,our model is a first order linear chain conditional random field . sun and xu enhanced a cws model by interpolating statistical features of unlabeled data into the crfs model .,sun and xu enhanced the segmentation results by interpolating the statistics-based features derived from unlabeled data to a crfs model . we use a minibatch stochastic gradient descent algorithm together with the adam optimizer .,we use the adam optimizer and mini-batch gradient to solve this optimization problem . "coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .",coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . 
we used the logistic regression implemented in the scikit-learn library with the default settings .,for the feature-based system we used logistic regression classifier from the scikit-learn library . "in this paper , we propose a sentiment-aligned topic model ( satm ) for product aspect rating prediction .","in this paper , we proposed a sentiment aligned topic model ( satm ) for product aspect rating prediction ." results were evaluated with both bleu and nist metrics .,"results are reported on two standard metrics , nist and bleu , on lower-cased data ." "coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .",coreference resolution is a set partitioning problem in which each resulting partition refers to an entity . we used the implementation of the scikit-learn 2 module .,"for all classifiers , we used the scikit-learn implementation ." "for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .",we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting . "for this purpose , we use the moses toolkit for training translation models and decoding , as well as srilm 2 to build the language models .",we use srilm for training a trigram language model on the english side of the training corpus . word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context .,word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context . word alignment is a critical first step for building statistical machine translation systems .,word alignment is a key component in most statistical machine translation systems . stemming is a heuristic approach to reducing form-related sparsity issues .,stemming is a popular way to reduce the size of a vocabulary in natural language tasks by conflating words with related meanings . "relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .",relation extraction is the task of recognizing and extracting relations between entities or concepts in texts . "we apply the adam algorithm for optimization , where the parameters of adam are set as in .",we use the adam optimizer for the gradient-based optimization . we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . luong and manning have proposed a hybrid nmt model flexibly switching from the word-based to the character-based model .,luong and manning proposed a hybrid scheme that consults character-level information whenever the model encounters an oov word . table 1 shows the performance for the test data measured by case sensitive bleu .,table 1 shows the evaluation of all the systems in terms of bleu score with the best score highlighted . we used the penn treebank wsj corpus to perform the empirical evaluation of the considered approaches .,we used the penn treebank wsj corpus to perform empirical experiments on the proposed parsing models . 
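The "scikit-learn logistic regression with default settings" recipe quoted above is a few lines end to end. A minimal sketch with a TF-IDF front end (my choice of featurizer; the excerpts do not specify one) and a two-document toy corpus:

```python
# Minimal sketch: default-settings scikit-learn logistic regression
# text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie , loved it", "terrible plot , wasted time"]
labels = ["pos", "neg"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved the plot"]))
```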
for training the translation model and for decoding we used the moses toolkit .,"for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses ." our model appears to be able to do well also on recognizing non-overlapping mentions .,"in this paper , we propose a new model that is capable of recognizing overlapping mentions ." bilingual lexicons serve as an indispensable source of knowledge for various cross-lingual tasks such as cross-lingual information retrieval or statistical machine translation .,"bilingual lexicons are fundamental resources in multilingual natural language processing tasks such as machine translation , cross-language information retrieval or computer-assisted translation ." we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .,for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words . "hassan and menezes proposed an approach for normalizing social media text which used random walk framework on a contextual similarity bipartite graph constructed from n-gram sequences , which they interpolated with edit distance .","hassan and menezes proposed an approach based on the random walk algorithm on a contextual similarity bipartite graph , constructed from n-gram sequences on a large unlabeled text corpus ." the weights of the different feature functions were tuned by means of minimum error-rate training executed on the europarl development corpus .,the model weights of all systems have been tuned with standard minimum error rate training on a concatenation of the newstest2011 and newstest2012 sets . "we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .","for this reason , we used glove vectors to extract the vector representation of words ." the translation quality is evaluated by case-insensitive bleu-4 .,the quality of translations is evaluated by the case insensitive nist bleu-4 metric . bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history .,bengio et al presented a neural network language model where word embeddings are simultaneously learned along with a language model . the evaluation metric is case-sensitive bleu-4 .,the evaluation method is the case insensitive ibm bleu-4 . li and yarowsky proposed an unsupervised method extracting the relation between a full-form phrase and its abbreviation from monolingual corpora .,li and yarowsky proposed an unsupervised method for extracting the mappings from chinese abbreviations and their full-forms . the weights are optimized over the bleu metric .,evaluation is done using the bleu metric with four references . dependency parsing is a topic that has engendered increasing interest in recent years .,dependency parsing is the task of predicting the most probable dependency structure for a given sentence . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing . transition-based and graph-based models have attracted the most attention of dependency parsing in recent years .,transition-based methods have become a popular approach in multilingual dependency parsing because of their speed and performance . 
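"Case-insensitive BLEU-4", quoted several times above, amounts to lowercasing both sides before computing 4-gram BLEU. A minimal sketch using NLTK's implementation (my choice; the excerpts used various scorers), including the multiple-references case:

```python
# Minimal sketch: case-insensitive BLEU-4 with NLTK.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

hyps = ["The cat sat on the mat ."]
refs = [["the cat sat on the mat .", "a cat was sitting on the mat ."]]

# Lowercase to make the score case-insensitive, then tokenize by whitespace.
hyp_tokens = [h.lower().split() for h in hyps]
ref_tokens = [[r.lower().split() for r in rs] for rs in refs]

# Default weights give 4-gram BLEU; smoothing keeps toy examples non-zero.
print(corpus_bleu(ref_tokens, hyp_tokens,
                  smoothing_function=SmoothingFunction().method1))
```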
there are hand-crafted semantic frames in the lexicons of framenet and propbank .,"recently , large corpora have been manually annotated with semantic roles in framenet and propbank ." "for the language model we use the corpus of 60,000 simple english wikipedia articles 3 and build a 3-gram language model with kneser-ney smoothing trained with srilm .","thus , we train a 4-gram language model based on kneser-ney smoothing method using sri toolkit and interpolate it with the best rnnlms by different weights ." richman and schone used english linguistic tools and cross language links in wikipedia to automatically annotate text in different languages .,richman and schone use article classification knowledge from english wikipedia to produce ne-annotated corpora in other languages . "we employ moses , an open-source toolkit for our experiment .","we use moses , an open source toolkit for training different systems ." the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . "on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .","for the cluster-based method , we use word2vec 2 which provides the word vectors trained on the google news corpus ." part-of-speech ( pos ) tagging is a fundamental task in natural language processing .,"part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications ." we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .,we use srilm for n-gram language model training and hmm decoding . language models were built using the srilm toolkit 16 .,the trigram language model is implemented in the srilm toolkit . the cbow model introduced in mikolov et al learns vector representations using a neural network architecture by trying to predict a target word given the words surrounding it .,the continuous bag-of-words approach described by mikolov et al is learned by predicting the word vector based on the context vectors . word embeddings have proven to be effective models of semantic representation of words in various nlp tasks .,high quality word embeddings have been proven helpful in many nlp tasks . we implement classification models using keras and scikit-learn .,we use the scikit-learn machine learning library to implement the entire pipeline . "the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit .","in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit ." "to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .","for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." neural machine translation has recently gained popularity in solving the machine translation problem .,neural machine translation is currently the state-of-the-art paradigm for machine translation . "for training our system classifier , we have used scikit-learn .",we use scikit-learn as machine learning library . we used the phrase-based translation system in moses 5 as a baseline smt system .,we used moses with the default configuration for phrase-based translation . 
"thus , event extraction is a difficult task and requires substantial training data .",event extraction is a particularly challenging type of information extraction ( ie ) . "riedel et al , 2010 ) made the at-least-once assumption that led the distant supervision for relation extraction to multi-instance learning .",riedel et al proposed to use multi-instance learning to tolerate noise in the positively-labeled data . aspect extraction is a key task of opinion mining ( cite-p-15-1-14 ) .,"aspect extraction is a task to abstract the common properties of objects from corpora discussing them , such as reviews of products ." "word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .",word alignment is the task of identifying corresponding words in sentence pairs . "for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword .","we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens ." we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .,"we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens ." word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .,"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." coreference resolution is the task of grouping mentions to entities .,coreference resolution is the task of identifying all mentions which refer to the same entity in a document . "in all cases , we used the implementations from the scikitlearn machine learning library .","for all classifiers , we used the scikit-learn implementation ." we use srilm for training a trigram language model on the english side of the training data .,we also use a 4-gram language model trained using srilm with kneser-ney smoothing . the srilm toolkit was used to build the trigram mkn smoothed language model .,"a trigram model was built on 20 million words of general newswire text , using the srilm toolkit ." we use word2vec as the vector representation of the words in tweets .,we learn our word embeddings by using word2vec 3 on unlabeled review data . "cussens and pulman used a symbolic approach employing inductive logic programming , while erbach , barg and walther and fouvry followed a unificationbased approach .",cussens and pulman describe a symbolic approach which employs inductive logic programming and barg and walther and fouvry follow a unification-based approach . our baseline is a phrase-based mt system trained using the moses toolkit .,we use a pbsmt model built with the moses smt toolkit . relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .,relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments . coreference resolution is the task of determining which mentions in a text refer to the same entity .,coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . 
"in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks .","in the early part of the last decade , phrase-based machine translation emerged as the preeminent design of statistical mt systems ." the srilm language modelling toolkit was used with interpolated kneser-ney discounting .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .,coreference resolution is the process of linking together multiple referring expressions of a given entity in the world . the approach relies on the assumption that the term and its translation appear in similar contexts .,"in the second stage , we use this assumption that a word and its translation tend to appear in similar context across languages ." "we utilize the google news dataset created by mikolov et al , which consists of 300-dimensional vectors for 3 million words and phrases .","we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news ." dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .,"dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them ." "we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit .",for the feature-based system we used logistic regression classifier from the scikit-learn library . we use kaldi speech recognition toolkit to train our acoustic models .,we train our acoustic models by kaldi speech recognition toolkit . coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification .,zeng et al propose the use of position feature for improving the performance of cnn in relation classification . bilmes and kirchhoff proposed a more general framework for n-gram language modelling .,a more linguistically-informed approach to n-gram models is the factored language model approach of bilmes and kirchhoff . word alignment is the task of identifying word correspondences between parallel sentence pairs .,word alignment is a critical first step for building statistical machine translation systems . "for feature building , we use word2vec pre-trained word embeddings .",we use word2vec to train the word embeddings . we built a linear svm classifier using svm light package .,"we used svm-light-tk , which enables the use of the partial tree kernel ." "we used implementations from scikitlearn , and the parameters of both classifiers were tuned on the development set using grid search .","in all cases , we used the implementations from the scikitlearn machine learning library ." 
"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity .",coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . the translation quality is evaluated by case-insensitive bleu-4 .,translation results are evaluated using the word-based bleu score . "in this paper , we propose a novel branch and bound ( b & b ) algorithm for efficient parsing .",in this paper we proposed a new parsing algorithm based on a branch and bound framework . "in this paper , we present a novel method that enhances authorship attribution effectiveness .","in this paper , we propose a novel method that is based on text distortion to compress topic-related information ." we used the sri language modeling toolkit for this purpose .,we implement an in-domain language model using the sri language modeling toolkit . "more recently , mikolov et al showed that word vectors could be added or subtracted to isolate certain semantic and syntactic features .",mikolov et al showed that meaningful syntactic and semantic regularities can be captured in pre-trained word embedding . we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora .,"for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b ." "much current work in discourse parsing focuses on the labelling of discourse relations , using data from the penn discourse treebank .",the release of the penn discourse treebank has advanced the development of english discourse relation recognition . we use pre-trained 100 dimensional glove word embeddings .,we used 100 dimensional glove embeddings for this purpose . ngram features have been generated with the srilm toolkit .,a 4-grams language model is trained by the srilm toolkit . "barman et al , 2014 ) addressed the problem of language identification on bengali-hindi-english facebook comments .",barman et al addressed the problem of language identification on bengali-hindi-english facebook comments . "if the anaphor is a definite noun phrase and the referent is in focus ( i.e . in the cache ) , anaphora resolution will be hindered .","the anaphor is a definite noun phrase and the referent is in focus , that is ." to do this we examine the dataset created for the english lexical substitution task in semeval .,"for evaluation , we use the dataset from the semeval-2007 lexical substitution task ." the language models in this experiment were trigram models with good-turing smoothing built using srilm .,the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit . we use srilm train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,"incometo select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus ." "for support vector learning , we use svm-light and svm-multiclass .",in the experiments reported here we use support vector machines through the svm light package . 
"for all baselines we used the phrase-based statistical machine translation system moses , with the default model features , weighted in a log-linear framework .","we used the phrase-based model moses for the experiments with all the standard settings , including a lexicalized reordering model , and a 5-gram language model ." semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .,semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries . the smt systems were built using the moses toolkit .,we use the moses software package 5 to train a pbmt model . "recently , the field has been influenced by the success of neural language models .","more recently , neural networks have become prominent in word representation learning ." "pun is a way of using the characteristics of the language to cause a word , a sentence or a discourse to involve two or more different meanings .",a pun is the exploitation of the various meanings of a word or words with phonetic similarity but different meanings . a 4-gram language model is trained on the monolingual data by srilm toolkit .,"furthermore , we train a 5-gram language model using the sri language toolkit ." we use the adam optimizer with its default parameters and a mini-batch size of 32 .,we train the model using the adam optimizer with the default hyper parameters . word segmentation is the first step prior to word alignment for building statistical machine translations ( smt ) on language pairs without explicit word boundaries such as chinese-english .,"therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation ." we measure machine translation performance using the bleu metric .,we measure the translation quality using a single reference bleu . we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization .,"for the classification task , we use pre-trained glove embedding vectors as lexical features ." we also report the results using bleu and ter metrics .,"we report decoding speed and bleu score , as measured by sacrebleu ." "we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word .",we calculate cosine similarity using pretrained glove word vectors 7 to find similar words to the seed word . the english data representation was done using tokenizer 6 and glove pretrained word vectors .,"we also used pre-trained word embeddings , including glove and 300d fasttext vectors ." "to tackle this issue , we leverage pretrained word embeddings , specifically the 300 dimension glove embeddings trained on 42b tokens of external text corpora .","for this , we utilize the publicly available glove 1 word embeddings , specifically ones trained on the common crawl dataset ." coreference resolution is the task of grouping mentions to entities .,coreference resolution is a set partitioning problem in which each resulting partition refers to an entity . "to train the parsing models , while we use subtree-based features .","finally , we construct new subtree-based features for parsing algorithms ." the english side of the parallel corpus is trained into a language model using srilm .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . 
jtt is an lda-style model that is trained jointly on source and target documents linked by browsing transitions .,lda is a generative model that learns a set of latent topics for a document collection . we use the earley algorithm with cube-pruning for the string-to-amr parsing .,we use the cube pruning method to approximately intersect the translation forest with the language model . we have implemented a hierarchical phrase-based smt model similar to chiang .,we used an in-house implementation of the hierarchical phrase-based decoder as described in chiang . the model parameters of word embedding are initialized using word2vec .,descriptions are transformed into a vector by adding the corresponding word2vec embeddings . our results also show that incorporating and exploiting more information from the target domain is much more useful for improving performance than excluding misleading training .,our empirical results on three nlp tasks show that incorporating and exploiting more information from the target domain through instance weighting is effective . these models were implemented using the package scikit-learn .,these supervised learning methods are implemented in scikit-learn toolkit . the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,"the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit ." we used 300-dimensional pre-trained glove word embeddings .,we use pre-trained 100 dimensional glove word embeddings . the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." "word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research , for example , ( cite-p-17-1-0 , cite-p-17-1-8 , cite-p-17-1-4 ) , including work leveraging syntactic parse trees , e.g. , ( cite-p-17-1-1 , cite-p-17-1-2 , cite-p-17-1-3 ) .","word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 ) ." "in particular , we created standard trigram language models from the written training data without making use of concurrent perceptual context information using srilm .",we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora . "the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit .",the srilm toolkit was used for training the language models using kneser-ney smoothing . "for each target language , we used the srilm toolkit to estimate separate 4-gram lms with kneser-ney smoothing , for each of the corpora listed in tables 3 , 4 and 5 .","for the fst representation , we used the opengrm-ngram language modeling toolkit and used an n-gram order of 4 , with kneser-ney smoothing ." turney and littman determined the semantic orientation of a target word t by comparing its association with two seed sets of manually crafted target words .,turney and littman determined the polarity of sentiment words by estimating the point-wise mutual information between sentiment words and a set of seed words with strong polarity . 
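Discovering latent topics with LDA, as the excerpts above describe, is compact with gensim. A minimal sketch; the four toy documents stand in for a real collection:

```python
# Minimal sketch: latent topic discovery with gensim's LDA.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["translation", "model", "phrase", "decoder"],
    ["translation", "bleu", "evaluation", "score"],
    ["topic", "model", "document", "word"],
    ["topic", "latent", "dirichlet", "allocation"],
]
dictionary = corpora.Dictionary(docs)          # word <-> id mapping
bow = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words corpus

lda = LdaModel(bow, num_topics=2, id2word=dictionary,
               passes=10, random_state=0)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```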
table 5 shows the bleu and per scores obtained by each system .,table 4 shows end-to-end translation bleu score results . the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . the translation quality is evaluated by case-insensitive bleu-4 metric .,translation performance was measured by case-insensitive bleu . notable discriminative approaches are conditional random fields and structural svm .,"among these models , neural variants of the conditional random fields model are especially popular ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . translation performances are measured with case-insensitive bleu4 score .,the evaluation metric for the overall translation quality is case-insensitive bleu4 . we evaluate our models using the standard bleu metric 2 on the detokenized translations of the test set .,"we follow the standard machine translation procedure of evaluation , measuring bleu for every system ." the bleu score measures the agreement between a hypothesis e_1^I generated by the mt system and a reference translation ê_1^Î .,the bleu score measures the precision of n-grams with respect to a reference translation with a penalty for short translations . we extract the named entities from the web pages using the stanford named entity recognizer .,we perform named entity tagging using the stanford four-class named entity tagger . "for the classification task , we use pre-trained glove embedding vectors as lexical features .",we use pre-trained glove vector for initialization of word embeddings . "sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of a review .",sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express . "in this paper , we propose another phrase-level combination approach – a paraphrasing model .","in this paper , we propose a paraphrasing model to address the task of system combination for machine translation ." we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting . the word embeddings were obtained using word2vec 2 tool .,"for cos , we used the cbow model 6 of word2vec ." we built a 5-gram language model from it with the sri language modeling toolkit .,our 5-gram language model was trained by srilm toolkit . "stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against .","stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target ." semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) .,semantic parsing is the task of mapping natural language sentences to complete formal meaning representations .
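For reference, the score these pairs keep returning to is conventionally defined, with p_n the modified n-gram precisions, weights w_n usually 1/N for N = 4, c the candidate length, and r the reference length, as:

    \mathrm{BLEU} = \mathrm{BP} \cdot \exp\Big( \sum_{n=1}^{N} w_n \log p_n \Big),
    \qquad \mathrm{BP} = \min\big( 1,\; e^{1 - r/c} \big)

The brevity penalty BP is what the second sentence of the pair calls the "penalty for short translations".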
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . "language modeling is trained using kenlm using 5-grams , with modified kneser-ney smoothing .",language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing . we use an attention-augmented architecture with a bi-directional lstm as encoder .,we use a standard lstm-based bidirectional encoder-decoder architecture with global attention . language models are built using the sri-lm toolkit .,a 4-grams language model is trained by the srilm toolkit . language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .,the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit . "we run skip-gram model on training dataset , and use the obtained word vector to initialize the word embedding part of model input .",we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . "sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .","sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing .,the language model is a trigram model with modified kneser-ney discounting and interpolation . a pseudoword is a composite comprised of two or more words chosen at random ; the individual occurrences of the original words within a text are replaced by their conflation .,a pseudo-word is the concatenation of two words ( e.g . house/car ) . "in all cases , we use a support vector machine approach to training the model , using the smo implementation found in weka , using a linear polynomial kernel and default settings .","for classification , we use a maximum entropy model , from the logistic regression package in weka , with all default parameter settings ." lei et al introduce a syntactic dependency parser using a low-rank tensor component for scoring dependency edges .,lei et al proposed to learn features by representing the cross-products of some primitive units with low-rank tensors for dependency parsing . passing additional information to a neural network via word-attached features was first introduced by collobert et al as a way to add linguistic annotation for various nlp tasks using feed-forward and convolutional networks .,"the idea of extracting features for nlp using convolutional dnn was previously explored by collobert et al , in the context of pos tagging , chunking , named entity recognition and semantic role labeling ." a 4-gram language model was trained on the monolingual data by the srilm toolkit .,the srilm toolkit was used to build the 5-gram language model . we used the srilm toolkit and kneser-ney discounting for estimating 5-grams lms .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . 
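Most of the language models cited in these pairs are n-gram models with modified Kneser-Ney smoothing stored as ARPA files. Assuming such a file has already been built offline (e.g. with SRILM's ngram-count or KenLM's lmplz), scoring with the kenlm Python bindings looks roughly like this; the model path is a placeholder:

    import kenlm  # Python bindings for KenLM, assumed installed

    # assumed built offline, e.g. with SRILM:
    #   ngram-count -order 5 -text corpus.txt -lm model.arpa -kndiscount -interpolate
    model = kenlm.Model("model.arpa")  # hypothetical path
    logprob = model.score("this is a test sentence", bos=True, eos=True)  # log10 probability
    ppl = model.perplexity("this is a test sentence")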
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) .,coreference resolution is the task of determining when two textual mentions name the same individual . we used srilm to build a 4-gram language model with kneser-ney discounting .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . "later , ji and grishman employed a rule-based approach to propagate consistent triggers and arguments across topic-related documents .",ji and grishman extended the one sense per discourse idea to multiple topically related documents and propagate consistent event arguments across sentences and documents . relation extraction is the task of detecting and classifying relationships between two entities from text .,"relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) ." "such models have been very frequently used in question-answering tasks and lee et al , machine translation , and many other nlp applications .",the attention strategies have been widely used in machine translation and question answering . sentiment analysis is a recent attempt to deal with evaluative aspects of text .,"sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." coreference resolution is the process of linking multiple mentions that refer to the same entity .,"coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model ." twitter is a social platform which contains rich textual content .,"among them , twitter is the most popular service by far due to its ease for real-time sharing of information ." coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .,coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . "sentence compression is a paraphrasing task aimed at generating sentences shorter than the given ones , while preserving the essential content .","sentence compression is the task of producing a shorter form of a single given sentence , so that the new form is grammatical and retains the most important information of the original one ." "and we pretrain the chinese word embeddings on a huge unlabeled data , the chinese wikipedia corpus , with word2vec toolkit .",we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data . "despite being relatively simple , this baseline has been previously used as a point of comparison by other unsupervised semantic role labeling systems and shown difficult to outperform .",this baseline has been previously used as a point of comparison by other unsupervised semantic role induction systems and shown difficult to outperform . systems that jointly annotate syntactic and semantic dependencies were introduced in the past conll-2008 shared task .,the conll 2008-2009 shared tasks introduced a variant where semantic dependencies are annotated rather than phrasal arguments . 
"unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are difficult to recognize even for human annotators ( cite-p-13-1-2 ) .","unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are often difficult to recognize even for human annotators ( cite-p-15-1-6 ) ." "neural models , with various neural architectures , have recently achieved great success .","more recently , neural networks have become prominent in word representation learning ." we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization .,we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . parameter optimization is performed with the diagonal variant of adagrad with minibatchs .,"following , we minimize the objective by the diagonal variant of adagrad with minibatchs ." word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .,word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context . the model weights are automatically tuned using minimum error rate training .,the log-linear parameter weights are tuned with mert on the development set . "convolutional neural networks are useful in many nlp tasks , such as language modeling , semantic role labeling and semantic parsing .","monolingual word embeddings have facilitated advances in many natural language processing tasks , such as natural language understanding , sentiment analysis , and dependency parsing ." faruqui et al demonstrated that embeddings learned without supervision can be retro-fitted to better conform to some semantic lexicon .,faruqui et al apply post-processing steps to existing word embeddings in order to bring them more in accordance with semantic lexicons such as ppdb and framenet . we use an in-house implementation of a pbsmt system similar to moses .,we use the moses toolkit to train our phrase-based smt models . "in this and our other n-gram models , we used kneser-ney smoothing .",we use 5-gram models with modified kneser-ney smoothing and interpolated back-off . "in this paper , we propose a novel hl-sot approach to labeling a product ’ s attributes and their associated sentiments in product reviews .","in this paper , we propose a novel and effective approach to sentiment analysis on product reviews ." this is motivated by the fact that multi-task learning has shown to be beneficial in several nlp tasks .,"also called deep learning , such approaches have recently been applied in a number of nlp tasks ." in this work we use the open-source toolkit moses .,we implement the pbsmt system with the moses toolkit . the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model .,a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data . coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .,coreference resolution is the task of identifying all mentions which refer to the same entity in a document . 
"n-gram features were based on language models of order 5 , built with the srilm toolkit on monolingual training material from the europarl and the news corpora .","the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit ." "at the document level , we find satirical news generally contain paragraphs which are more complex than true news .",we observe that satirical cues are often reflected in certain paragraphs rather than the whole document . "we report bleu and ter on tokenized output , as computed by multeval .","we report decoding speed and bleu score , as measured by sacrebleu ." table 4 shows the bleu scores of the output descriptions .,"automatic evaluation results are shown in table 1 , using bleu-4 ." li et al recently proposed a joint detection method to detect both triggers and arguments using a structured perceptron model .,li et al presented a structured perceptron model to detect triggers and arguments jointly . korhonen et al performed a clustering experiment with highly polysemous verbs .,korhonen et al used verb-frame pairs to cluster verbs relying on the information bottleneck . we use scikit learn python machine learning library for implementing these models .,for the classifiers we use the scikit-learn machine learning toolkit . we use word vectors produced by the cbow approach-continuous bagof-words .,"here , we choose the skip-gram model and continuous-bag-of-words model for comparison with the lbl model ." the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .,we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . word segmentation is the first step prior to word alignment for building statistical machine translations ( smt ) on language pairs without explicit word boundaries such as chinese-english .,"therefore , word segmentation is a preliminary and important preprocess for chinese language processing ." coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .,coreference resolution is the task of determining when two textual mentions name the same individual . we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .,for the feature-based system we used logistic regression classifier from the scikit-learn library . lda is a probabilistic model that can be used to model and discover underlying topic structures of documents .,"the benchmark model for topic modelling is latent dirichlet allocation , a latent variable model of documents ." "we compute the syntactic features only for pairs of event mentions from the same sentence , using the stanford dependency parser .","for english , we convert the ptb constituency trees to dependencies using the stanford dependency framework ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,"gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting ." performance of a simple baseline model can be improved significantly if long-range dependencies are also captured .,these results demonstrate that this model benefits greatly from the inclusion of long-range dependencies . 
"in this paper , we propose a broad-coverage normalization system by integrating three human perspectives , including the enhanced letter .","in this paper , we propose a broad-coverage normalization system for the social media language without using the human annotations ." an hierarchical phrase-based model is a powerful method to cover any format of translation pairs by using synchronous context free grammar .,hiero is a hierarchical phrase-based statistical mt framework that generalizes phrase-based models by permitting phrases with gaps . we represent input words using pre-trained glove wikipedia 6b word embeddings .,"for the mix one , we also train word embeddings of dimension 50 using glove ." we trained a support vector machine with rbf kernel per temporal span using scikit-learn and tuned svm parameters using 5-fold crossvalidation with the training set .,"simulating the approach reported by , we trained a support vector machine for regression with rbf kernel using scikit-learn with the set of features ." "semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols .",semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation . dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .,dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages . coreference resolution is the task of grouping mentions to entities .,coreference resolution is the process of linking together multiple expressions of a given entity . "as a further test , we ran the stanford parser on the queries to generate syntactic parse trees .","we obtained parse trees using the stanford parser , and used jacana for word alignment ." we use a shared subword vocabulary by applying byte-pair encoding to the data for all variants concatenated .,we use byte-pair-encoding to achieve openvocabulary translation with a fixed vocabulary of subword symbols . we use the adaptive gradients method for weight updates and averaging of the weight vector .,we use a minibatch stochastic gradient descent algorithm together with an adagrad optimizer . "we use bleu 2 , ter 3 and meteor 4 , which are the most-widely used mt evaluation metrics .","we use bleu , rouge , and meteor scores as automatic evaluation metrics ." the word representations are generated based on the co-occurrence count modeling using stanford glove tool .,the word vectors of vocabulary words are trained from a large corpus using the glove toolkit . the model weights are automatically tuned using minimum error rate training .,the weights of the different feature functions were optimised by means of minimum error rate training . we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .,we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . "twitter is a communication platform which combines sms , instant messages and social networks .","twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them ." "then , we trained word embeddings using word2vec .",we use word2vec from as the pretrained word embeddings . 
"although the itg constraint allows more flexible reordering during decoding , zens and ney showed that the ibm constraint results in higher bleu scores .",zens and ney show that itg constraints allow a higher flexibility in word ordering for longer sentences than the conventional ibm model . "coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities .",coreference resolution is a well known clustering task in natural language processing . snow et al demonstrated that annotations by crowdworkers have almost identical quality with those by experts in various nlp tasks .,"for annotation tasks , snow et al showed that crowdsourced annotations are similar to traditional annotations made by experts ." semantic parsing is the mapping of text to a meaning representation .,semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance . "we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word .","we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word ." "he et al proposed a method to find bursts , periods of elevated occurrence of events as a dynamic phenomenon instead of focusing on arrival rates .","he and parket attempted to find bursts , periods of elevated occurrence of events as a dynamic phenomenon instead of focusing on arrival rates ." "sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .",sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) . the state-of-the-art techniques of statistical machine translation demonstrate good performance on translation of languages with relatively similar word orders .,phrase-based translation systems prove to be the stateof-the-art as they have delivered translation performance in recent machine translation evaluations . minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set .,we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu . we train randomly initialized word embeddings of size 500 for the dialog model and use 300 dimentional glove embeddings for reranking classifiers .,we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors . the minimum error rate training was used to tune the feature weights .,the standard minimum error rate training algorithm was used for tuning . we use word embeddings of dimension 100 pretrained using word2vec on the training dataset .,we train the word embeddings through using the training and developing sets of each dataset with word2vec tool . we use srilm for n-gram language model training and hmm decoding .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . "a modified kn model , termed p , was estimated on the training set count files and applied to the test set using srilm , the sri language modeling toolkit .",the target fourgram language model was built with the english part of training data using the sri language modeling toolkit . 
"for all the experiments below , we utilize the pretrained word embeddings word2vec from mikolov et al to initialize the word embedding table .","for dmcnn , following the settings of previous work , we use the pre-trained word embeddings learned by skip-gram as the initial word embeddings ." we use the stanford named entity recognizer to identify named entities in s and t .,we extract the named entities from the web pages using the stanford named entity recognizer . "we used moses , a phrase-based smt toolkit , for training the translation model .",we used a standard pbmt system built using moses toolkit . semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .,semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence . coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .,coreference resolution is the process of linking together multiple expressions of a given entity . we use pre-trained glove vector for initialization of word embeddings .,we use the glove pre-trained word embeddings for the vectors of the content words . "for decoding , we used moses with the default options .","we used the moses decoder , with default settings , to obtain the translations ." "taxonomies are widely used for knowledge standardization , knowledge sharing , and inferencing in natural language processing tasks .","taxonomies are useful tools for content organisation , navigation , and retrieval , providing valuable input for semantically intensive tasks such as question answering and textual entailment ." mihalcea et al defines a measure of text semantic similarity and evaluates it in an unsupervised paraphrase detector on this data set .,mihalcea et al use both corpusbased and knowledge-based measures of the semantic similarity between words . hiero is a hierarchical system that expresses its translation model as a synchronous context-free grammar .,"our baseline system is based on a hierarchical phrase-based translation model , which can formally be described as a synchronous context-free grammar ." "nenkova et al proposed a score to evaluate the lexical entrainment in highly frequent words , and found that the score has high correlation with task success and engagement .",nenkova et al noted that the entrainment score between dialogue partners is higher than the entrainment score between non-partners in dialogue . "for word embeddings , we used popular pre-trained word vectors from glove .","for the word-embedding based classifier , we use the glove pre-trained word embeddings ." the rules are extracted from the trees generated by the stanford dependency parser for the candidate sentences of our corpora .,we apply the rules to each sentence with its dependency tree structure acquired from the stanford parser . we implemented the different aes models using scikit-learn .,we feed our features to a multinomial naive bayes classifier in scikit-learn . coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .,coreference resolution is the task of identifying all mentions which refer to the same entity in a document . 
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .,"word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 ." bengio et al presented a neural network language model where word embeddings are simultaneously learned along with a language model .,"in 2003 , bengio et al proposed a neural network architecture to train language models which produced word embeddings in the neural network ." the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique .,a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit . "for evaluating the effectiveness of our approach , we perform language modeling over the penn treebank dataset .","for training and evaluating the itsg parser , we employ the penn wsj treebank ." "we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word .",we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .,"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,a 4-gram language model was trained on the monolingual data by the srilm toolkit . the log-linear parameter weights are tuned with mert on a development set to produce the baseline system .,the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit . semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text .,semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence . regarding svm we used linear kernels implemented in svm-light .,"as a classifier , we employ support vector machines as implemented in svm light ." we use a count-based distributional semantics model and the continuous bag-of-words model to learn word vectors .,the continuous bag-of-words approach described by mikolov et al is learned by predicting the word vector based on the context vectors . we extract the 4096-dimension fully-connected layer of the 19-layer vggnet as the vector representation of images .,"for the image labels , we use the representation of the last layer of the vgg neural network ." "pitler et al use several linguistically informed features , including polarity tags , levin verb classes and length of verb phrases .","pitler et al demonstrated that features developed to capture word polarity , verb classes and orientation , as well as some lexical features are strong indicator of the type of discourse relation ." "twitter 1 is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages .",twitter is a popular microblogging service which provides real-time information on events happening across the world .
phrases are extracted using standard phrase-based heuristics and used to build a translation table and lexicalized reordering model .,the phrase-based translation systems rely on language model and lexicalized reordering model to capture lexical dependencies that span phrase boundaries . "for example , turian et al used word embeddings as input features for several nlp systems , including a traditional chunking system based on conditional random fields .","for example , turian et al have improved the performance of chunking and named entity recognition by using word embedding also as one of the features in their crf model ." we used the bleu score to evaluate the translation accuracy with and without the normalization .,"to measure the translation quality , we use the bleu score and the nist score ." "we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .","for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." the weights for these features are optimized using mert .,the log-linear feature weights are tuned with minimum error rate training on bleu . "we used moses , a state-of-the-art phrase-based smt model , in decoding .",we used a phrase-based smt model as implemented in the moses toolkit . we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .,we apply the 3-phase learning procedure proposed by where we first create word embeddings based on the skip-gram model . semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries .,"semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures ." dependency relations have been extracted running the stanford parser .,dependency parses are obtained from the stanford parser . we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu .,we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training . "this paper described a simple pattern-matching algorithm for restoring empty nodes in parse trees that do not contain them , and appropriately .",this paper describes a simple pattern-matching algorithm for post-processing the output of such parsers to add a wide variety of empty nodes to its parse trees . the translation quality is evaluated by case-insensitive bleu and ter metric .,translation quality is measured by case-insensitive bleu on newstest13 using one reference translation . we use the k-best batch mira to tune mt systems .,"for tuning the feature weights , we applied batch-mira with -safe-hope ." the evaluation metric is casesensitive bleu-4 .,the translation quality is evaluated by case-insensitive bleu-4 . ji and grishman extended the scope from a single document to a cluster of topic-related documents and employed a rule-based approach to propagate consistent trigger classification and event arguments across sentences and documents .,ji and grishman extended the one sense per discourse idea to multiple topically related documents and propagate consistent event arguments across sentences and documents . 
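The feature weights tuned by minimum error rate training throughout these pairs are the λ_m of the standard log-linear translation model, in which the decoder searches for

    \hat{e} = \operatorname*{arg\,max}_{e} \sum_{m=1}^{M} \lambda_m h_m(e, f)

over feature functions h_m of the source sentence f and candidate translation e; MERT adjusts the λ_m so that the decoder's output maximizes BLEU (or another metric) on a development set.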
"previous work on the relation between dms and drs is mostly based on corpora annotated with drs , most notably the penn discourse treebank for english .","much current work in discourse parsing focuses on the labelling of discourse relations , using data from the penn discourse treebank ." "to keep consistent , we initialize the embedding weight with pre-trained word embeddings .",we use the glove pre-trained word embeddings for the vectors of the content words . we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation .,we use liblinear logistic regression module to classify document-level embeddings . pdtb is drawn from wall street journal articles with overlapping annotations with the penn treebank .,the conll dataset is taken form the wall street journal portion of the penn treebank corpus . semantic parsing is the task of converting natural language utterances into formal representations of their meaning .,semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application . "with this induced model , we perform word alignment between languages l1 and l2 .","in addition , we build another word alignment model for l1 and l2 using the small l1-l2 bilingual corpus ." in clark and curran we investigate several log-linear parsing models for ccg .,in clark and curran we describe efficient methods for performing the calculations using packed charts . this task specifically focuses on the identification of hypernym-hyponym relation among terms in four different languages .,this task focuses only on the hypernym-hyponym relation extraction from a list of terms collected from various domains and languages . we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove .,"for the word-embedding based classifier , we use the glove pre-trained word embeddings ." word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .,"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." "as we know , document summarization is a very useful means for people to quickly read and browse news articles in the big data era .","document summarization is a task to generate a fluent , condensed summary for a document , and keep important information ." we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,"for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." we used latent dirichlet allocation to construct our topics .,the core machinery of our system is driven by a latent dirichlet allocation topic model . "relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .",relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . 
"word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .",word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) . the experiment was set up and run using the scikit-learn machine learning library for python .,the evaluations were performed with scikit-learn using the skll toolkit 6 that makes it easy to run batch scikit-learn experiments . we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .,we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . grammar induction is the task of learning grammatical structure from plain text without human supervision .,grammar induction is the task of learning a grammar from a set of unannotated sentences . "coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .",coreference resolution is the task of determining when two textual mentions name the same individual . "for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .","furthermore , we train a 5-gram language model using the sri language toolkit ." "in this research , we use the pre-trained google news dataset 2 by word2vec algorithms .",we use 300-dimensional vectors that were trained and provided by word2vec tool using a part of the google news dataset 4 . "for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .",we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,"furthermore , we train a 5-gram language model using the sri language toolkit ." we report bleu scores computed using sacrebleu .,"we report decoding speed and bleu score , as measured by sacrebleu ." social media is a valuable source for studying health-related behaviors ( cite-p-11-1-8 ) .,social media is a rich source of rumours and corresponding community reactions . phonetic translation across these pairs is called transliteration .,transliteration is often defined as phonetic translation ( cite-p-21-3-2 ) . the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .,these features were optimized using minimum error-rate training and the same weights were then used in docent . "therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation .",dependency parsing is the task of predicting the most probable dependency structure for a given sentence . high quality word embeddings have been proven helpful in many nlp tasks .,word embeddings are critical for high-performance neural networks in nlp tasks . in this paper we presented a new corpus for context-dependent semantic parsing .,"in this paper we present a new , publicly available corpus for context-dependent semantic parsing ." semantic parsing is the problem of mapping natural language strings into meaning representations .,semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation . 
"for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver .","as our supervised classification algorithm , we use a linear svm classifier from liblinear , with its default parameter settings ." word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context .,word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text . we measure machine translation performance using the bleu metric .,we compute the interannotator agreement in terms of the bleu score . we trained the classifiers for relation extraction using l1-regularized logistic regression with default parameters using the liblinear package .,we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization . the smt weighting parameters were tuned by mert in the development data .,the decoding weights were optimized with minimum error rate training . the first application of machine translation system combination used a consensus decoding strategy relying on a confusion network .,the earliest approach in used edit distance based multiple string alignment to build the confusion networks . we used an l2-regularized l2-loss linear svm to learn the attribute predictions .,we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package . "word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .",word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context . we extend the perceptron training method of maaten et al to train a hucrf from partially labeled sequences .,a notable component of our extension is that we introduce a training algorithm for learning a hidden unit crf of maaten et al from partially labeled sequences . coreference resolution is the task of determining which mentions in a text refer to the same entity .,coreference resolution is the process of linking together multiple expressions of a given entity . semantic parsing is the task of mapping natural language to a formal meaning representation .,"semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) ." "for probabilities , we trained 5-gram language models using srilm .",we trained a 4-gram language model on this data with kneser-ney discounting using srilm . we have used the srilm with kneser-ney smoothing for training a language model of order five and mert for tuning the model with development data .,we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . "word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .",word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context . 
"additionally , for en-de , compound splitting of the german side of the corpus was performed using a frequency based method described in .","in order to reduce the source vocabulary size translation , the german text was preprocessed by splitting german compound words with the frequencybased method described in ." "in this paper , we propose a method based on importance sampling that allows us to use a very large target vocabulary .","in this paper , we proposed a way to extend the size of the target vocabulary for neural machine translation ." we experiment with a machine learning strategy to model multilingual coreference for the conll-2012 shared task .,"we apply our model to the english portion of the conll 2012 shared task data , which is derived from the ontonotes corpus ." we evaluated the reordering approach within the moses phrase-based smt system .,"we used moses , a phrase-based smt toolkit , for training the translation model ." word embeddings have proven to be effective models of semantic representation of words in various nlp tasks .,distributed word representations have been shown to improve the accuracy of ner systems . "for probabilities , we trained 5-gram language models using srilm .",we used the srilm toolkit and kneser-ney discounting for estimating 5-grams lms . event coreference resolution is the task of identifying event mentions and clustering them such that each cluster represents a unique real world event .,event coreference resolution is the task of determining which event mentions expressed in language refer to the same real-world event instances . "in practical terms , we will use a paraphrase ranking task derived from the semeval 2007 lexical substitution task .",to do this we examine the dataset created for the english lexical substitution task in semeval . "word embeddings have also been effectively employed in several tasks such as named entity recognition , adjectival scales and text classification .","word embeddings have shown promising results in nlp tasks , such as named entity recognition , sentiment analysis or parsing ." "for the automatic evaluation , we used the bleu metric from ibm .",we evaluated the translation quality of the system using the bleu metric . we use the scikit-learn machine learning library to implement the entire pipeline .,we used the svd implementation provided in the scikit-learn toolkit . this approach relies on word embeddings for the computation of semantic relatedness with word2vec .,"due to the success of word embeddings in word similarity judgment tasks , this work also makes use of global vector word embeddings ." the evaluation metric is casesensitive bleu-4 .,case-insensitive bleu-4 is our evaluation metric . qiu et al proposed double propagation to collectively extract aspect terms and opinion words based on information propagation over a dependency graph .,qiu et al propose double propagation to expand opinion targets and opinion words lists in a bootstrapping way . we experimentally evaluate the effectiveness of multiple importance sampling distributions .,we experimentally evaluate the heldout perplexity of models trained with our various importance sampling distributions . the basic building blocks of our models are recurrent neural networks with long short-term memory units .,"as the encoder for text we consider convolutional neural networks , gated recurrent units , and long short-term memory networks ." 
for the machine learning component of our system we use the l2-regularised logistic regression implementation of the liblinear 3 software library .,"for this model , we use a binary logistic regression classifier implemented in the lib-linear package , coupled with the ovo scheme ." we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." the srilm toolkit was used to build the trigram mkn smoothed language model .,the srilm toolkit was used to build this language model . "relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .",relation extraction is a core task in information extraction and natural language understanding . translation performances are measured with case-insensitive bleu4 score .,results are reported using case-insensitive bleu with a single reference . "in this paper , we study the impact of persuasive argumentation in political debates .",in this work we study the use of semantic frames for modelling argumentation in speakers ’ discourse . "we then learn reranking weights using minimum error rate training on the development set for this combined list , using only these two features .",we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set . "based on hypothesis 1 , we learn sense-based embeddings from a large data set , using the continuous skip-gram model .","based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus ." named entity recognition ( ner ) is a challenging learning problem .,named entity recognition ( ner ) is a frequently needed technology in nlp applications . "examples are yago , dbpedia , and freebase .",examples of such schemas include freebase and yago2 . "ner is a task to identify names in texts and to assign names with particular types ( cite-p-12-3-17 , cite-p-12-3-19 , cite-p-12-3-18 , cite-p-12-3-2 ) .","ner is the task of identifying names in text and assigning them a type ( e.g . person , location , organisation , miscellaneous ) ." the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit .,the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . we used the scikit-learn implementation of svrs and the skll toolkit .,"within this subpart of our ensemble model , we used a svm model from the scikit-learn library ." we use the moses smt toolkit to test the augmented datasets .,"we use moses , an open source toolkit for training different systems ." semi-supervised learning is a type of machine learning where one has access to a small amount of labeled data and a large amount of unlabeled data .,"semi-supervised learning is a broader area of machine learning , focusing on improving the learning process by usage of unlabeled data in conjunction with labeled data ." semantic role labeling ( srl ) is the process of producing such a markup .,semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence . 
"we use the svm implementation from scikit-learn , which in turn is based on libsvm .","we use a random forest classifier , as implemented in scikit-learn ." we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .,we use srilm for training a trigram language model on the english side of the training data . the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .,we also use a 4-gram language model trained using srilm with kneser-ney smoothing . barzilay and mckeown and callisonburch et al extracted paraphrases from monolingual parallel corpus where multiple translations were present for the same source .,barzilay and mckeown identify multi-word paraphrases from a sentence-aligned corpus of monolingual parallel texts . training approach can improve the robustness of nmt models .,we have proposed adversarial stability training to improve the robustness of nmt models . "for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .","for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." we train a linear support vector machine classifier using the efficient liblinear package .,we use liblinear 9 to solve the lr and svm classification problems . we used the implementation of the scikit-learn 2 module .,we used the scikit-learn implementation of svrs and the skll toolkit . "in order to measure translation quality , we use bleu 7 and ter scores .","to measure the translation quality , we use the bleu score and the nist score ." our holing system uses collapsed stanford parser dependencies as context features .,we use the stanford parser with stanford dependencies . kondrak and dorr reported that a simple average of several orthographic similarity measures outperformed all the measures on the task of the identification of cognates for drug names .,kondrak and dorr present a large number of language-independent distance measures in order to predict whether two drug names are confusable or not . we begin by computing the similarity between words using word embeddings .,we use a cws-oriented model modified from the skip-gram model to derive word embeddings . "wikipedia is a resource of choice exploited in many nlp applications , yet we are not aware of recent attempts to adapt coreference resolution to this resource .",wikipedia is a constantly evolving source of detailed information that could facilitate intelligent machines — if they are able to leverage its power . we used the moses toolkit for performing statistical machine translation .,for training the translation model and for decoding we used the moses toolkit . the n-gram models are created using the srilm toolkit with good-turning smoothing for both the chinese and english data .,the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model . sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment .,sentiment classification is the task of identifying the sentiment polarity of a given text . relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .,relation extraction is the task of detecting and characterizing semantic relations between entities from free text . 
"as a classifier , we choose a first-order conditional random field model .",our model is a structured conditional random field . we implement logistic regression with scikit-learn and use the lbfgs solver .,for our logistic regression classifier we use the implementation included in the scikit-learn toolkit 2 . semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .,"semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ” ." we use the glove pre-trained word embeddings for the vectors of the content words .,we use pre-trained 100 dimensional glove word embeddings . "luong et al segment words using morfessor , and use recursive neural networks to build word embeddings from morph embeddings .","luong et al train a recursive neural network for morphological composition , and show its effectiveness on word similarity task ." "then , we trained word embeddings using word2vec .",we use the word2vec skip-gram model to train our word embeddings . word embeddings such as word2vec and glove have been widely recognized for their ability to capture linguistic regularities .,word embedding approaches like word2vec or glove are powerful tools for the semantic analysis of natural language . semantic role labeling ( srl ) is the process of producing such a markup .,semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text . we optimized each system separately using minimum error rate training .,we perform minimum error rate training to tune various feature weights . sentence vectors were generated using doc2vec .,the message-level embeddings are generated using doc2vec . domain adaptation is a common concern when optimizing empirical nlp applications .,domain adaptation is a challenge for supervised nlp systems because of expensive and time-consuming manual annotated resources . we use crf to learn the correlations between the current label and its neighbors .,we employ conditional random fields to predict the sentiment label for each segment . "additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit .",the target fourgram language model was built with the english part of training data using the sri language modeling toolkit . "in this research , we use the pre-trained google news dataset 2 by word2vec algorithms .","we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news ." we apply linear regression with elastic net regularization and support vector regression with an rbf kernel for comparison .,we use both logistic regression with elastic net regularisation and support vector machines with a linear kernel . we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . we parse the corpus using the stanford dependency parser and extract the main verb of each segment .,we use the stanford dependency parser to parse the statement and identify the path connecting the content words in the parse tree . 
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,"for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit ." we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .,"first , we train a vector space representations of words using word2vec on chinese wikipedia ." "for all models , we use the 300-dimensional glove word embeddings .","for the classification task , we use pre-trained glove embedding vectors as lexical features ." "in this paper , we explore an implicit content-introducing method for generative short-text conversation .","in this paper , we aim to generate a more meaningful and informative reply when answering a given question ." "language modeling is a fundamental task , used for example to predict the next word or character in a text sequence given the context .","language modeling is a fundamental task in natural language processing and is routinely employed in a wide range of applications , such as speech recognition , machine translation , etc ." long short-term memory network was proposed by to specifically address this issue of learning longterm dependencies .,long short-term memory was introduced by hochreiter and schmidhuber to overcome the issue of vanishing gradients in the vanilla recurrent neural networks . "to set the weights , λ m , we performed minimum error rate training on the development set using bleu as the objective function .",we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric . the target-side language models were estimated using the srilm toolkit .,the language models were built using srilm toolkits . "to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .","for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit ." luong et al utilized the morpheme segments produced by morfessor and constructed morpheme trees for words to learn morphologically-aware word embeddings by the recursive neural network .,luong et al learn word representations based on morphemes that are obtained from an external morphological segmentation system . we trained the statistical phrase-based systems using the moses toolkit with mert tuning .,we adapted the moses phrase-based decoder to translate word lattices . semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .,semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence . transliteration is the task of converting a word from one alphabetic script to another .,phonetic translation across these pairs is called transliteration . evaluation sets are translated using the cdec decoder and evaluated with the bleu metric .,evaluation is done using the bleu metric with four references . "although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .",coreference resolution is the task of determining which mentions in a text refer to the same entity .
keyphrase extraction is a fundamental task in natural language processing that facilitates mapping of documents to a set of representative phrases .,"keyphrase extraction is the problem of automatically extracting important phrases or concepts ( i.e. , the essence ) of a document ." distributional semantic models induce large-scale vector-based lexical semantic representations from statistical patterns of word usage .,traditional semantic space models represent meaning on the basis of word co-occurrence statistics in large text corpora . relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .,"relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text ." "part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax .","part-of-speech ( pos ) tagging is a fundamental natural-language-processing problem , and pos tags are used as input to many important applications ." the srilm toolkit was used to build the 5-gram language model .,a 4-grams language model is trained by the srilm toolkit . we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .,"we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting ." we use stanford part-of-speech tagger to automatically detect nouns from text .,"in our work , we have used the stanford log-linear part-of-speech tagger to do pos tagging ." feature weights are tuned using minimum error rate training on the 455 provided references .,the parameter weights are optimized with minimum error rate training . semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .,semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence . "coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities .","coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities ." "we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems .","we measure the translation quality with ibm bleu up to 4 grams , using 2 reference translations , bleur2n4 ." 1 bunsetsu is a linguistic unit in japanese that roughly corresponds to a basic phrase in english .,1a bunsetsu is a common unit when syntactic structures in japanese are discussed . we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . relation extraction is the task of finding semantic relations between entities from text .,relation extraction is the task of finding relationships between two entities from text .
we use a count-based distributional semantics model and the continuous bag-of-words model to learn word vectors .,"based on the distributional hypothesis , we train a skip-gram model to learn the distributional representations of words in a large corpus ." relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .,relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . "we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .","we train 300 dimensional word embedding using word2vec on all the training data , and fine-tuning during the training process ." the srilm toolkit is used to build the character-level language model for generating the lm features in nsw detection system .,the language model component uses the srilm lattice-tool for weight assignment and nbest decoding . we will show translation quality measured with the bleu score as a function of the phrase table size .,we use corpus-level bleu score to quantitatively evaluate the generated paragraphs . one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .,sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text . coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .,coreference resolution is the task of identifying all mentions which refer to the same entity in a document . "sarcasm is a pervasive phenomenon in social media , permitting the concise communication of meaning , affect and attitude .",sarcasm is a form of verbal irony that is intended to express contempt or ridicule . the database of typological features we used is the online edition 8 of the world atlas of language structures .,"as the database of typological features , we used the online edition 2 of the world atlas of language structures ." "recently , to reduce labeling effort for relation extraction , distant supervision has been proposed .","recently , distant supervision has emerged to be a popular choice for training relation extractors without using manually labeled data ." "as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn .","we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score ." "information extraction ( ie ) is a main nlp aspect for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) .",information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template . we considered one layer and used the adam optimizer for parameter optimization .,"additionally , we compile the model using the adamax optimizer ." coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .,"additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized ."
"we use the stanford ner to identify named entities in our corpus , and then use these entities as bag-of-features .",we use the stanford named entity recognizer to identify named entities in s and t . the word embeddings are initialized with pre-trained word vectors using word2vec 2 and other parameters are randomly initialized including pos embeddings .,the word embeddings are initialized with pre-trained word vectors using word2vec 2 and other parameters are randomly initialized by sampling from uniform distribution in including character embeddings . the weights are learned automatically using expectation maximization .,the model parameters will then be estimated using the expectation-maximization algorithm . sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic .,sentiment analysis is a recent attempt to deal with evaluative aspects of text . "in arabic , there is a reasonable number of sentiment lexicons but with major deficiencies .","first , arabic is a morphologically rich language ( cite-p-19-3-7 ) ." dependency parsing is a central nlp task .,dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14 . "for the first two features , we adopt a set of pre-trained word embedding , known as global vectors for word representation .","we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word ." the case insensitive nist bleu-4 metric is adopted for evaluation .,case-insensitive 4-gram bleu is used as evaluation metric . "based on word2vec , we obtained both representations using the skipgram architecture with negative sampling .",we obtained distributed word representations using word2vec 4 with skip-gram . "since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .","coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." "sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .",sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express . "sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .",sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 ) . we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features .,we learn our word embeddings by using word2vec 3 on unlabeled review data . the translation quality is evaluated by caseinsensitive bleu-4 metric .,translation quality is measured by case-insensitive bleu on newstest13 using one reference translation . "according to lakoff and johnson , metaphors are cognitive mappings of concepts from a source to a target domain .","according to lakoff and johnson , metaphor is a productive phenomenon that operates at the level of mental processes ." 
"for this , we used the combination of the entire swedish-english europarl corpus and the smultron data .","to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm ." word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context .,"word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) ." barzilay and lapata propose an entity grid model which represents the distribution of referents in a discourse for sentence ordering .,barzilay and lapata propose an entity-based coherence model which operationalizes some of the intuitions behind the centering model . our baseline is the smt toolkit moses run over letter strings rather than word strings .,we use the moses toolkit to train our phrase-based smt models . "for word embeddings , we used popular pre-trained word vectors from glove .","for input representation , we used glove word embeddings ." "as an early work , li et al used maximum mutual information as the objective to penalize general responses .",li et al proposed to use the maximum mutual information as the objective to penalize general responses . we used the svm implementation of scikit learn .,we used sklearn-kittext to build our svm models . "amr is a semantic formalism , structured as a graph ( cite-p-13-1-1 ) .",an amr is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them . for the english sts subtask used regression models that combined a wide array of features including semantic similarity scores obtained with various methods .,"for the english sts subtask , we used regression models combining a wide array of features including semantic similarity scores obtained from various methods ." we use the mallet implementation of conditional random fields .,we also used support vector machines and conditional random fields . "in this task , we use the 300-dimensional 840b glove word embeddings .",we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings . the results evaluated by bleu score is shown in table 2 .,table 4 shows the bleu scores of the output descriptions . "for training our system classifier , we have used scikit-learn .",we trained the five classifiers using the svm implementation in scikit-learn . word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .,"word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) ." "for the word-embedding based classifier , we use the glove pre-trained word embeddings .","for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm ." "for all languages in our dataset , we used treetagger with its built-in lemmatiser .",we used treetagger based on the english parameter files supplied with it . "as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn .",we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . 
"for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .",we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model .,"in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks ." we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . "further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .",we also use a 4-gram language model trained using srilm with kneser-ney smoothing . we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,"thus , we train a 4-gram language model based on kneser-ney smoothing method using sri toolkit and interpolate it with the best rnnlms by different weights ." "in recent years , various phrase translation approaches have been shown to outperform word-to-word translation models .","in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks ." we use the skipgram model to learn word embeddings .,we obtain word clusters from word2vec k-means word clustering tool . we used minimum error rate training mert for tuning the feature weights .,we performed mert based tuning using the mira algorithm . "in renew , we exploit the stanford typed dependency representations that use triples to formalize dependency relations .",we use the stanford dependency parser with the collapsed representation so that preposition nodes become edges . we trained a 3-gram language model on all the correct-side sentences using kenlm .,the 5-gram target language model was trained using kenlm . our baseline system is an standard phrase-based smt system built with moses .,we trained the statistical phrase-based systems using the moses toolkit with mert tuning . "we use three common evaluation metrics including bleu , me-teor , and ter .",we measure translation quality via the bleu score . one of the very few available discourse annotated corpora is the penn discourse treebank in english .,the penn discourse treebank is the largest available discourseannotated resource in english . named entity ( ne ) transliteration is the process of transcribing a ne from a source language to some target language based on phonetic similarity between the entities .,named entity ( ne ) transliteration is the process of transcribing a ne from a source language to some target language while preserving its pronunciation in the original language . we used latent dirichlet allocation to create these topics .,the clustering method used in this work is latent dirichlet allocation topic modelling . we use the moses mt framework to build a standard statistical phrase-based mt model using our old-domain training data .,we use the moses toolkit to train various statistical machine translation systems . 
we used the case-insensitive bleu-4 to evaluate translation quality and run mert three times .,we evaluated the translation quality using the case-insensitive bleu-4 metric . language models of order 5 have been built and interpolated with srilm and kenlm .,"language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 ." we initialize our word representation using publicly available word2vec trained on google news dataset and keep them fixed during training .,"we train 300 dimensional word embedding using word2vec on all the training data , and fine-tuning during the training process ." "we use a random forest classifier , as implemented in scikit-learn .",we feed our features to a multinomial naive bayes classifier in scikit-learn . semantic parsing is the task of converting natural language utterances into formal representations of their meaning .,semantic parsing is the task of mapping natural language to machine interpretable meaning representations . "for nb and svm , we used their implementation available in scikit-learn .","in all cases , we used the implementations from the scikit-learn machine learning library ." sentiment analysis ( sa ) is the task of prediction of opinion in text .,"sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( liu , 2012 ) ." "coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .",coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text . relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text . we used a caseless parsing model of the stanford parser for a dependency representation of the messages .,we obtained both phrase structures and dependency relations for every sentence using the stanford parser . the penn discourse treebank is another annotated discourse corpus .,the penn discourse treebank is a new resource of annotated discourse relations . "entity linking ( el ) is a central task in information extraction — given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase ) .","entity linking ( el ) is the task of automatically linking mentions of entities such as persons , locations , or organizations to their corresponding entry in a knowledge base ( kb ) ." "for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b .",we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings . text classification is a crucial and well-proven method for organizing the collection of large scale documents .,text classification is a fundamental problem in natural language processing ( nlp ) . coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .,coreference resolution is the process of linking together multiple expressions of a given entity .
we evaluate our results with case-sensitive bleu-4 metric .,we report the mt performance using the original bleu metric . lluís et al introduced a dual decomposition based joint model for joint syntactic and semantic parsing .,lluís et al use a joint arcfactored model that predicts full syntactic paths along with predicate-argument structures via dual decomposition . discourse parsing is a challenging task and plays a critical role in discourse analysis .,discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units . "relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .",relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . svms have been shown to be robust in classification tasks involving text where the dimensionality is high .,support vector machines have been shown to outperform other existing methods in text categorization . "to train our models , we utilized the standard machine learning package , scikit-learn for the models using a shallow feature representation .","within this subpart of our ensemble model , we used a svm model from the scikit-learn library ." "mikolov et al observed a strong similarity of the geometric arrangements of corresponding concepts between the vector spaces of different languages , and suggested that a crosslingual mapping between the two vector spaces is technically plausible .",mikolov et al used distributed representations of words to learn a linear mapping between vector spaces of languages and showed that this mapping can serve as a good dictionary between the languages . we extract lexical relations from the question using the stanford dependencies parser .,we use the stanford dependency parser to extract nouns and their grammatical roles . choi and cardie combine different kinds of negations with lexical polarity items through various compositional semantic models to improve phrasal sentiment analysis .,choi and cardie developed inference rules to capture compositional effects at the lexical level on phrase-level polarity classification . we use the moses smt toolkit to test the augmented datasets .,we use the popular moses toolkit to build the smt system . we apply a pretrained glove word embedding on .,we use pre-trained word vectors from glove . and we will show that this framework captures many existing topic models ( § 4 ) .,our framework has made clear advancements with respect to existing structured topic models . "sarcasm is defined as ‘ a cutting , often ironic remark intended to express contempt or ridicule ’ 1 .",sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment . semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text .,semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles . coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) .,coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity .
"for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .",we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus . we implement some of these features using the stanford parser .,the base pcfg uses simplified categories of the stanford pcfg parser . we measured translation performance with bleu .,we evaluated translation quality using uncased bleu and ter . sentiment classification is a very domain-specific problem ; training a classifier using the data from one domain may fail when testing against data from another .,"sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text ." the target-side language models were estimated using the srilm toolkit .,srilm toolkit is used to build these language models . we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . "more recently , neural networks have become prominent in word representation learning .","recently , neural networks become popular for natural language processing ." word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .,word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined . we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . "we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .","for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." we used the moses toolkit with its default settings .,"we used the moses decoder , with default settings , to obtain the translations ." "in this work , we introduce an extension to the continuous bag-of-words model .","in this paper , we adopt continuous bag-of-word in word2vec as our context-based embedding model ." the nnlm weights are optimized as the other feature weights using minimum error rate training .,the decoding weights are optimized with minimum error rate training to maximize bleu scores . we used the scikit-learn library the svm model .,we use the skll and scikit-learn toolkits . we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .,we use srilm for training a trigram language model on the english side of the training data . semantic parsing is the mapping of text to a meaning representation .,semantic parsing is the task of mapping natural language to machine interpretable meaning representations . we conduct experiments on the benchmark twitter sentiment classification dataset from semeval 2013 .,we conduct experiments on the latest twitter sentiment classification benchmark dataset in semeval 2013 . "discourse parsing is a challenging natural language processing ( nlp ) task that has utility for many other nlp tasks such as summarization , opinion mining , etc . 
( cite-p-17-3-3 ) .","and while discourse parsing is a document level task , discourse segmentation is done at the sentence level , assuming that sentence boundaries are known ." the model weights are automatically tuned using minimum error rate training .,the parameter weights are optimized with minimum error rate training . coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .,"although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors ." a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data .,a 4-gram language model which was trained on the entire training corpus using srilm was used to generate responses in conjunction with the phrase-based translation model . brockett et al showed that phrase-based statistical mt can help to correct mistakes made on mass nouns .,brockett et al use an smt system to correct errors involving mass noun errors . "we trained a support vector machine for regression with rbf kernel using scikitlearn , which in turn uses libsvm .","simulating the approach reported by , we trained a support vector machine for regression with rbf kernel using scikit-learn with the set of features ." sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) .,"sentiment analysis is a growing research field , especially on web social networks ." coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,"coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model ." we adopt the brown cluster algorithm to find the word cluster .,we use the brown clustering algorithm to induce our word representations . our experimental results demonstrate both that our proposed approach is useful in predicting missing preferences of users .,our experimental results show that this approach can accurately predict missing topic preferences of users accurately ( 80–94 % ) . the smt systems were built using the moses toolkit .,"the smt tools are a phrase-based smt toolkit licensed by nict , and moses ." "stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target .","stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against ." we will show translation quality measured with the bleu score as a function of the phrase table size .,we measure the translation quality with automatic metrics including bleu and ter . we train and evaluate a l2-regularized logistic regression classifier with the liblin-ear solver as implemented in scikit-learn .,"for this model , we use a binary logistic regression classifier implemented in the lib-linear package , coupled with the ovo scheme ." for unsupervised baselines we use morfessor categories-map and undivide .,"as a baseline for this comparison , we use morfessor categories-map ." 
relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,relation extraction is a fundamental task in information extraction . we used word2vec to preinitialize the word embeddings .,we used word2vec to learn these dense vectors . relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .,relation extraction is the task of recognizing and extracting relations between entities or concepts in texts . "in smt , we propose a coverage-based approach to nmt .","to address this problem , we propose coverage-based nmt in this paper ." our ncpg system is an attention-based bidirectional rnn architecture that uses an encoder-decoder framework .,"our nmt is based on an encoderdecoder with attention design , using bidirectional lstm layers for encoding and unidirectional layers for decoding ." "as a case study , we explore the task of learning to solve geometry problems .","in this paper , we introduce the novel task of question answering using natural language demonstrations ." luong and manning propose training a model on an out-of-domain corpus and do finetuning with small sized in-domain parallel data to mitigate the domain shift problem .,"luong and manning propose a fine-tuning method , which continues to train the already trained out-of-domain system on the in-domain data ." we set all feature weights by optimizing bleu directly using minimum error rate training on the tuning part of the development set .,then we use the standard minimum error-rate training to tune the feature weights to maximize the system ’ s bleu score . the log-linear combination weights were optimized using mert .,the same data was used for tuning the systems with mert . "a multiword expression is any combination of words with lexical , syntactic or semantic idiosyncrasy , in that the properties of the mwe are not predictable from the component words .",a multiword expression can be defined as a combination of words for which syntactic or semantic properties of the whole expression can not be obtained from its parts . "in , the authors use a recursive neural network to explicitly model the morphological structures of words and learn morphologically-aware embeddings .",luong et al utilized the morpheme segments produced by morfessor and constructed morpheme trees for words to learn morphologically-aware word embeddings by the recursive neural network . we pre-trained embeddings using word2vec with the skip-gram training objective and nec negative sampling .,we trained the embedding vectors with the word2vec tool on the large unlabeled corpus of clinical texts provided by the task organizers . marcu and wong present a joint probability model for phrase-based translation .,marcu and wong proposed a phrase-based context-free joint probability model for lexical mapping . sequence labeling is the simplest subclass of structured prediction problems .,sequence labeling is a structured prediction task where systems need to assign the correct label to every token in the input sequence . "additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit .","the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm ."
we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,"furthermore , we train a 5-gram language model using the sri language toolkit ." "in recent years , many researchers have employed statistical models or association measures to build alignment links .",some researchers used similarity and association measures to build alignment links . "for training our system classifier , we have used scikit-learn .",we use the scikit-learn machine learning library to implement the entire pipeline . conditional random fields are discriminative structured classification models for sequential tagging and segmentation .,conditional random fields are probabilistic models for labelling sequential data . twitter is a microblogging service that has 313 million monthly active users 1 .,"twitter 1 is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages ." "we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation .",morphological analysis is a staple of natural language processing for broad languages . "we then perform mert which optimizes parameter settings using the bleu metric , while a 5-gram language model is derived with kneser-ney smoothing trained using srilm .",our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing . we use the multi-class logistic regression classifier from the liblinear package 2 for the prediction of edit scripts .,we used the support vector machine implementation from the liblinear library on the test sets and report the results in table 4 . we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .,"for the language model we use the corpus of 60,000 simple english wikipedia articles 3 and build a 3-gram language model with kneser-ney smoothing trained with srilm ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,"we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit ." automatic word alignment can be defined as the problem of determining a translational correspondence at word level given a parallel corpus of aligned sentences .,automatic word alignment is a key step in training statistical machine translation systems . we report the mt performance using the original bleu metric .,we first use bleu score to perform automatic evaluation . we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit . we evaluate the translation quality using the case-insensitive bleu-4 metric .,we evaluated the translation quality using the case-insensitive bleu-4 metric . 
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .,"the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime ." we develop translation models using the phrase-based moses smt system .,our machine translation system is a phrase-based system using the moses toolkit . semantic parsing is the problem of mapping natural language strings into meaning representations .,"semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding ." we adopt glove vectors as the initial setting of word embeddings v .,we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . "in our implementation , we use a kn-smoothed trigram model .",we choose modified kneser ney as the smoothing algorithm when learning the ngram model . we used srilm to build a 4-gram language model with kneser-ney discounting .,we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing . we trained a 5-grams language model by the srilm toolkit .,we train a trigram language model with the srilm toolkit . "the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context . coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .,coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity . we present an expressive entity-mention model that performs coreference resolution .,"in this paper , we present a more expressive entity-mention model for coreference resolution ." relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .,relation extraction is the task of finding relationships between two entities from text . hatzivassiloglou and mckeown proposed a method to identify the polarity of adjectives based on conjunctions linking them .,hatzivassiloglou and mckeown proposed a method for identifying word polarity of adjectives . word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .,word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context . mikolov et al have proposed to obtain cross-lingual word representations by learning a linear mapping between two monolingual word embedding spaces .,"in the case of bilingual word embedding , mikolov et al propose a method to learn a linear transformation from the source language to the target language for the task of lexicon extraction from bilingual corpora ." we measure the quality of the automatically created summaries using the rouge measure .,we evaluate our models with the standard rouge metric and obtain rouge scores using the pyrouge package . 
"in previous work , hatzivassiloglou and mckeown proposed a method to identify the polarity of adjectives based on conjunctions linking them in a large corpus .",hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives . the skip-gram and continuous bag-of-words models of mikolov et al propose a simple single-layer architecture based on the inner product between two word vectors .,"more recently , mikolov et al propose two log-linear models , namely the skip-gram and cbow model , to efficiently induce word embeddings ." "recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling .","deep neural networks have seen widespread use in natural language processing tasks such as parsing , language modeling , and sentiment analysis ." the minimum error rate training was used to tune the feature weights .,minimum error rate training is applied to tune the cn weights . extractive summarization is a task to create summaries by pulling out snippets of text form the original text and combining them to form a summary .,extractive summarization is a sentence selection problem : identifying important summary sentences from one or multiple documents . dreyer and eisner propose a dirichlet process mixture model to learn paradigms .,dreyer and eisner proposed a log-linear model to identify paradigms . coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) . we use an attention-based bidirectional rnn architecture with an encoder-decoder framework to build our ncpg models .,our ncpg system is an attention-based bidirectional rnn architecture that uses an encoder-decoder framework . we also use an in-house implementation of a japanese chunker to obtain chunks in japanese sentences .,"as for je translation , we use a popular japanese dependency parser to obtain japanese abstraction trees ." wordnet is a general english thesaurus which additionally covers biological terms .,wordnet is a key lexical resource for natural language applications . trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .,the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . the lstm word embeddings are initialized with 100-dim embeddings from glove and fine-tuned during training .,the word-embeddings were initialized using the glove 300-dimensions pre-trained embeddings and were kept fixed during training . minimum error training under bleu was used to optimise the feature weights of the decoder with respect to the dev2006 development set .,the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training . these weights are optimized using minimum error-rate training on a held-out 500 sentence-pair development set for each of the experiments .,their weights are optimized using minimum error-rate training on a held-out development set for each of the experiments . 
"sentiment analysis ( sa ) is the task of analysing opinions , sentiments or emotions expressed towards entities such as products , services , organisations , issues , and the various attributes of these entities ( cite-p-9-3-3 ) .","sentiment analysis ( sa ) is the research field that is concerned with identifying opinions in text and classifying them as positive , negative or neutral ." "since chinese is the dominant language in our data set , a word-by-word statistical machine translation strategy ( cite-p-14-1-22 ) is adopted to translate english words into chinese .","more importantly , chinese is a language that lacks the morphological clues that help determine the pos tag of a word ." we report the mt performance using the original bleu metric .,we compute the interannotator agreement in terms of the bleu score . "coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .","since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions ." we evaluate the performance of different translation models using both bleu and ter metrics .,we use corpus-level bleu score to quantitatively evaluate the generated paragraphs . mihalcea et al learn multilingual subjectivity via cross-lingual projections .,mihalcea et al propose a method to learn multilingual subjective language via crosslanguage projections . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,"the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime ." we ¡¯ ve demonstrated that the benefits of unsupervised multilingual learning increase steadily with the number of available languages .,we found that performance improves steadily as the number of available languages increases . we first use the popular toolkit word2vec 1 provided by mikolov et al to train our word embeddings .,we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors . "experimental results show that our algorithm can find important feature subset , estimate model order ( cluster number ) and achieve better performance .","experimental results show that our algorithm ( math-w-2-2-5-186 ) can find important feature subset , estimate cluster number and achieve better performance compared with cgd algorithm ." "for word embeddings , we used popular pre-trained word vectors from glove .","for representing words , we used 100 dimensional pre-trained glove embeddings ." "within this subpart of our ensemble model , we used a svm model from the scikit-learn library .","specifically , we used the python scikit-learn module , which interfaces with the widely-used libsvm ." we trained a 3-gram language model on all the correct-side sentences using kenlm .,the 5-gram target language model was trained using kenlm . our baseline russian-english system is a hierarchical phrase-based translation model as implemented in cdec .,"our baseline system is based on a hierarchical phrase-based translation model , which can formally be described as a synchronous context-free grammar ." 
"in this paper , we will improve upon collins ’ algorithm by introducing a bidirectional searching strategy , so as to effectively utilize more context information .","in this paper , we propose guided learning , a new learning framework for bidirectional sequence classification ." "sentiment analysis is a growing research field , especially on web social networks .","sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review ." "we train a word embedding using word2vec over a large corpus of 55 , 463 product reviews .",we use the word2vec skip-gram model to train our word embeddings . "social media is a natural place to discover new events missed by curation , but mentioned online by someone planning to attend .","social media is a popular public platform for communicating , sharing information and expressing opinions ." "for language modeling , we use the english gigaword corpus with 5-gram lm implemented with the kenlm toolkit .","after standard preprocessing of the data , we train a 3-gram language model using kenlm ." we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .,we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we use the mstparser implementation described in mcdonald et al for feature extraction .,"for other methods , we used the mstparser as the underlying dependency parsing tool ." previous work consistently reported that word-based translation models yielded better performance than traditional methods for question retrieval .,"later , xue et al combined the language model and translation model to a translation-based language model and observed better performance in question retrieval ." huang et al train their vectors with a neural network and additionally take global context into account .,huang et al further extended this context clustering method and incorporated global context to learn multi-prototype representation vectors . "we use glove pre-trained word embeddings , a 100 dimension embedding layer that is followed by a bilstm layer of size 32 .","we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens ." "with this data , we can investigate whether the relationship between personal traits and brand preferences .","in this paper , we present a comprehensive analysis of the relationship between personal traits and brand preferences ." "semantic similarity is a central concept that extends across numerous fields such as artificial intelligence , natural language processing , cognitive science and psychology .","semantic similarity is a core technique for many topics in natural language processing such as textual entailment ( cite-p-22-1-7 ) , semantic role labeling ( cite-p-22-1-19 ) , and question answering ( cite-p-22-3-26 ) ." we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments .,we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package . "furthermore , the concept of word embedding introduced by mikolov et al allows for words to have vector representations , such that syntactic and semantic similarities are embodied in the vector space .",mikolov et al and mikolov et al further observe that the semantic relationship of words can be induced by performing simple algebraic operations with word vectors . 
"word segmentation is a fundamental task for processing most east asian languages , typically chinese .","word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese ." we measure translation performance by the bleu and meteor scores with multiple translation references .,"we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." hearst proposed a lexico-syntactic pattern based method for automatic acquisition of hyponymy from unrestricted texts .,hearst examined extracting hyponym data by taking advantage of lexical patterns in text . we use the adam optimizer for the gradient-based optimization .,we use the adaptive moment estimation for the optimizer . "in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .",we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit . "therefore , we employ negative sampling and adam to optimize the overall objective function .",we use the adam optimizer and mini-batch gradient to solve this optimization problem . language models were built using the srilm toolkit 16 .,"for lm training and interpolation , the srilm toolkit was used ." we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . semantic role labeling was first defined in gildea and jurafsky .,automatic semantic role labeling was first introduced by gildea and jurafsky . "we used the stanford corenlp toolkit for word segmentation , part-of-speech tagging , and syntactic parsing .","we used stanford corenlp for sentence splitting , part-of-speech tagging , named entity recognition , co-reference resolution and dependency parsing ." our method of learning multilingual word vectors is most closely associated to zou et al who learn bilingual word embeddings and show their utility in machine translation .,zou et al learn bilingual word embeddings by designing an objective function that combines unsupervised training with bilingual constraints based on word alignments . "recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language .",mikolov et al proposed a computationally efficient method for learning distributed word representation such that words with similar meanings will map to similar vectors . definition is most effective and is able to significantly and consistently improve retrieval performance .,using this similarity function in query expansion can significantly improve the retrieval performance . "we extend the model by adding continuous word representations , induced from the unlabeled data using the skip-gram algorithm , to the feature representations .",we apply the 3-phase learning procedure proposed by where we first create word embeddings based on the skip-gram model . our smt-based query expansion techniques are based on a recent implementation of the phrasebased smt framework .,our phrase-based system is similar to the alignment template system described by och and ney . 
mikolov et al introduced the skip-gram architecture built on a single hidden layer neural network to learn efficiently a vector representation for each word w of a vocabulary v from a large corpora of size c .,"mikolov et al further proposed continuous bagof-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors ." framenet is a lexico-semantic resource focused on semantic frames .,framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics . dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14 .,dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification . "in our experiments , we used the implementation of l2-regularised logistic regression in fan et al as our local classifier .",we used l2-regularized logistic regression classifier as implemented in liblinear . "for this , we used the combination of the entire swedish-english europarl corpus and the smultron data .","in order to build the englishfrench parallel corpus with discourse annotations , we used the europarl corpus ." coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,coreference resolution is a set partitioning problem in which each resulting partition refers to an entity . we evaluate the output of our generation system against the raw strings of section 23 using the simple string accuracy and bleu evaluation metrics .,we evaluate our models using the standard bleu metric 2 on the detokenized translations of the test set . word sense disambiguation is the task of assigning sense labels to occurrences of an ambiguous word .,word sense disambiguation is the process of determining which sense of a homograph is correct in a given context . pang and lee frame the problem of detecting subjective sentences as finding the minimum cut in a graph representation of the sentences .,pang and lee propose a graph-based method which finds minimum cuts in a document graph to classify the sentences into subjective or objective . "for the newsgroups and sentiment datasets , we used stopwords from the nltk python package .","for the tokenization process , our system used tweettokenizer from nltk ." "semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) .",semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation . caseinsensitive nist bleu is used to measure translation performance .,"for evaluation , caseinsensitive nist bleu is used to measure translation performance ." each candidate property ’ s compatibility with the complementary simile component .,each candidate property is generated from just one component of the simile . we have established a grouping-based ordering scheme to accommodate both local and global coherence .,we propose a grouping-based ordering framework that integrates local and global coherence concerns . we used moses as the implementation of the baseline smt systems .,we used moses to train an alignment model on the created paraphrase dataset . we also obtain the embeddings of each word from word2vec .,we use skip-gram with negative sampling for obtaining the word embeddings . 
"we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .",to train the models we use the default stochastic gradient descent classifier provided by scikit-learn . we present the first approach for applying distant supervision to cross-sentence relation extraction .,"in this paper , we propose the first approach for applying distant supervision to cross-sentence relation extraction ." we pre-trained word embeddings using word2vec over tweet text of the full training data .,we perform pre-training using the skipgram nn architecture available in the word2vec tool . we perform our translation experiments using an in-house state-of-the-art phrase-based smt system similar to moses .,our baseline is an in-house phrase-based statistical machine translation system very similar to moses . "we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization .",we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .,relation extraction is a core task in information extraction and natural language understanding . "in this paper , we propose an algorithmic approach for training a word problem solver based on both explicit and implicit supervision .","in this work , we address the technical difficulty of leveraging implicit supervision in learning an algebra word problem solver ." "for support vector learning , we use svm-light and svm-multiclass .","as a classifier , we employ support vector machines as implemented in svm light ." "for this task , we use the widely-used bleu metric .",for the evaluation of the results we use the bleu score . the model weights were trained using the minimum error rate training algorithm .,feature weights were set with minimum error rate training on a development set using bleu as the objective function . "in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score .",we used the bleu score to evaluate the translation accuracy with and without the normalization . vaswani et al proposed the transformer as an alternative model to the rnn .,vaswani et al extend the dot product attention described in luong et al to consider these vectors . the comparison was done in terms of bleu and processing times .,we evaluated the system using bleu score on the test set . we employ widely used and standard machine translation tool moses to train the phrasebased smt system .,our machine translation system is a phrase-based system using the moses toolkit . gabrilovich and markovitch introduced the esa model in which wikipedia and open directory project 1 was used to obtain the explicit concepts .,gabrilovich and markovitch utilized wikipedia-based concepts as the basis for a high-dimensional meaning representation space . "because the parser is incremental , it should be well suited to unsegmented text .","because the system is incremental , it should be straightforward to apply it to unsegmented text ." semantic parsing is the mapping of text to a meaning representation .,semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) . 
we used svm classifier that implements linearsvc from the scikit-learn library .,"within this subpart of our ensemble model , we used a svm model from the scikit-learn library ." the second decoding method is to use conditional random field .,"for simplicity , we use the well-known conditional random fields for sequential labeling ." the decoder uses a cky-style parsing algorithm to integrate the language model scores .,the parsing algorithm is extended to handle translation candidates and to incorporate language model scores via cube pruning . "we use the wsj portion of the penn treebank 4 , augmented with head-dependant information using the rules of yamada and matsumoto .",we generate dependency structures from the ptb constituency trees using the head rules of yamada and matsumoto . the skip-gram model aims to find word representations that are useful for predicting the surrounding words in a sentence or document .,the skip-gram model is a very popular technique for learning embeddings that scales to huge corpora and can capture important semantic and syntactic properties of words . we evaluate the performance of different translation models using both bleu and ter metrics .,"to evaluate segment translation quality , we use corpus level bleu ." "here , we extract unigram and bigram features and use them in a logistic regression classifier with elastic net regularization .",we use both logistic regression with elastic net regularisation and support vector machines with a linear kernel . the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . gu et al combined a copying mechanism with the seq2seq framework to improve the quality of the generated summaries .,"gu et al , cheng and lapata , and nallapati et al also utilized seq2seq based framework with attention modeling for short text or single document summarization ." we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .,we use the linearsvc classifier as implemented in scikit-learn package 17 with the default parameters . yarowsky has used a few seeds and untagged sentences in a bootstrapping algorithm based on decision lists .,"yarowsky proposes a method for word sense disambiguation , which is based on monolingual bootstrapping ." we use the stanford dependency parser to extract nouns and their grammatical roles .,the grammatical relations are all the collapsed dependencies produced by the stanford dependency parser . gamon et al and gamon use a combination of classification and language modeling .,gamon et al train a decision tree model and a language model to correct errors in article and preposition usage . we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings .,we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors . dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .,"dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community ." we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,"for language models , we use the srilm linear interpolation feature ." 
recently socher et al introduced compositional vector grammar to address the above limitations .,"in particular , socher et al obtain good parsing performance by building compositional representations from word vectors ." semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence .,semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them . we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . "as with our original refined language model , we estimate each coarse language model using the srilm toolkit .",our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing . koo et al used a clustering algorithm to produce word clusters on a large amount of unannotated data and represented new features based on the clusters for dependency parsing models .,"in order to reduce the amount of annotated data to train a dependency parser , koo et al used word clusters computed from unlabelled data as features for training a parser ." all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .,the weights associated to feature functions are optimally combined using the minimum error rate training . coreference resolution is the next step on the way towards discourse understanding .,"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval .,"named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on ." "berger and lafferty , 1999 , proposed a translation model that expands the document model .",berger and lafferty proposed the use of translation models for document retrieval . the translation quality is evaluated by caseinsensitive bleu-4 metric .,the quality of translations is evaluated by the case insensitive nist bleu-4 metric . coreference resolution is the task of grouping mentions to entities .,coreference resolution is the process of linking multiple mentions that refer to the same entity . our nnape model is inspired by the mt work of bahdanau et al which is based on bidirectional recurrent neural networks .,we use the attention-based nmt model introduced by bahdanau et al as our text-only nmt baseline . a 4-gram language model is trained on the monolingual data by srilm toolkit .,we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type .,named entity recognition ( ner ) is the task of finding rigid designators as they appear in free text and classifying them into coarse categories such as person or location ( cite-p-24-4-6 ) . we update the model parameters by minimizing l c and l k with adam optimizer .,we update the gradient with adaptive moment estimation . 
it has been shown in previous work that word pairs are effective for identifying implicit discourse relations .,lexical co-occurrences have previously been shown to be useful for discourse level learning tasks . "since the bleu scores we obtained are close , we did a significance test on the scores .",we performed paired bootstrap sampling to test the significance in bleu score differences . we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings .,we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .,"semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) ." we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg .,we implement the weight tuning component according to the minimum error rate training method . srilm toolkit was used to create up to 5-gram language models using the mentioned resources .,the srilm toolkit was used to build the 5-gram language model . a 5-gram language model was built using srilm on the target side of the corresponding training corpus .,the target fourgram language model was built with the english part of training data using the sri language modeling toolkit . we use 300-dimensional word embeddings from glove to initialize the model .,our word embeddings is initialized with 100-dimensional glove word embeddings . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,srilm toolkit was used to create up to 5-gram language models using the mentioned resources . our mt decoder is a proprietary engine similar to moses .,our smt system is a phrase-based system based on the moses smt toolkit . semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) .,"semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding ." relation extraction is a fundamental task in information extraction .,relation extraction ( re ) is the task of extracting semantic relationships between entities in text . "to compute statistical significance , we use the approximate randomization test .","for assessing significance , we apply the approximate randomization test ." the universal dependencies project has produced a languageindependent but extensible standard for morphological and syntactic annotation using a formalism based on dependency grammar .,the universal dependencies project seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages . "in this paper , we propose an approach to solve a significant problem : how to learn distinguishable representations from word sequences .","in this paper , we propose a novel cascade model , which can capture both the latent semantics and latent similarity by modeling mooc data ." "table 2 shows size of the inferred mdl-based pb models , and bleu score of their translations of the tune and test partitions .",table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset . 
"zhou et al further extend it to context-sensitive shortest pathenclosed tree , which includes necessary predicate-linked path information .","zhou et al further propose context-sensitive spt , which can dynamically determine the tree span by extending the necessary predicate-linked path information outside spt ." "stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely .","stance detection is the task of determining whether the author of a text is in favor or against a given topic , while rejecting texts in which neither inference is likely ." "relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .",relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments . the srilm toolkit was used for training the language models using kneser-ney smoothing .,unpruned language models were trained using lmplz which employs modified kneser-ney smoothing . "collobert et al used word embeddings as the input of various nlp tasks , including part-of-speech tagging , chunking , ner , and semantic role labeling .",collobert et al showed that a neural model could achieve close to state-of-the-art results in part of speech tagging and chunking by relying almost only on word embeddings learned with a language model . twitter is a widely used social networking service .,"twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” ." we used the disambig tool provided by the srilm toolkit .,the srilm toolkit was used to build this language model . the srilm toolkit was used to build the trigram mkn smoothed language model .,the target-side language models were estimated using the srilm toolkit . we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation .,we used the support vector machine implementation from the liblinear library on the test sets and report the results in table 4 . we used crfsuite and the glove word vector .,"for input representation , we used glove word embeddings ." "we used the open source moses decoder package for word alignment , phrase table extraction and decoding for sentence translation .","for the training of the smt model , including the word alignment and the phrase translation table , we used moses , a toolkit for phrase-based smt models ." relation extraction is the task of finding semantic relations between entities from text .,relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) . part-of-speech ( pos ) tagging is a job to assign a proper pos tag to each linguistic unit such as word for a given sentence .,"part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax ." all the language models are built with the sri language modeling toolkit .,language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . we trained a 4-gram language model on this data with kneser-ney discounting using srilm .,we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit . 
the decoder uses cky-style parsing with cube pruning to integrate the language model .,the decoder uses a cky-style parsing algorithm and cube pruning to integrate the language model scores . target language models were trained on the english side of the training corpus using the srilm toolkit .,the n-gram language models are trained using the srilm toolkit or similar software developed at hut . we minimize cross-entropy loss over all 42 relations using adagrad .,"to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad ." the translation results are evaluated with case insensitive 4-gram bleu .,translation performances are measured with case-insensitive bleu4 score . a 5-gram language model of the target language was trained using kenlm .,"we used 4-gram language models , trained using kenlm ." "we use the webquestions dataset as our main dataset , which contains 5,810 question-answer pairs .","we evaluate our semantic parser on the webques-tions dataset , which contains 5,810 question-answer pairs ." the framenet database provides an inventory of semantic frames together with a list of lexical units associated with these frames .,"the framenet corpus is a collection of semantic frames , together with a corpus of documents annotated with these frames ." "finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data .",we built a trigram language model with kneser-ney smoothing using kenlm toolkit . "to the best of our knowledge , this is the first time that the “ benefit of depths ” was shown for convolutional neural networks .","to the best of our knowledge , this is the first time that very deep convolutional nets have been applied to text processing ." a 4-gram language model was trained on the monolingual data by the srilm toolkit .,language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . the penn discourse treebank is the largest available discourse-annotated corpus in english .,one of the very few available discourse annotated corpora is the penn discourse treebank in english . we first use bleu score to perform automatic evaluation .,we evaluated the system using bleu score on the test set . "in this paper , we propose a novel unsupervised model , sentiment distribution consistency regularized .","in this paper , we model our problem in the framework of posterior regularization ." we use the pre-trained word2vec embeddings provided by mikolov et al as model input .,"for all three classifiers , we used the word2vec 300d pre-trained embeddings as features ." miwa and bansal adopt a bidirectional dependency tree-lstm model by introducing a top-down lstm path .,miwa and bansal adopted a bidirectional tree lstm model to jointly extract named entities and relations under a dependency tree structure . we used the statistical japanese dependency parser cabocha for parsing .,"for j-e translation , we used the cabocha parser to analyze the context document ." we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .,"we trained two 5-gram language models on the entire target side of the parallel data , with srilm ." the srilm toolkit was used to build this language model .,the language models were built using srilm toolkits .
we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting . modified kneser-ney trigram models are trained using srilm upon the chinese portion of the training data .,the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data . "sarcasm is defined as ‘ a cutting , often ironic remark intended to express contempt or ridicule ’ 1 .",sarcasm is a form of verbal irony that is intended to express contempt or ridicule . we use the scikit-learn machine learning library to implement the entire pipeline .,we used the svm implementation provided within scikit-learn . the weights of the different feature functions were optimised by means of minimum error rate training on the 2008 test set .,the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set . case-insensitive bleu4 was used as the evaluation metric .,the case insensitive nist bleu-4 metric is adopted for evaluation . "semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .",semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence . "we used moses , a phrase-based smt toolkit , for training the translation model .",for training the translation model and for decoding we used the moses toolkit . the language model is a 5-gram lm with modified kneser-ney smoothing .,we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing . our source for syntactically annotated training data was the penn treebank .,we automatically produced training data from the penn treebank . "in addition , we can use pre-trained neural word embeddings on large scale corpus for neural network initialization .","this approach benefits from large unsupervised corpora , that can be used to learn effective word embeddings ." badjatiya et al used an lstm model with features extracted by character n-grams for hate speech detection .,badjatiya et al presented a gradient boosted lstm model with random embeddings to outperform state of the art hate speech detection techniques . "there is no data like more data , performance improves log-linearly with the number of parameters ( unique n-grams ) .","there is no data like more data , performance improves log-linearly with the number of parameters ( unique n-grams ) ." the translation technology used in our system is based on the well-known phrase-based translation statistical approach .,we implement our approach in the framework of phrase-based statistical machine translation . the annotation scheme leans on the universal stanford dependencies complemented with the google universal pos tagset and the interset interlingua for morphological tagsets .,"the annotation is based on the google universal part-ofspeech tags and the stanford dependencies , adapted and harmonized across languages ." "parsing { 2 } is the same as searching for an s node that dominates the entire string , ie .",parsing is the task of reconstructing the syntactic structure from surface text .
"we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors .","with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings ." we use the moses package to train a phrase-based machine translation model .,we used a phrase-based smt model as implemented in the moses toolkit . the parameters of the systems were tuned using mert to optimize bleu on the development set .,the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training . "for the optimization process , we apply the diagonal variant of adagrad with mini-batches .",we train the parameters of the stages separately using adagrad with the perceptron loss function . "morphological analysis is the task of segmenting a word into morphemes , the smallest meaning-bearing elements of natural languages .","morphological analysis is the basis for many nlp applications , including syntax parsing , machine translation and automatic indexing ." "in this study , we propose to leverage both the information in the source language .",the existing methods use only the information in either language side . in our experiments we used 5-gram language models trained with modified kneser-ney smoothing using kenlm toolkit .,we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words . xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model .,li et al used a latent dirichlet allocation model to generate topic distribution features as the news representations . "in our work , we build on lda , which is often used as a building block for topic models .","in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts ." "the most common word embeddings used in deep learning are word2vec , glove , and fasttext .",word embedding approaches like word2vec or glove are powerful tools for the semantic analysis of natural language . we report the mt performance using the original bleu metric .,"for this task , we use the widely-used bleu metric ." the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit .,the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data . "word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .","word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 ) ." the development set is used to optimize feature weights using the minimum-error-rate algorithm .,the component features are weighted to minimize a translation error criterion on a development set . "twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments .","twitter 1 is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages ." "named entity disambiguation ( ned ) is the task of determining which concrete person , place , event , etc . 
is referred to by a mention .","named entity disambiguation is the task of linking entity mentions to their intended referent , as represented in a knowledge base , usually derived from wikipedia ." for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .,"for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus ." we trained a 4-gram language model on this data with kneser-ney discounting using srilm .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . semantic role labeling ( srl ) is the process of producing such a markup .,semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . srilm toolkit is used to build these language models .,the language models were trained using srilm toolkit . "word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace .",word segmentation is the foremost obligatory task in almost all the nlp applications where the initial phase requires tokenization of input into words . as word vectors the authors use word2vec embeddings trained with the skip-gram model .,all word vectors are trained on the skipgram architecture . we substitute our language model and use mert to optimize the bleu score .,"we report decoding speed and bleu score , as measured by sacrebleu ." socher et al used an rnn-based architecture to generate compositional vector representations of sentences .,socher et al present a model for compositionality based on recursive neural networks . standard vector space models of semantics are based in a term-document or word-context matrix .,traditional semantic space models represent meaning on the basis of word co-occurrence statistics in large text corpora . "for example , turian et al have improved the performance of chunking and named entity recognition by using word embedding also as one of the features in their crf model .","turian et al , for example , used embeddings from existing language models as unsupervised lexical features to improve named entity recognition and chunking ." "the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text . semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation .,"semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding ." some researchers used similarity and association measures to build alignment links .,some researchers use similarity and association measures to build alignment links . "sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .",sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text . 
sentiment analysis is a recent attempt to deal with evaluative aspects of text .,"sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text ." "for adjusting feature weights , the mert method was applied , optimizing the bleu-4 metric obtained on the development corpus .",the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion . the lms are build using the srilm language modelling toolkit with modified kneserney discounting and interpolation .,"the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit ." "specifically , we tested the methods word2vec using the gensim word2vec package and pretrained glove word embeddings .","we also used pre-trained word embeddings , including glove and 300d fasttext vectors ." we rely on distributed representation based on the neural network skip-gram model of mikolov et al .,we use the pre-trained 300-dimensional word2vec vectors by mikolov et al and mikolov et al . one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language .,sentiment analysis is the task of automatically identifying the valence or polarity of a piece of text . we initialize the vectors corresponding to words in our input layer with 100-dimensional vectors generated by a word2vec model trained on over one million words from the pubmed central article repository .,we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings . xiao et al introduce a topic similarity model to select the synchronous rules for hierarchical phrase-based translation .,xiao et al propose a topic-based similarity model for rule selection in hierarchical phrasebased translation . as a baseline system for our experiments we use the syntax-based component of the moses toolkit .,we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality . all annotations were carried out with the brat rapid annotation tool .,the annotation was performed using the brat 2 tool . the stanford parser was used to generate the dependency parse information for each sentence .,the syntax tree features were calculated using the stanford parser trained using the english caseless model . bengio et al proposed neural probabilistic language model by using a distributed representation of words .,bengio et al proposed a probabilistic neural network language model for word representations . the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool .,all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing . the model weights were trained using the minimum error rate training algorithm .,the decoding weights were optimized with minimum error rate training . part-of-speech ( pos ) tagging is a job to assign a proper pos tag to each linguistic unit such as word for a given sentence .,part-of-speech ( pos ) tagging is the task of assigning each of the words in a given piece of text a contextually suitable grammatical category . we used the scikit-learn library the svm model .,we used the svd implementation provided in the scikit-learn toolkit . 
we use pre-trained vectors from glove for word-level embeddings .,"for english , we use the pre-trained glove vectors ." "gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , newscommentary , and the gigaword corpora .",the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . "we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .",we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization . we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .,"in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm ." coreference resolution is the process of linking together multiple expressions of a given entity .,"coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts ." and we use sri language modeling toolkit to tune our feature weights .,we use the sdsl library to implement all our structures and compare our indexes to srilm . "word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .",word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word . "the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit .",unpruned language models were trained using lmplz which employs modified kneser-ney smoothing . coreference resolution is the process of linking together multiple expressions of a given entity .,coreference resolution is a well known clustering task in natural language processing . metanet ’ s aims of increasing communication between citizens of different european countries .,the meta-net project aims to ensure equal access to information by all european citizens . "as described in this paper , we propose a new automatic evaluation method for machine translation .","as described herein , we proposed a new automatic evaluation method for machine translation ." "for our experiments , we used the latent variablebased berkeley parser .",we adopt berkeley parser 1 to train our sub-models . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,"we trained two 5-gram language models on the entire target side of the parallel data , with srilm ." "in spite of the small-scale of training set , our approach outperforms the state-of-the-art systems in nlp & cc 2013 clsc .",experiments on nlp & cc 2013 clsc dataset show that our approach outperforms the state-of-the-art systems . curran and lin use syntactic features in the vector definition .,"pereira , curran and lin use syntactic features in the vector definition ." "feature weight tuning was carried out using minimum error rate training , maximizing bleu scores on a held-out development set .",parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set . 
taxonomies which serve as backbone of structured knowledge are useful for many applications such as question answering and document clustering .,"taxonomies , which serve as backbones for structured knowledge , are useful for many nlp applications such as question answering and document clustering ." the lstm model is developed to solve the gradient vanishing or exploding problems in the rnn .,the lstm addresses the problem by re-parameterizing the rnn model . ccg is a lexicalized grammar formalism in which every constituent in a sentence is associated with a structured category that specifies its syntactic relationship to other constituents .,"ccg is a strongly lexicalized formalism , in which every word is associated with a syntactic category ( similar to an elementary syntactic structure ) indicating its subcategorization potential ." "in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .","we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words ." we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .,we used minimum error rate training to tune the feature weights for maximum bleu on the development set . "negation is a linguistic phenomenon present in all languages ( cite-p-12-3-6 , cite-p-12-1-5 ) .",negation is a grammatical category that comprises devices used to reverse the truth value of propositions . nlp researchers are especially well-positioned to contribute to the national discussion about gun violence .,these nlp tools have the potential to make a marked difference for gun violence researchers . we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit .,we also use a 4-gram language model trained using srilm with kneser-ney smoothing . "to evaluate our method , we use the webquestions dataset , which contains 5,810 questions crawled via google suggest api .","we evaluate our semantic parser on the webques-tions dataset , which contains 5,810 question-answer pairs ." "for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .",we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm 4 toolkit with modified kneser-ney smoothing . we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit .,"we tokenized , cleaned , and truecased our data using the standard tools from the moses toolkit ." "kalchbrenner et al propose a convolutional architecture for sentence representation that vertically stacks multiple convolution layers , each of which can learn independent convolution kernels .","kalchbrenner et al , 2014 ) proposes a cnn framework with multiple convolution layers , with latent , dense and low-dimensional word embeddings as inputs ." coreference resolution is the process of linking multiple mentions that refer to the same entity .,"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." 
"to measure translation accuracy , we use the automatic evaluation measures of bleu and ribes measured over all sentences in the test corpus .","we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." rosa et al and mare膷ek et al applied ape on english-to-czech mt outputs on morphological level .,rosa et al and mare膷ek et al applied a rule-based approach to ape of english-czech mt outputs on the morphological level . we used srilm to build a 4-gram language model with kneser-ney discounting .,we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .,"we trained two 5-gram language models on the entire target side of the parallel data , with srilm ." germanet is a lexical semantic network that is modeled after the princeton wordnet for english .,"italwordnet is a lexical semantic database based on eurowordnet lexical model which , in its turn , is inspired from princeton wordnet ." the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . "a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit .","a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit , ." feature weights were trained with minimum error-rate training on the news-test2008 development set using the dp beam search decoder and the mert implementation of the moses toolkit .,the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion . "we train 300 dimensional word embedding using word2vec on all the training data , and fine-turning during the training process .",we use word embeddings of dimension 100 pretrained using word2vec on the training dataset . coreference resolution is the task of grouping mentions to entities .,"coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." "semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) .","semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ” ." "for estimating the monolingual we , we use the cbow algorithm as implemented in the word2vec package using a 5-token window .","in this run , we use a sentence vector derived from word embeddings obtained from word2vec ." "we use the best performing model amongst those tested by baroni and colleagues , which has been constructed with word2vec 5 using the cbow approach proposed by mikolov et al .","as a strong baseline , we trained the skip-gram model of mikolov et al using the publicly available word2vec 5 software ." 
"mihalcea et al compared knowledgebased and corpus-based methods , using word similarity and word specificity to define one general measure of text semantic similarity .","mihalcea et al proposed a method to measure the semantic similarity of words or short texts , considering both corpus-based and knowledge-based information ." "sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .",sentiment analysis is a research area in the field of natural language processing . "we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word .","for the word-embedding based classifier , we use the glove pre-trained word embeddings ." the lexicalized reordering model was trained with the msd-bidirectionalfe option .,a lexicalized reordering model was trained with the msd-bidirectional-fe option . we use srilm for n-gram language model training and hmm decoding .,we implement an in-domain language model using the sri language modeling toolkit . twitter is a huge microbloging service with more than 500 million tweets per day 1 from different locations in the world and in different languages .,twitter is a microblogging service that has 313 million monthly active users 1 . "for regularization , dropout is applied to the input and hidden layers .","for regularization , dropout is applied to each layer ." active learning approach can effectively avoid this problem .,this scenario posits new challenges to active learning . reading comprehension ( rc ) is a high-level task in natural language understanding that requires reading a document and answering questions about its content .,"reading comprehension ( rc ) is a language understanding task similar to question answering , where a system is expected to read a given passage of text and answer questions about it ." we use bleu and meteor for our automatic metric-based evaluation .,we used bleu and meteor for extrinsic evaluation . the word embeddings can provide word vector representation that captures semantic and syntactic information of words .,word embedding models are aimed at learning vector representations of word meaning . coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .,coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity . "as for ej translation , we use the stanford parser to obtain english abstraction trees .",we use the stanford dependency parser with the collapsed representation so that preposition nodes become edges . "coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities .",coreference resolution is the task of determining which mentions in a text refer to the same entity . "additionally , lexical substitution is a more natural task than similarity ratings , it makes it possible to evaluate meaning composition at the level of individual words , and provides a common ground to compare cdsms with dedicated lexical substitution models .","lexical substitution is a more natural task , enables us to evaluate meaning composition at the level of individual words , and provides a common ground to compare cdsms with dedicated lexical substitution models ." 
dredze et al showed the possibility that many parsing errors in the domain adaptation tasks came from inconsistencies between annotation manners of training resources .,"dredze et al , show that domain adaptation is hard for dependency parsing based on results in the conll 2007 shared task ." we use five datasets from the conll-x shared task .,we used datasets distributed for the 2006 and 2007 conll shared tasks . we also use glove vectors to initialize the word embedding matrix in the caption embedding module .,we train randomly initialized word embeddings of size 500 for the dialog model and use 300 dimentional glove embeddings for reranking classifiers . we use liblinear with l2 regularization and default parameters to learn a model .,we use liblinear logistic regression module to classify document-level embeddings . we evaluate our system using bleu and ter .,we report bleu and ter evaluation scores . "we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .",we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . yu and chen proposed to use conditional random field to detect chinese word ordering errors .,peng et al achieved better results by using a conditional random field model . it is a standard phrasebased smt system built using the moses toolkit .,the smt system is implemented using moses and the nmt system is built using the fairseq toolkit . "for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm .","after standard preprocessing of the data , we train a 3-gram language model using kenlm ." we train a recurrent neural network language model on a large collection of tweets .,we model the generative architecture with a recurrent language model based on a recurrent neural network . a 4-gram language model was trained on the monolingual data by the srilm toolkit .,the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit . "more recently , mikolov et al propose two log-linear models , namely the skip-gram and cbow model , to efficiently induce word embeddings .","recently , mikolov et al proposed novel model architectures to compute continuous vector representations of words obtained from very large data sets ." word embeddings are critical for high-performance neural networks in nlp tasks .,continuous representation of words and phrases are proven effective in many nlp tasks . we extract our paraphrase grammar from the french-english portion of the europarl corpus .,we use the aligned english and german sentences in europarl for our experiments . part-of-speech tagging is a crucial preliminary process in many natural language processing applications .,part-of-speech tagging is the act of assigning each word in a sentence a tag that describes how that word is used in the sentence . "in this paper , we propose the use of autoencoders based on long short term memory neural networks for capturing long distance relationships between phonemes in a word .",we are also interested in using long short-term memory neural networks to better model the locality of propagated information from the stack and queue . 
the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus .,the skip-gram model is a very popular technique for learning embeddings that scales to huge corpora and can capture important semantic and syntactic properties of words . we evaluate our models using the standard bleu metric 2 on the detokenized translations of the test set .,we evaluate the translation quality using the case-sensitive bleu-4 metric . we implement our approach in the framework of phrase-based statistical machine translation .,we work with the phrase-based smt framework as the baseline system . we used kenlm with srilm to train a 5-gram language model based on all available target language training data .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." "transliteration is a key building block for multilingual and cross-lingual nlp since it is essential for ( i ) handling of names in applications like machine translation ( mt ) and cross-lingual information retrieval ( clir ) , and ( ii ) user-friendly input methods .",transliteration is a key building block for multilingual and cross-lingual nlp since it is useful for user-friendly input methods and applications like machine translation and cross-lingual information retrieval . le and mikolov extends the neural network of word embedding to learn the document embedding .,"recently , le and mikolov exploit neural networks to learn continuous document representation from data ." named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type .,named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text . we used the dataset from the conll shared task for cross-lingual dependency parsing .,the data sets used are taken from the conll-x shared task on multilingual dependency parsing . we apply statistical significance tests using the paired bootstrapped resampling method .,"to see whether an improvement is statistically significant , we also conduct significance tests using the paired bootstrap approach ." we used maltparser to derive syntactic dependency relations in english .,we used the malt parser to obtain source english dependency trees and the stanford parser for arabic . conditional random fields are discriminative structured classification models for sequential tagging and segmentation .,conditional random fields are discriminatively-trained undirected graphical models that find the globally optimal labeling for a given configuration of random variables . we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .,we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training . "named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature .",named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text . 
"in order to deal with the evolutionary nature of the problem , nepveu et al propose an imt system with dynamic adaptation via cache-based model extensions for language and translation models .","an early attempt can be found in nepveu et al , where dynamic adaptation of an imt system via cache-based model extensions to language and translation models is proposed ." we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." luong et al created a hierarchical language model that uses rnn to combine morphemes of a word to obtain a word representation .,"luong et al , 2013 ) utilized recursive neural networks in which inputs are morphemes of words ." "we cast the problem of event property extraction as a sequence labeling task , using conditional random fields for learning and inference .","specifically , we adopt linear-chain conditional random fields as the method for sequence labeling ." word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .,word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context . relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .,relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . we use pre-trained glove vector for initialization of word embeddings .,we use the glove word vector representations of dimension 300 . "the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context . most existing works are based on variants and extensions of lda .,"existing works are based on two basic models , plsa and lda ." relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .,"relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text ." "for the language model we use the corpus of 60,000 simple english wikipedia articles 3 and build a 3-gram language model with kneser-ney smoothing trained with srilm .",we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . "as described by joshi et al , recent approaches to irony can roughly be classified as either rule-based or machine learning-based .","as described by joshi , bhattacharyya , and carman , irony modeling approaches can roughly be classified into rule-based and machine learning methods ." "meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens .","for the word-embedding based classifier , we use the glove pre-trained word embeddings ." 
"sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .",sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) . "to quantify it , we train a word2vec model on a mid-2011 copy of english wikipedia .",we learn our word embeddings by using word2vec 3 on unlabeled review data . we used trigram language models with interpolated kneser-kney discounting trained using the sri language modeling toolkit .,"we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting , ." "if the anaphor is a pronoun , the cache is searched for a plausible referent .","if the anaphor is a definite noun phrase and the referent is in focus ( i.e . in the cache ) , anaphora resolution will be hindered ." "further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .","for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit ." morfessor 2.0 is a new implementation of the morfessor baseline algorithm .,"morfessor 2.0 is a rewrite of the original , widely-used morfessor 1.0 software , with well documented command-line tools and library interface ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we trained a 4-gram language model on this data with kneser-ney discounting using srilm . coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .,coreference resolution is a field in which major progress has been made in the last decade . we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .,we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit . "dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community and has been used for many problems ranging from machine translation ( cite-p-12-1-4 ) to question answering ( zhou et al. , 2011a ) .","dependency parsing is a crucial component of many natural language processing ( nlp ) systems for tasks such as relation extraction ( cite-p-15-1-5 ) , statistical machine translation ( cite-p-15-5-7 ) , text classification ( o ? zgu ? r and gu ? ngo ? r , 2010 ) , and question answering ( cite-p-15-3-0 ) ." "huang et al , 2012 ) used the multi-prototype models to learn the vector for different senses of a word .",huang et al further extended this context clustering method and incorporated global context to learn multi-prototype representation vectors . zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification .,"zeng et al proposed an approach for relation classification where sentence-level features are learned through a cnn , which has word embedding and position features as its input ." mintz et al proposed a distant supervision approach for relation extraction using a richfeatured logistic regression model .,distant supervision as a learning paradigm was introduced by mintz et al for relation extraction in general domain . 
all models used interpolated modified kneser-ney smoothing .,the language model is a 5-gram with interpolation and kneserney smoothing .
"since our dataset is not so large , we make use of pre-trained word embeddings , which are trained on a much larger corpus with word2vec toolkit .",we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features .
"unlike dong et al , we initialize our word embeddings using a concatenation of the glove and cove embeddings .",our word embeddings is initialized with 100-dimensional glove word embeddings .
we use conditional random field sequence labeling as described in .,we use conditional random fields for sequence labelling .
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity .
"meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens .","for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm ."
"to exploit these kind of labeling constraints , we resort to conditional random fields .","as a classifier , we choose a first-order conditional random field model ."
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing .
"relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .",relation extraction is a fundamental task in information extraction .
we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora .,"for the classification task , we use pre-trained glove embedding vectors as lexical features ."
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ."
"for word-level embeddings , we pre-train the word vectors using word2vec on the gigaword corpus mentioned in section 4 , and the text of the training dataset .","due to their ability to capture syntactic and semantic information of words from large scale unlabeled texts , we pre-train the word embeddings from the given training dataset by word2vec toolkit ."
coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .,"coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model ."
cite-p-20-1-5 extended work by adding several more subsystems in this error model .,cite-p-20-1-16 extended the above model to handle other types of non-standard words .
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) .,coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
table 1 shows the translation performance by bleu .,table 4 shows the bleu scores of the output descriptions .
"for the sick and msrvid experiments , we used 300-dimension glove word embeddings .","in our experiments , we choose to use the published glove pre-trained word embeddings ." tai et al and zhu et al extended sequential lstms to tree-structured lstms by adding branching factors .,"tai et al , and le and zuidema extended sequential lstms to tree-structured lstms by adding branching factors ." a joint probability model for phrase translation was proposed by marcu and wong .,marcu and wong proposed a phrase-based context-free joint probability model for lexical mapping . "following the current practice in evaluating summarization , particularly duc 3 , we use the rouge evaluation package .","we use the rouge 1 to evaluate our framework , which has been widely applied for summarization evaluation ." coreference resolution is the task of determining which mentions in a text refer to the same entity .,"coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts ." coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .,coreference resolution is the process of linking multiple mentions that refer to the same entity . "we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .","we utilize the google news dataset created by mikolov et al , which consists of 300-dimensional vectors for 3 million words and phrases ." "to this end , we use first-and second-order conditional random fields .","specifically , we adopt linear-chain conditional random fields as the method for sequence labeling ." "a sentiment lexicon is a list of words and phrases , such as “ excellent ” , “ awful ” and “ not bad ” , each of them is assigned with a positive or negative score reflecting its sentiment polarity and strength ( cite-p-18-3-8 ) .","a sentiment lexicon is a list of words and phrases , such as excellent , awful and not bad , each is being assigned with a positive or negative score reflecting its sentiment polarity ." "for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .",we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .,srilm toolkit was used to create up to 5-gram language models using the mentioned resources . word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context .,word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . qiu et al propose a double propagation method to extract opinion word and opinion target simultaneously .,qiu et al propose double propagation to expand opinion targets and opinion words lists in a bootstrapping way . 
we preprocessed all aligned translations by means of the treetagger tool that outputs part-of-speech and 55 lemma information .,"we used the treetagger tool to extract part-of-speech from each given text , then tokenize and lemmatize it ."
we use the open-source moses toolkit to build four arabic-english phrase-based statistical machine translation systems .,"we used the phrase-based smt model , as implemented in the moses toolkit , to train an smt system translating from english to arabic ."
"we begin by building two word alignment models using the berkeley aligner , a state-of-the-art word alignment package that relies on ibm models 1 and 2 and an hmm .","we begin by building two word alignment models using the berkeley aligner , a state-of-the-art word alignment package that relies on ibm mixture models 1 and 2 and an hmm ."
"more recently , matsuo et al presented a method of word clustering based on web counts using a search engine .",matsuo et al presented a graph clustering algorithm for word clustering based on word similarity measures by web counts .
pre-trained word embeddings were shown to boost the performance in various nlp tasks and specifically in ner .,unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks .
"to convert into a distributed representation here , a neural network for word embedding learns via the skip-gram model .","with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings ."
the evaluation metric for the overall translation quality was case-insensitive bleu4 .,translation performances are measured with case-insensitive bleu4 score .
"we use conditional random fields , a popular approach to solve sequence labeling problems .","specifically , we adopt linear-chain conditional random fields as the method for sequence labeling ."
we parsed the corpus with rasp and with the stanford pcfg parser .,we added part of speech and dependency triple annotations to this data using the stanford parser .
we pre-train the word embeddings using word2vec .,"for feature building , we use word2vec pre-trained word embeddings ."
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,the srilm toolkit was used to build the 5-gram language model .
"more recently , features drawn from word embeddings have been shown to be effective in various text classification tasks such as sentiment analysis and named entity recognition .","recent studies focuses on learning word embeddings for specific tasks , such as sentiment analysis and dependency parsing ."
yarowsky used the one sense per collocation property as an essential ingredient for an unsupervised word-sense disambiguation algorithm .,"yarowsky proposes a method for word sense disambiguation , which is based on monolingual bootstrapping ."
word2vec is a prediction-based distributional model in which a word representation is obtained from a neural network trying to predict a word from its context or vice-versa .,word2vec is the method to obtain distributed representations for a word by using neural networks with one hidden layer .
this approach has already been used with great success in the domain of language models .,"in particular , neural language models have demonstrated impressive performance at the task of language modeling ."
we model the task of finding commonalities from student answers in a manner similar to the sequential pattern mining problem .,"in the first step , we pose a variant of sequential pattern mining problem to identify sequential word patterns that are more common among student answers ."
"further , the word embeddings are initialized with glove , and not tied with the softmax weights .",the word embeddings are identified using the standard glove representations .
"relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .",relation extraction is the task of detecting and classifying relationships between two entities from text .
the subtask of aspect category detection obtains the best result when applying the boosting method .,the subtask of aspect category detection obtains the best performance when applying the boosting method on maxent .
dependency parsing is a central nlp task .,dependency parsing is a fundamental task for language processing which has been investigated for decades .
barzilay and mckeown used a corpus-based method to identify paraphrases from a corpus of multiple english translations of the same source text .,barzilay and mckeown extracted both single-and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization .
bansal et al show the benefits of such modified-context embeddings in dependency parsing task .,"using these representations as features , bansal et al obtained improvements in dependency recovery in the mst parser ."
we use the weka toolkit and the derived features to train a naive-bayes classifier .,we used the mallet toolkit for generating topic distribution vectors and the weka package for the classification tasks .
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .,relation extraction is the task of finding relationships between two entities from text .
"in previous approaches , the features are collected from corpora , those we make use of are retrieved from the lexicon entries .","also , while in previous approaches , the features are collected from corpora , those we make use of are retrieved from the lexicon entries ."
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .,we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
"sentiment analysis ( sa ) is a hot-topic in the academic world , and also in the industry .","sentiment analysis ( sa ) is the determination of the polarity of a piece of text ( positive , negative , neutral ) ."
xu et al and min et al improve the quality of distant supervision training data by reducing false negative examples .,"furthermore , xu et al correct false negative instances by using pseudo-relevance feedback to expand the origin knowledge base ."
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
"as a classifier , we employ support vector machines as implemented in svm light .",the classifier we use in this paper is support vector machines in the implementation of svm light .
we used adam optimizer with its standard parameters .,we used adam for optimization of the neural models .
the translation results are evaluated with case insensitive 4-gram bleu .,the translation quality is evaluated by case-insensitive bleu-4 .
"named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .",named entity recognition ( ner ) is a challenging learning problem .
"for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus .",we also used word2vec to generate dense word vectors for all word types in our learning corpus .
more recently wang et al proposed to train a conditional random field using an entropy-based regularizer .,shen et al extended the hmm-based approach to make it discriminative by making use of conditional random fields .
"long-short term memory networks have been proposed to solve this issue , and so we employ them .",an effective solution for these problems is the long short-term memory architecture .
the annotation scheme is based on an evolution of stanford dependencies and google universal part-of-speech tags .,"the annotation scheme is based on an evolution of stanford dependencies , google universal part-of-speech tags , and the interset interlingua for morphosyntactic tagsets ."
we trained a 4-gram language model on this data with kneser-ney discounting using srilm .,we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
"we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .",we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
"morphological disambiguation is a useful first step for higher level analysis of any language but it is especially critical for agglutinative languages like turkish , czech , hungarian , and finnish .",morphological disambiguation is the task of selecting the correct morphological parse for a given word in a given context .
sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 ) .,sentiment analysis is a research area in the field of natural language processing .
we use the liblinear tool as our svm implementation .,we used the svm implementation of scikit learn .
we used a logistic regression classifier provided by the liblinear software .,we used l2-regularized logistic regression classifier as implemented in liblinear .
the model weights are automatically tuned using minimum error rate training .,the smt weighting parameters were tuned by mert using the development data .
"we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .",we build a trigram language model per prompt for the english data using the srilm toolkit and measure the perplexity of translated german answers under that language model .
as a software we use srilm with the default algorithm .,we use srilm with its default parameters for this purpose .
"for this reason , we used glove vectors to extract the vector representation of words .","we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word ."
we present an active sentiment domain adaptation approach to train accurate sentiment classifier for target domain with less labeled samples .,extensive experiments on benchmark datasets show that our approach can train accurate sentiment classifier with less labeled samples .
the formally syntax-based models use synchronous context-free grammar but induce a grammar from a parallel text without relying on any linguistic annotations or assumptions .,an hierarchical phrase-based model is a powerful method to cover any format of translation pairs by using synchronous context free grammar .
"table 2 shows the blind test results using bleu-4 , meteor and ter .","table 1 summarizes test set performance in bleu , nist and ter ."
we segment english and chinese tokens into subwords via byte-pair encoding .,we use byte pair encoding with 45k merge operations to split words into subwords .
we used the implementation of the scikit-learn 2 module .,"we use the svm implementation from scikit-learn , which in turn is based on libsvm ."
"katiyar and cardie proposed a recurrent neural network to extract features to learn an hypergraph structure of nested mentions , using a bilou encoding scheme .",katiyar and cardie proposed a neural network-based approach that learns hypergraph representation for nested entities using features extracted from a recurrent neural network .
"however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit , stolcke , 2002 and ldc english gigaword corpora .",we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding .
pereira et al suggested deterministic annealing to cluster verb-argument pairs into classes of verbs and nouns .,"pereira et al cluster nouns according to their distribution as direct objects of verbs , using information-theoretic tools ."
"we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing .",as a baseline system for our experiments we use the syntax-based component of the moses toolkit .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .,we apply the 3-phase learning procedure proposed by where we first create word embeddings based on the skip-gram model .
we measure translation quality via the bleu score .,we evaluate the translation quality using the case-sensitive bleu-4 metric .
we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ."
"sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) .","sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media ."
"guo et al , 2014 ) explored bilingual resources to learn sense-specific word representation .","guo et al , 2014 ) considers bilingual datasets to learn sense-specific word representations ."
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .,"the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ."
the nodes are concepts ( or synsets as they are called in the wordnet ) .,wordnet is a key lexical resource for natural language applications .
we trained the machine translation toolkit moses to translate groups of letters rather than groups of words .,we used the moses toolkit to train the phrase tables and lexicalized reordering models .
we used the sri language modeling toolkit for this purpose .,we used the srilm toolkit to generate the scores with no smoothing .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .,our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing .
for entity tagging we used a maximum entropy model .,"for learning coreference decisions , we used a maximum entropy model ."
all models used interpolated modified kneser-ney smoothing .,this type of features are based on a trigram model with kneser-ney smoothing .
we will show translation quality measured with the bleu score as a function of the phrase table size .,we use case-sensitive bleu-4 to measure the quality of translation result .
"we have presented here a new methodology for acquiring comprehensive multiword lexicons from large corpora , using competition .","here , we present an effective , expandable , and tractable new approach to comprehensive multiword lexicon acquisition ."
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .,"the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm ."
the model weights were trained using the minimum error rate training algorithm .,the standard minimum error rate training algorithm was used for tuning .
the srilm toolkit was used to build the trigram mkn smoothed language model .,the language model component uses the srilm lattice-tool for weight assignment and nbest decoding .
sagae and tsujii used an ensemble to select high-quality dependency parses .,sagae and tsujii applied the standard co-training method for dependency parsing .
such as wordnet ( cite-p-11-1-13 ) with subjectivity labels could support better subjectivity analysis .,adding subjectivity labels to wordnet could also support automatic subjectivity analysis .
the system was trained using the moses toolkit .,it is a standard phrasebased smt system built using the moses toolkit .
we used a phrase-based smt model as implemented in the moses toolkit .,we implement the pbsmt system with the moses toolkit .
transition-based and graph-based models have attracted the most attention of dependency parsing in recent years .,various recent attempts have been made to include non-local features into graph-based dependency parsing .
we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ."
we use the word2vec tool to pre-train the word embeddings .,we used the google news pretrained word2vec word embeddings for our model .
the log-linear combination weights were optimized using mert .,the log-linear parameter weights are tuned with mert on the development set .
reviews demonstrate that proposed method outperforms the template extraction based algorithm .,the experimental results demonstrate that our approach outperforms the template extraction based approaches .
we substitute our language model and use mert to optimize the bleu score .,we apply standard tuning with mert on the bleu score .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .,"relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text ."
word embeddings are considered one of the key building blocks in natural language processing and are widely used for various applications .,continuous-valued vector representation of words has been one of the key components in neural architectures for natural language processing .
"semantic textual similarity is the task of judging the similarity of a pair of sentences on a scale from 0 to 5 , and was recently introduced as a semeval task .",semantic textual similarity is the task of judging the similarity of a pair of sentences on a scale from 1 to 5 .
"in this paper , we describe an approach which overcomes this problem .","in this paper , we describe an approach which overcomes this problem using dictionary definitions ."
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
we use the glove vector representations to compute cosine similarity between two words .,our word embeddings is initialized with 100-dimensional glove word embeddings .
duh et al used a neural network based language model trained on a small in-domain corpus to select from a larger data pool .,duh et al employed the method of and further explored neural language model for data selection rather than the conventional n-gram language model .
"sentiment classification is the task of identifying the sentiment polarity ( e.g. , positive or negative ) of a natural language text towards a given topic ( cite-p-18-1-19 , cite-p-18-3-1 ) and has become the core component of many important applications in opinion analysis ( cite-p-18-1-2 , cite-p-18-1-10 , cite-p-18-1-15 , cite-p-18-3-4 ) .","sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text ."
"for nb and svm , we used their implementation available in scikit-learn .","for this task , we used the svm implementation provided with the python scikit-learn module ."
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .,coreference resolution is the task of determining when two textual mentions name the same individual .
the srilm toolkit was used to build the trigram mkn smoothed language model .,the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
a 4-gram language model was trained on the monolingual data by the srilm toolkit .,all language models were trained using the srilm toolkit .
the parameter weights are optimized with minimum error rate training .,the decoding weights are optimized with minimum error rate training to maximize bleu scores .
schütze created sense representations by clustering context representations derived from co-occurrence .,the context clustering approach was pioneered by schütze who used second order co-occurrences to construct the context embedding .
text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 ) .,text categorization is the problem of automatically assigning predefined categories to free text documents .
"for this purpose , we turn to the expectation maximization algorithm .","for training the trigger-based lexicon model , we apply the expectation-maximization algorithm ."
traditional topic models like latent dirichlet allocation have been explored extensively to discover topics from text .,generative models like lda and plsa have been proved to be very successful in modeling topics and other textual information in an unsupervised manner .
"luong et al break words into morphemes , and use recursive neural networks to compose word meanings from morpheme meanings .",luong et al learn word representations based on morphemes that are obtained from an external morphological segmentation system .
"the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit .",these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
"sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .","sentiment analysis is a ‘ suitcase ’ research problem that requires tackling many nlp subtasks , e.g. , aspect extraction ( cite-p-26-3-15 ) , named entity recognition ( cite-p-26-3-6 ) , concept extraction ( cite-p-26-3-20 ) , sarcasm detection ( cite-p-26-3-16 ) , personality recognition ( cite-p-26-3-7 ) , and more ."
we implemented the different aes models using scikit-learn .,we use the selectfrommodel 4 feature selection method as implemented in scikit-learn .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,relation extraction is a core task in information extraction and natural language understanding .
"additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized .","coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task — e.g. , mention/markable detection , anaphor identification — and that require substantial implementation efforts ."
"as our supervised classification algorithm , we use a linear svm classifier from liblinear , with its default parameter settings .",we use the l2-regularized logistic regression of liblinear as our term candidate classifier .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .,a knsmoothed 5-gram language model is trained on the target side of the parallel data with srilm .
"in this paper , we examine the problem of detecting ideological bias .","in addition , we describe an approach to crowdsourcing ideological bias annotations ."
callison-burch et al used paraphrases of the training corpus for translating unseen phrases .,callison-burch et al used pivot languages for paraphrase extraction to handle the unseen phrases for phrase-based smt .
parameters are initialized using the method described by glorot and bengio .,the parameters are initialized by the techniques described in .
the language model is a 5-gram lm with modified kneser-ney smoothing .,unpruned language models were trained using lmplz which employs modified kneser-ney smoothing .
kulkarni et al use neural word embeddings to model the shift in meaning of words such as gay over the last century .,kim et al and kulkarni et al computed the degree of meaning change by applying neural networks for word representation .
"we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn .","finally , we combine all the above features using a support vector regression model which is implemented in scikit-learn ."
we use the treebanks from the conll shared tasks on dependency parsing for evaluation .,for monolingual treebank data we relied on the conll-x and conll-2007 shared tasks on dependency parsing .
"for pos tagging and syntactic parsing , we use the stanford nlp toolkit .","we do perform word segmentation in this work , using the stanford tools ."
we apply srilm to train the 3-gram language model of target side .,we use srilm for training a trigram language model on the english side of the training data .
lexical simplification is a specific case of lexical substitution where the complex words in a sentence are replaced with simpler words .,lexical simplification is a technique that substitutes a complex word or phrase in a sentence with a simpler synonym .
negation is a linguistic phenomenon where a negation cue ( e.g . not ) can alter the meaning of a particular text segment or of a fact .,negation is a grammatical category that comprises devices used to reverse the truth value of propositions .
"word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 ) .",word alignment is a critical first step for building statistical machine translation systems .
"in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit .",the language model is a 5-gram lm with modified kneser-ney smoothing .
an english 5-gram language model is trained using kenlm on the gigaword corpus .,"for language modeling , we use the english gigaword corpus with 5-gram lm implemented with the kenlm toolkit ."
"collobert et al , 2011 ) used word embeddings for pos tagging , named entity recognition and semantic role labeling .",collobert et al used word embeddings as input to a deep neural network for multi-task learning .
"we have also shown that phrase structure trees , even when deprived of the labels , retain in a certain sense .","these results show that phrase structure trees , when viewed in certain ways , have much more descriptive power than one would have thought ." "the xml markup was removed , and the collection was tokenised and split into sentences using bio-specific nlp tools .","the trec documents were converted from html to raw text , and both collections were tokenised using bio-specific nlp tools ." "in this paper , we propose a neural system combination framework , which is adapted from the multi-source nmt model .","in this paper , we propose a novel neural system combination framework for machine translation ." hatzivassiloglou and mckeown used a log-linear regression model to predict the similarity of conjoined adjectives .,hatzivassiloglou and mckeown proposed a method for identifying word polarity of adjectives . for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .,for our implementation we use 300-dimensional part-of-speech-specific word embeddings v i generated using the gensim word2vec package . crowdsourcing is the use of the mass collaboration of internet passersby for large enterprises on the world wide web such as wikipedia and survey companies .,crowdsourcing is a viable mechanism for creating training data for machine translation . the phrase-based translation model uses the con- the baseline lm was a regular n-gram lm with kneser-ney smoothing and interpolation by means of the srilm toolkit .,the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit . we use srilm to build 5-gram language models with modified kneser-ney smoothing .,"as a countbased baseline , we use modified kneser-ney as implemented in kenlm ." "using word2vec , we compute word embeddings for our text corpus .",we use the word2vec skip-gram model to learn initial word representations on wikipedia . "lexical simplification is a subtask of text simplification ( cite-p-16-3-3 ) concerned with replacing words or short phrases by simpler variants in a context aware fashion ( generally synonyms ) , which can be understood by a wider range of readers .",lexical simplification is a popular task in natural language processing and it was the topic of a successful semeval task in 2012 ( cite-p-14-1-9 ) . "chapman et al developed negex , a regular expression based algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent .","chapman et al developed negex , a simple regular expression-based algorithm to determine whether a finding or disease mentioned within medical reports was present or absent ." we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .,we used a logistic regression classifier provided by the liblinear software . luong et al propose bilingual skip-gram which extends the monolingual skip-gram model and learns bilingual embeddings using a parallel copora and word alignments .,"firstly , at word-level alignment , luong et al extend the skip-gram model to learn efficient bilingual word embeddings ." "in this way , our cache-based approach can provide useful data at the beginning of the translation process .","in this paper , we propose a cache-based approach to document-level translation ." 
we used the srilm software 4 to build language models as well as to calculate cross-entropy based features .,we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
the dts are based on collapsed dependencies from the stanford parser in the holing operation .,the grammatical relations are all the collapsed dependencies produced by the stanford dependency parser .
"ever since the pioneering article of gildea and jurafsky , there has been an increasing interest in automatic semantic role labeling .","there has been a substantial amount of work on automatic semantic role labeling , starting with the statistical model of gildea and jurafsky ."
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .,we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
the translation quality is evaluated by case-insensitive bleu-4 metric .,the translation systems were evaluated by bleu score .
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,all language models are created with the srilm toolkit and are standard 4-gram lms with interpolated modified kneser-ney smoothing .
"to optimize model parameters , we use the adagrad algorithm of duchi et al with l2 regularization .","to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad ."
performance is measured in terms of bleu and ter computed using the multeval script .,"models are evaluated in terms of bleu , meteor and ter on tokenized , cased test data ."
"the experiments are carried out on a subset of the basic travel expression corpus , as it is used for the supplied data track condition of the iwslt evaluation campaign .","the experiments were carried out using the chinese-english datasets provided within the iwslt 2006 evaluation campaign , extracted from the basic travel expression corpus ."
"pun is a way of using the characteristics of the language to cause a word , a sentence or a discourse to involve two or more different meanings .","a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 ) ."
our baseline system is an standard phrase-based smt system built with moses .,"using espac medlineplus , we trained an initial phrase-based moses system ."
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,"relation extraction is the task of extracting semantic relationships between entities in text , e.g . to detect an employment relationship between the person larry page and the company google in the following text snippet : google ceo larry page holds a press announcement at its headquarters in new york on may 21 , 2012 ."
"related to their information goals , the interaction context can provide useful cues for the system to automatically identify problematic situations .",recent studies have also shown that the capability to automatically identify problematic situations during interaction can significantly improve the system performance . translation quality is evaluated by case-insensitive bleu-4 metric .,the translation results are evaluated by caseinsensitive bleu-4 metric . we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit . the source and target sentences are tagged respectively using the treetagger and amira toolkits .,all english data are pos tagged and lemmatised using the treetagger . we use the opensource moses toolkit to build a phrase-based smt system .,"we use moses , an open source toolkit for training different systems ." we initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the google news corpus .,we present the text to the encoder as a sequence of word2vec word embeddings from a word2vec model trained on the hrwac corpus . "for all models , we use fixed pre-trained glove vectors and character embeddings .",we use the glove vector representations to compute cosine similarity between two words . we use case-sensitive bleu-4 to measure the quality of translation result .,we used the case-insensitive bleu-4 to evaluate translation quality and run mert three times . we use conditional random fields for sequence labelling .,"to this end , we use first-and second-order conditional random fields ." "itspoke , is a speech-enabled tutor built on top of the text-based why2-atlas conceptual physics tutor .",itspoke is a speech-enabled version of the text-based why2-atlas conceptual physics tutoring system . word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .,word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context . the smt system deployed in our approach is an implementation of the alignment template approach of och and ney .,our phrase-based system is similar to the alignment template system described by och and ney . "language modeling is trained using kenlm using 5-grams , with modified kneser-ney smoothing .",the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool . we use the moses toolkit to train our phrase-based smt models .,we use the moses software package 5 to train a pbmt model . "collobert and weston , in their seminal paper on deep architectures for nlp , propose a multilayer neural network for learning word embeddings .","collobert and weston , 2008 , proposed a multitask neural network trained jointly on the relevant tasks using weight-sharing ." the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses .,the penn discourse treebank is the largest available discourseannotated resource in english . 
table 4 shows the bleu scores of the output descriptions .,"table 1 summarizes test set performance in bleu , nist and ter ."
mcclosky et al presented a self-training approach for phrase structure parsing and the approach was shown to be effective in practice .,mcclosky et al presented a successful instance of parsing with self-training by using a reranker .
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings .,we use the glove word vector representations of dimension 300 .
sentence compression is the task of producing a summary at the sentence level .,this task is called sentence compression .
bleu is essentially a precision-based metric and is currently the standard metric for automatic evaluation of mt performance .,bleu is an established and the most widely used automatic metric for evaluation of mt quality .
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .,word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .,the target language model was a standard ngram language model trained by the sri language modeling toolkit .
tomanek et al utilised eye-tracking data to evaluate difficulties of named entities for selecting training instances for active learning techniques .,tomanek et al utilised eye-tracking data to evaluate the degree of difficulty in annotating named entities .
"then , we encode each tweet as a sequence of word vectors initialized using glove embeddings .",we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
"sentiment analysis ( sa ) is a hot-topic in the academic world , and also in the industry .",sentiment analysis ( sa ) is the task of determining the sentiment of a given piece of text .
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .,we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus .
socher et al present a compositional model based on a recursive neural network .,socher et al train a composition function using a neural network-however their method requires annotated data .
srilm toolkit is used to build these language models .,the srilm toolkit is used to train 5-gram language model .
we use a sequential lstm to encode this description .,we use a bidirectional long short-term memory rnn to encode a sentence .
unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks .,previous work has shown that unlabeled text can be used to induce unsupervised word clusters which can improve the performance of many supervised nlp tasks .
"through extensive experiments on multiple real-world datasets , we find that sictf is not only more accurate than state-of-the-art baselines , but also significantly faster ( about 14x faster ) .","in section 4 , through experiments on multiple real-world datasets , we observe that sictf is not only more accurate than kb-lda but also significantly faster with a speedup of 14x ."
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,"to select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus ."
"luong et al segment words using morfessor , and use recursive neural networks to build word embeddings from morph embeddings .","luong et al , 2013 ) utilized recursive neural networks in which inputs are morphemes of words ."
"to compensate the limit of in-domain data size , we use word2vec to learn the word embedding from a large amount of general-domain data .","for word representation , we train the skip-gram word embedding on each dataset separately to initialize the word vectors ."
"for feature extraction , we used the stanford pos tagger .","when parsers are trained on ptb , we use the stanford pos tagger ."
we used svm classifier that implements linearsvc from the scikit-learn library .,we used the scikit-learn implementation of svrs and the skll toolkit .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .,we used the phrasebased translation system in moses 5 as a baseline smt system .
we measure the translation quality using a single reference bleu .,"for this task , we use the widely-used bleu metric ."
coreference resolution is the task of grouping mentions to entities .,coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .
long short term memory units are proposed in hochreiter and schmidhuber to overcome this problem .,lstms were introduced by hochreiter and schmidhuber in order to mitigate the vanishing gradient problem .
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
we evaluate the translation quality using the case-sensitive bleu-4 metric .,we evaluate text generated from gold mr graphs using the well-known bleu measure .
we experiment with both the maltparser and the mstparser as the dg parser .,we conduct experiments using these word embeddings with maltparser and maltoptimizer .
grammar induction is a task within the field of natural language processing that attempts to construct a grammar of a given language solely on the basis of positive examples of this language .,grammar induction is the task of learning grammatical structure from plain text without human supervision .
"the stanford parser is used to parse chinese sentences on the training , dev and test sets .",the selected plain sentence pairs are further parsed by stanford parser on both the english and chinese sides .
the trigram language model is implemented in the srilm toolkit .,the language models were built using srilm toolkits .
we use pretrained 100-d glove embeddings trained on 6 billion tokens from wikipedia and gigaword corpus .,we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings .
hence we use the expectation maximization algorithm for parameter learning .,"in this work , we use the expectation-maximization algorithm ."
the target-side language models were estimated using the srilm toolkit .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ."
abstract meaning representation is a framework suitable for integrated semantic annotation .,"abstract meaning representation is a compact , readable , whole-sentence semantic annotation ." "according to lakoff and johnson and others , these linguistic metaphors are an observable manifestation of our mental , conceptual metaphors .","according to lakoff and johnson , metaphor is a productive phenomenon that operates at the level of mental processes ." bunescu and mooney give a shortest path dependency kernel for relation extraction .,bunescu and mooney propose a shortest path dependency kernel for relation extraction . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit . "while pitler and nenkova have shown that the discourse relation feature is strongest at predicting the linguistic quality of a document , dr shows poor performance .",pitler and nenkova show that discourse coherence features are more informative than other features for ranking texts with respect to their readability . "to measure the translation quality , we use the bleu score and the nist score .","we use two standard evaluation metrics bleu and ter , for comparing translation quality of various systems ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing ." "evaluation was done with multi-reference bleu on test sets with four references for each language pair , and mira was used for tuning .",mert was used to tune development set parameter weights and bleu was used on test sets to evaluate the translation performance . we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .,we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors . all the weights of those features are tuned by using minimal error rate training .,the weights of these features are then learned using a discriminative training algorithm . "for this purpose , we use the moses toolkit for training translation models and decoding , as well as srilm 2 to build the language models .","for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit ." "more importantly , event coreference resolution is a necessary component in any reasonable , broadly applicable computational model of natural language understanding ( cite-p-18-3-4 ) .","moreover , since event coreference resolution is a complex task that involves exploring a rich set of linguistic features , annotating a large corpus with event coreference information for a new language or domain of interest requires a substantial amount of manual effort ." we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
"in this paper we propose multi-space variational encoder-decoders , a new model for labeled sequence transduction .","in this work , we propose a multi-space variational encoder-decoder framework for labeled sequence transduction problem ." "a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit .","the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm ." we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset .,we use the 300-dimensional skip-gram word embeddings built on the google-news corpus . relation extraction is the task of finding semantic relations between entities from text .,relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text . aspect-based sentiment analysis is one of the main frameworks for sentiment analysis .,aspect extraction is a central problem in sentiment analysis . "active learning processin this work , we are interested in selective sampling for pool-based active learning , and focus on uncertainty sampling .","in this work , we are interested in selective sampling for pool-based active learning , and focus on uncertainty sampling ." sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .,"sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text ." we use pre-trained vectors from glove for word-level embeddings .,"for this task , we use glove pre-trained word embedding trained on common crawl corpus ." "twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments .",twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products . the lms are build using the srilm language modelling toolkit with modified kneserney discounting and interpolation .,the language model component uses the srilm lattice-tool for weight assignment and nbest decoding . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,a knsmoothed 5-gram language model is trained on the target side of the parallel data with srilm . all source-target sentences were parsed with the stanford parser in order to label the text with syntactic information .,the parse trees for sentences in the test set were obtained using the stanford parser . "as discussed in the introduction , we use conditional random fields , since they are particularly suitable for sequence labelling .","specifically , we adopt linear-chain conditional random fields as the method for sequence labeling ." we optimized the learned parameters with the adam stochastic gradient descent .,we use the adaptive moment estimation for the optimizer . "the input layers are initialized using the glove vectors , and are updated during training .","word embeddings are initialized with pretrained glove vectors 2 , and updated during the training ." we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features .,we use word embedding pre-trained on newswire with 300 dimensions from word2vec . 
"multi-task learning has resulted in successful systems for various nlp tasks , especially in cross-lingual settings .",multi-task joint learning can transfer knowledge between tasks by sharing task-invariant layers . word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .,word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) . "we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word .","we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training ." we use stanford corenlp for preprocessing and a supervised learning approach for classification .,we use stanford corenlp for chinese word segmentation and pos tagging . "finally , we represent subtree-based features on training data .","finally , we construct new subtree-based features for parsing algorithms ." dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links ( dependencies ) between words .,"dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation ." we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .,"we use 4-gram language models in both tasks , and conduct minimumerror-rate training to optimize feature weights on the dev set ." we use the popular word2vec 1 tool proposed by mikolov et al to extract the vector representations of words .,"in this run , we use a sentence vector derived from word embeddings obtained from word2vec ." chen et al and koo et al used large-scale unlabeled data to improve syntactic dependency parsing performance .,koo et al and suzuki et al use unsupervised wordclusters as features in a dependency parser to get lexical dependencies . word alignment is the problem of annotating parallel text with translational correspondence .,word alignment is a key component in most statistical machine translation systems . "marcu and wong , 2002 , defined the joint model , which modeled consecutive word m-to-n alignments .","marcu and wong , 2002 ) presents a joint probability model for phrase-based translation ." "to facilitate comparison with previous results , we used the upenn treebank corpus .",we used the penn treebank wsj corpus to perform empirical experiments on the proposed parsing models . "to keep consistent , we initialize the embedding weight with pre-trained word embeddings .",we use the pre-trained glove vectors to initialize word embeddings . "long sentences are removed , and the remaining sentences are pos-tagged and dependency parsed using the pre-trained stanford parser .",the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs . relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .,relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments . "therefore , we use the long short-term memory network to overcome this problem .",our approach relies on long short-term memory networks . 
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .,coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . "for learning coreference decisions , we used a maximum entropy model .","as a learning algorithm for our classification model , we used maximum entropy ." our primary contribution in this paper is a recasting and merging of the tasks of mention detection and entity disambiguation into a single end-to-end entity linking task .,"the primary contribution of this paper is a novel technique— cube summing—for approximate summing over discrete structures with non-local features , which we relate to cube pruning ( §4 ) ." "to represent the semantics of the nouns , we use the word2vec method which has proven to produce accurate approximations of word meaning in different nlp tasks .","for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window ." "also , we initialized all of the word embeddings using the 300 dimensional pre-trained vectors from glove .","we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of out-of-vocabulary words to zero ." the translation outputs were evaluated with bleu and meteor .,the bleu metric was used to automatically evaluate the quality of the translations . "language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .",models were built and interpolated using srilm with modified kneser-ney smoothing and the default pruning settings . "in particular , the vector-space word representations learned by a neural network have been shown to successfully improve various nlp tasks .",word embeddings have proven to be effective models of semantic representation of words in various nlp tasks . coreference resolution is the task of determining which mentions in a text refer to the same entity .,coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity . we used the implementation of random forest in scikit-learn as the classifier .,"for this task , we used the svm implementation provided with the python scikit-learn module ." we measure machine translation performance using the bleu metric .,we use bleu scores to measure translation accuracy . semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .,semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance . a joint probability model for phrase translation was proposed by marcu and wong .,"( marcu and wong , 2002 ) presents a joint probability model for phrase-based translation ." relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .,"relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text ." "to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .","as a strong baseline , we trained the skip-gram model of mikolov et al using the publicly available word2vec 5 software ."
we use the mallet implementation of conditional random fields .,for parameter training we use conditional random fields as described in . we use a phrase-based translation system similar to moses .,we used moses as the phrase-based machine translation system . we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization .,our word embeddings are initialized with 100-dimensional glove word embeddings . we report mt performance in table 1 by case-insensitive bleu .,we evaluated the intermediate outputs using bleu against human references as in table 3 . a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . "we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .",we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset . "information extraction ( ie ) is the task of generating structured information , often in the form of subject-predicate-object relation triples , from unstructured information such as natural language text .",information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents . clark and curran describe log-linear parsing models for ccg .,clark and curran evaluate a number of log-linear parsing models for ccg . "abstract meaning representation is a semantic formalism in which the meaning of a sentence is encoded as a rooted , directed , acyclic graph .",abstract meaning representation is a semantic formalism which represents sentence meaning in a form of a rooted directed acyclic graph . greedy transition-based dependency parsers incrementally process an input sentence from left to right .,"transition-based dependency parsers scan an input sentence from left to right , performing a sequence of transition actions to predict its parse tree ." we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . we use 5-grams for all language models implemented using the srilm toolkit .,we used the sri language modeling toolkit for this purpose . results were evaluated with both bleu and nist metrics .,evaluation was performed using the bleu metric . named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text .,named entity recognition ( ner ) is a key technique for ie and other natural language processing tasks . relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .,relation extraction is the task of finding semantic relations between two entities from text . we use a linear chain crf to predict the morphological features .,we employ conditional random fields to predict the sentiment label for each segment . we adopt two standard metrics rouge and bleu for evaluation .,"we use three common evaluation metrics including bleu , meteor , and ter ."
"sentiment analysis ( sa ) is the determination of the polarity of a piece of text ( positive , negative , neutral ) .","sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( liu , 2012 ) ." "inspired by previous work , we adapt the word2vec nnlm of mikolov et al to this qa task .",we use the perplexity computation method of mikolov et al suitable for skip-gram models . we implemented the different aes models using scikit-learn .,we used the svd implementation provided in the scikit-learn toolkit . "in this paper , we proposed a complex neural network model for geolocation prediction .",we propose a novel geolocation prediction model using a complex neural network . we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus .,"instead , we compute the relatedness of two words based on their distributed representations , which are learned using the word2vec toolkit ." we used glove vectors trained on common crawl 840b 4 with 300 dimensions as fixed word embeddings .,"for representing words , we used 100 dimensional pre-trained glove embeddings ." "we describe the first edition of the complex word identification task , organized at semeval 2016 .",we report the findings of the complex word identification task of semeval 2016 . "finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data .",we also use a 4-gram language model trained using srilm with kneser-ney smoothing . "dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation .","however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing ." "we report decoding speed and bleu score , as measured by sacrebleu .",we apply standard tuning with mert on the bleu score . srilm toolkit is used to build these language models .,all language models were trained using the srilm toolkit . we use a sequential lstm to encode this description .,we use the long short-term memory architecture for recurrent layers . "for the classification task , we use pre-trained glove embedding vectors as lexical features .",we use 300-dimensional word embeddings from glove to initialize the model . the model weights were trained using the minimum error rate training algorithm .,these features were optimized using minimum error-rate training and the same weights were then used in docent . relation classification is the task of identifying the semantic relation holding between two nominal entities in text .,relation classification is the task of assigning sentences with two marked entities to a predefined set of relations . the evaluation metric for the overall translation quality is caseinsensitive bleu4 .,the translation quality is evaluated by case-insensitive bleu-4 . "translation results are reported on the standard mt metrics bleu , meteor , and per , position independent word error rate .","translation performance is measured using the automatic bleu metric , on one reference translation ." 
"to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus .","the input to the network is the embeddings of words , and we use the pre-trained word embeddings by using word2vec on the wikipedia corpus whose size is over 11g ." "vignet is inspired by and based on framenet , a resource for lexical semantics .",framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics . "in this paper , we propose detecting disfluencies using a right-to-left transition-based dependency .","in this paper , we propose a novel approach for disfluency detection ." sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic .,sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer . "for evaluation we use multeval to calculate bleu , meteor , ter , and length of the test set for each system .","we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained ." "we pretrain word vectors with the word2vec tool on the news dataset released by ding et al , which are fine-tuned during training .",for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words . word embeddings have been trained using word2vec 4 tool .,word embeddings for english and hindi have been trained using word2vec 1 tool . our results show that we improve over a state-of-the-art baseline by over 2 . 7 % ( relative bleu score ) .,our results show a consistent improvement over a state-of-the-art baseline in terms of bleu and a manual error analysis . information extraction ( ie ) is the task of extracting factual assertions from text .,"information extraction ( ie ) is a main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) ." bilingual lexicons are an important resource in multilingual natural language processing tasks such as statistical machine translation and cross-language information retrieval .,bilingual lexicons serve as an indispensable source of knowledge for various cross-lingual tasks such as cross-lingual information retrieval or statistical machine translation . current state-of-the-art statistical parsers are all trained on large annotated corpora such as the penn treebank .,state of the art statistical parsers are trained on manually annotated treebanks that are highly expensive to create . we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .,dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification . it was first used for unlabeled dependency parsing by kudo and matsumoto and yamada and matsumoto .,transition-based dependency parsing was originally introduced by yamada and matsumoto and nivre . the log-lineal combination weights were optimized using mert .,each system is optimized using mert with bleu as an evaluation measure . 
"for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .","we use the svm implementation from scikit-learn , which in turn is based on libsvm ." and our simplest model has a concave objective that guarantees convergence to a global optimum .,"furthermore , the objective function for our simplest model is concave , guaranteeing convergence to a global optimum ." "we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .","for the mix one , we also train word embeddings of dimension 50 using glove ." coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .,"coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities ." zhou et al employed both unsupervised and supervised neural networks to learn bilingual sentiment word embedding .,chandar a p et al and zhou et al use the autoencoder to model the connections between bilingual sentences . the language model is trained and applied with the srilm toolkit .,the language models were trained using srilm toolkit . one of the most important resources for discourse connectives in english is the penn discourse treebank .,one of the very few available discourse annotated corpora is the penn discourse treebank in english . "relevance for satisfaction ¡¯ , ¡® contrastive weight ¡¯ and certain adverbials , that work to affect polarity in a more subtle but crucial manner , as evidenced also by the statistical analysis .","we argue that relevance for satisfaction , contrastive weight clues , and certain adverbials work to affect the polarity , as evidenced by the statistical analysis ." we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,"for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit ." we evaluated translation output using case-insensitive ibm bleu .,we evaluated the translation quality using the case-insensitive bleu-4 metric . "word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .",word alignment is a central problem in statistical machine translation ( smt ) . "for our experiments , we use 300-dimensional glove english word embeddings trained on the cased common crawl .","in our experiments , we choose to use the published glove pre-trained word embeddings ." "in this work , we propose to tackle the problem of list-only entity linking .","in this paper , we proposed a novel framework to tackle the problem of list-only entity linking ." "in our work , we build on lda , which is often used as a building block for topic models .","in our work , we use lda to identify the subtopics in the given body of texts ." we used the default parameter in svm light for all trials .,we used svm multiclass from svm-light toolkit as the classifier . berland and charniak used a similar method for extracting instances of meronymy relation .,berland and charniak used similar pattern-based techniques and other heuristics to extract meronymy relations . 
luong and manning proposed a hybrid scheme that consults character-level information whenever the model encounters an oov word .,a hybrid model of the word-based and the character-based model has also been proposed by luong and manning . "recently , the field has been influenced by the success of neural language models .","recently , methods inspired by neural language modeling received much attentions for representation learning ." "to solve the traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture .",hochreiter and schmidhuber developed long short-term memory to overcome the long term dependency problem . "however , aspect extraction is a complex task that also requires fine-grained domain embeddings .",aspect extraction is a central problem in sentiment analysis . "soricut and echihabi explore pseudo-references and document-aware features for document-level ranking , using bleu as quality label .",soricut and echihabi proposed document-aware features in order to rank machine translated documents . we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .,we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting . we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm . english texts were tokenized by the stanford parser 5 with the pcfg grammar .,the binary syntactic features were automatically extracted using the stanford parser . "word sense disambiguation is the process of selecting the most appropriate meaning for a word , based on the context in which it occurs .",word sense disambiguation is the process of determining which sense of a word is used in a given context . an in-house language modeling toolkit was used to train the 4-gram language models with modified kneser-ney smoothing over the web-crawled data .,"language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 ." "our parser is based on the shift-reduce parsing process from sagae and lavie and wang et al , and therefore it can be classified as a transition-based parser .","our transition-based parser is based on a study by zhu et al , which adopts the shift-reduce parsing of sagae and lavie and zhang and clark ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,"in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm ." the target language model is trained by the sri language modeling toolkit on the news monolingual corpus .,"gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , newscommentary , and the gigaword corpora ." bannard and callison-burch proposed identifying paraphrases by pivoting through phrases in a bilingual parallel corpora .,bannard and callison-burch introduced the pivot approach to extracting paraphrase phrases from bilingual parallel corpora . we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package .,to train the models we use the default stochastic gradient descent classifier provided by scikit-learn . 
the dimension of glove word vectors is set as 300 .,we use the glove word vector representations of dimension 300 . we measure the translation quality with automatic metrics including bleu and ter .,"to measure the translation quality , we use the bleu score and the nist score ." language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .,the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove .,we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings . chen et al proposed an approach that extracted partial tree structures from a large amount of data and used them as the additional features to improve dependency parsing .,chen et al extracted different types of subtrees from the auto-parsed data and used them as new features in standard learning methods . we use word2vec from as the pretrained word embeddings .,we use 300 dimension word2vec word embeddings for the experiments . the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit . we chose the skip-gram model provided by word2vec tool developed by for training word embeddings .,we use skip-gram representation for the training of word2vec tool . "sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .",sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) . the parameter weights are optimized with minimum error rate training .,"to tune feature weights minimum error rate training is used , optimized against the neva metric ." al-onaizan and knight find that a model mapping directly from english to arabic letters outperforms the phoneme-to-letter model .,al-onaizan and knight proposed a spelling-based model which directly maps english letter sequences into arabic letter sequences . relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,relation extraction is a crucial task in the field of natural language processing ( nlp ) . the seminal work in the field of hypernym learning was done by hearst .,the fundamental work for the pattern-based approaches is that of hearst . training is done through stochastic gradient descent over shuffled mini-batches with adadelta update rule .,parameter optimisation is done by mini-batch stochastic gradient descent where back-propagation is performed using adadelta update rule . we trained a tri-gram hindi word language model with the srilm tool .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." "for smt decoding , we use the moses toolkit with kenlm for language model queries .","for language modeling , we use the english gigaword corpus with 5-gram lm implemented with the kenlm toolkit ."
"a more effective alternative , which however only delivers quasinormalized scores , is to train the network using the noise contrastive estimation or nce .","an effective alternative , which however only delivers unnormalized scores , is to train the network using the noise contrastive estimation denoted by nce in the rest of the paper ." "with the refined outputs , we build phrasebased transliteration systems using moses , a popular statistical machine translation framework .",we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses . the model is a log-linear model over synchronous cfg derivations .,the basic model of the our system is a log-linear model . we could approach the movie overview generation task using an attention-based encoder-decoder model .,"accordingly , we use an adaptive recurrence mechanism to learn a dynamic node representation through attention structure ." "we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .",our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing . "in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization .",we use mini-batch update and adagrad to optimize the parameter learning . the model parameters of word embedding are initialized using word2vec .,we initialize our word vectors with 300-dimensional word2vec word embeddings . "pun is a way of using the characteristics of the language to cause a word , a sentence or a discourse to involve two or more different meanings .","pun is a figure of speech that consists of a deliberate confusion of similar words or phrases for rhetorical effect , whether humorous or serious ." we propose an implicit content-introducing method which incorporates additional information into the seq2seq model .,"in this paper , we explore an implicit content-introducing method for generative short-text conversation system ." the minimum error rate training was used to tune the feature weights .,all the feature weights were trained using our implementation of minimum error rate training . we use skip-gram with negative sampling for obtaining the word embeddings .,we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors . "for language model scoring , we use the srilm toolkit training a 5-gram language model for english .","incometo select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus ." we use 5-grams for all language models implemented using the srilm toolkit .,we trained a 5-grams language model by the srilm toolkit . to encode the original sentences we used word2vec embeddings pre-trained on google news .,we used google pre-trained word embedding with 300 dimensions . "each grammar consists of a set of rules evaluated in a leftto-right fashion over the input annotations , with multiple grammars cascaded together and evaluated bottom-up .",the 'grammar ' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head ' . word sense disambiguation is the task of assigning a sense to a word based on the context in which it occurs .,word sense disambiguation is the process of determining which sense of a word is used in a given context . 
we used the bleu score to evaluate the translation accuracy with and without the normalization .,"we measured the overall translation quality with 4-gram bleu , which was computed on tokenized and lowercased data for all systems ." feature weights were set with minimum error rate training on a tuning set using bleu as the objective function .,parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set . coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .,"coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities ." we use a cws-oriented model modified from the skip-gram model to derive word embeddings .,our cdsm feature is based on word vectors derived using a skip-gram model . we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .,we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit . "twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research .","twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” ." "for feature building , we use word2vec pre-trained word embeddings .","for word embeddings , we trained a skip-gram model over wikipedia , using word2vec ." we use long short-term memory networks to build another semantics-based sentence representation .,we use the long short-term memory architecture for recurrent layers . "for input representation , we used glove word embeddings .","for english posts , we used the 200d glove vectors as word embeddings ." "in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word . we used the part of speech tagger for tweets with the twitter nlp tool .,we tokenized and part-of-speech tagged the tweets with the carnegie mellon university twitter nlp tool . semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .,semantic role labeling ( srl ) is the process of producing such a markup . "as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn .","finally , we combine all the above features using a support vector regression model which is implemented in scikit-learn ." we use word2vec 1 toolkit to pre-train the character embeddings on the chinese wikipedia corpus .,we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features . we use the popular word2vec 1 tool proposed by mikolov et al to extract the vector representations of words .,we adapt the models of mikolov et al and mikolov et al to infer feature embeddings .
stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue .,stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it . we used a phrase-based smt model as implemented in the moses toolkit .,we used the moses toolkit to build mt systems using various alignments . socher et al proposed the recursive neural network that has been proven to be efficient in terms of constructing sentences representations .,socher et al introduced a family of recursive neural networks to represent sentence-level semantic composition . "finkel and manning also proposed a parsing model for the extraction of nested named entity mentions , which , like this work , parses just the corresponding semantic annotations .","finkel and manning propose a discriminative parsing-based method for nested named entity recognition , employing crfs as its core ." we translated each german sentence using the moses statistical machine translation toolkit .,"for generating the translations from english into german , we used the statistical translation toolkit moses ." socher et al introduced a family of recursive neural networks to represent sentence-level semantic composition .,socher et al introduce a family of recursive neural networks for sentence-level semantic composition . "within this subpart of our ensemble model , we used a svm model from the scikit-learn library .",we trained a linear log-loss model using stochastic gradient descent learning as implemented in the scikit-learn library . "we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .",we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence . "lastly , we populate the adjacency with a distributional similarity measure based on word2vec .",we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors . "dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them .",dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14 . "therefore , we adopt the greedy feature selection algorithm as described in jiang et al to pick up positive features incrementally according to their contributions .",here we adopt the greedy feature selection algorithm as described in jiang and ng to select useful features empirically and incrementally according to their contributions on the development data . "we used moses , a state-of-the-art phrase-based smt model , in decoding .",we used moses as the phrase-based machine translation system . we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .,"to select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus ." "since segmentation is the first stage of discourse parsing , quality discourse segments are critical to building quality discourse representations ( cite-p-12-1-10 ) .","once again , segmentation is the part of the process where the automatic algorithms most seriously underperform ."
cussens and pulman describe a symbolic approach which employs inductive logic programming and barg and walther and fouvry follow a unification-based approach .,"erbach , barg and walther and fouvry followed a unification-based symbolic approach to unknown word processing for constraint-based grammars ." readability can be used to provide satisfiable services in text recommendation or text visualization .,readability is used to provide users with high-quality service in text recommendation or text visualization . we used scikit-learn library for all the machine learning models .,we used the svm implementation provided within scikit-learn . we present our method for initializing a plsa model using lsa model .,in this paper we present a method for using lsa analysis to initialize a plsa model . morphological tagging is the process of labeling each word token with its morphological attributes .,morphological tagging is the task of assigning a morphological analysis to a token in context . we use case-insensitive bleu as evaluation metric .,the evaluation metric is the case-insensitive bleu4 . "gu et al , cheng and lapata , and nallapati et al also utilized seq2seq based framework with attention modeling for short text or single document summarization .","nallapati et al also employed the typical attention modeling based seq2seq framework , but utilized a trick to control the vocabulary size to improve the training efficiency ." word sense disambiguation ( wsd ) is a problem of finding the relevant clues in a surrounding context .,word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context . "for this task , we used the svm implementation provided with the python scikit-learn module .","to implement svm algorithm , we have used the publicly available python based scikit-learn package ." "since segmentation is the first stage of discourse parsing , quality discourse segments are critical to building quality discourse representations ( cite-p-12-1-10 ) .",segmentation is the task of dividing a stream of data ( text or other media ) into coherent units . coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .,coreference resolution is the task of determining which mentions in a text refer to the same entity . "again , shen et al explore a dependency language model to improve translation quality .",shen et al proposed a target dependency language model for smt to employ target-side structured information . "word embedding has shown promising results in variety of the nlp applications , such as named entity recognition , sentiment analysis and parsing .","word embeddings have shown promising results in nlp tasks , such as named entity recognition , sentiment analysis or parsing ." "to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus .","we employ word2vec as the unsupervised feature learning algorithm , based on a raw corpus of over 90 million messages extracted from chinese weibo platform ." relation extraction is the task of detecting and classifying relationships between two entities from text .,relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text . 
"twitter 1 is a microblogging service , which according to latest statistics , has 284 million active users , 77 % outside the us that generate 500 million tweets a day in 35 different languages .","twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers ." "irony is a form of figurative language , considered as “ saying the opposite of what you mean ” , where the opposition of literal and intended meanings is very clear ( cite-p-23-1-1 , cite-p-23-3-8 ) .",irony is a particular type of figurative language in which the meaning is often the opposite of what is literally said and is not always evident without context or existing knowledge . xu et al used the knowledge graph to advance the learning of word embeddings .,xu et al and yu and dredze exploited semantic knowledge to improve the semantic representation of word embeddings . we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .,we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .,we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we use the sri language modeling toolkit for language modeling .,we use the srilm toolkit to compute our language models . we use the scikit-learn toolkit as our underlying implementation .,for data preparation and processing we use scikit-learn . we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm . we have used latent dirichlet allocation model as our main topic modeling tool .,to learn the topics we use latent dirichlet allocation . "we use glove pre-trained word embeddings , a 100 dimension embedding layer that is followed by a bilstm layer of size 32 .","for the word-embedding based classifier , we use the glove pre-trained word embeddings ." word alignment is the problem of annotating parallel text with translational correspondence .,word alignment is a fundamental problem in statistical machine translation . "a context-free grammar ( cfg ) is a 4-tuple math-w-3-1-1-9 where math-w-3-1-1-21 and math-w-3-1-1-23 are finite disjoint sets of nonterminal and terminal symbols , respectively , math-w-3-1-1-36 is the start symbol and math-w-3-1-1-44 is a finite set of rules .","a context-free grammar ( cfg ) is a tuple math-w-2-5-5-22 , where vn and vt are finite , disjoint sets of nonterminal and terminal symbols , respectively , and s e vn is the start symbol ." named entity recognition is a well established information extraction task with many state of the art systems existing for a variety of languages .,"named entity ( ne ) recognition is a task in which proper nouns and numerical information in a document are detected and classified into categories such as person , organization , location , and date ." "for the tf representation , we use the countvectorizer class from scikit-learn to process the text and create the appropriate representation .",we use the linearsvc classifier as implemented in scikit-learn package 17 with the default parameters . 
the distributed word representation by word2vec factors word distance and captures semantic similarities through vector arithmetic .,the word embeddings can provide word vector representation that captures semantic and syntactic information of words . all source-target sentences were parsed with the stanford parser in order to label the text with syntactic information .,the binary syntactic features were automatically extracted using the stanford parser . the uima project provides an infrastructure to store unstructured documents .,the pipeline is based on the uima framework and contains many text analysis components . we use the maximum entropy model for our classification task .,"for learning coreference decisions , we used a maximum entropy model ." relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .,relation extraction ( re ) is the task of extracting semantic relationships between entities in text . "for feature building , we use word2vec pre-trained word embeddings .",we use word embeddings of dimension 100 pretrained using word2vec on the training dataset . we implement classification models using keras and scikit-learn .,for the classifiers we use the scikit-learn machine learning toolkit . "part-of-speech ( pos ) tagging is a fundamental natural-language-processing problem , and pos tags are used as input to many important applications .",part-of-speech ( pos ) tagging is a fundamental language analysis task . twitter is a very popular micro blogging site .,"twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” ." we measured translation performance with bleu .,we evaluated the system using bleu score on the test set . lda is a probabilistic model that can be used to model and discover underlying topic structures of documents .,"lda is a widely used topic model , which views the underlying document distribution as having a dirichlet prior ." the system was general-domain oriented and it was tuned by using mert with a combination of six in-domain development datasets .,the parameters of the systems were tuned using mert to optimize bleu on the development set . we use the stanford corenlp for obtaining pos tags and parse trees from our data .,we use stanford corenlp for pos tagging and lemmatization . translation quality is evaluated by case-insensitive bleu-4 metric .,the translation quality is evaluated by case-insensitive bleu-4 . a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .,we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . "the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .",the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .,"for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." "for support vector learning , we use svm-light and svm-multiclass .",we employ support vector machines to perform the classification . 
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus .,"for word embeddings , we trained a skip-gram model over wikipedia , using word2vec ." coreference resolution is the process of linking multiple mentions that refer to the same entity .,coreference resolution is a field in which major progress has been made in the last decade . takamura et al used the spin model to extract word semantic orientation .,takamura et al proposed using spin models for extracting semantic orientation of words . we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." "for phrase-based smt translation , we used the moses decoder and its support training scripts .",we used the moses toolkit with its default settings to build three phrase-based translation systems . we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . "word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .",word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text . results were evaluated with both bleu and nist metrics .,the translations were evaluated with the widely used bleu and nist scores . "for word embeddings , we used popular pre-trained word vectors from glove .",we used 100 dimensional glove embeddings for this purpose . sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed .,sentence compression is a task of creating a short grammatical sentence by removing extraneous words or phrases from an original sentence while preserving its meaning . a 5-gram language model was built using srilm on the target side of the corresponding training corpus .,"the trigram models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentence-initial words uncapitalized ." "zeng et al proposed a deep convolutional neural network with softmax classification , extracting lexical and sentence level features .",zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words . "we preprocess the data using standard nlp packages to tokenize , stem , and pos tag the words .",we pre-processed the data to add part-ofspeech tags and dependencies between words using the stanford parser . we primarily used the charniak-johnson generative parser to parse the english europarl data and the test data .,we used the first-stage pcfg parser of charniak and johnson for english and bitpar for german . 
"sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( liu , 2012 ) .","sentiment analysis ( sa ) is the task of analysing opinions , sentiments or emotions expressed towards entities such as products , services , organisations , issues , and the various attributes of these entities ( cite-p-9-3-3 ) ." choi and cardie present a more lightweight approach using compositional semantics towards classifying the polarity of expressions .,choi and cardie assert that the sentiment polarity of natural language can be better inferred by compositional semantics . word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .,word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context . "phrase-based models have until recently been a stateof-the-art method for statistical machine translation , and moses is one of the most used phrase-based translation systems .","since their introduction at the beginning of the twenty-first century , phrase-based translation models have become the state-of-the-art for statistical machine translation ." this type of features are based on a trigram model with kneser-ney smoothing .,the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit . "semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .",semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them . the parameter weights are optimized with minimum error rate training .,minimum error rate training is applied to tune the cn weights . the learning rate is automatically tuned by adam .,the learning rate was automatically adjusted using adam . we found that using adagrad to update the parameters is very effective .,we use mini-batch update and adagrad to optimize the parameter learning . previous work consistently reported that word-based translation models yielded better performance than traditional methods for question retrieval .,previous work consistently reported that the word-based translation models yielded better performance than the traditional methods for question retrieval . "semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot .",semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) . we evaluated translation quality with the case-insensitive bleu-4 and nist .,we evaluated the translation quality using the case-insensitive bleu-4 metric . "recently , deep learning has also been introduced to propose an end-to-end convolutional neural network for relation classification .",recursive neural network and convolutional neural network have proven powerful in relation classification . 
conditional random fields are a class of undirected graphical models with exponential distribution .,conditional random fields are a class of graphical models which are undirected and conditionally trained . our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing ." "we then learn reranking weights using minimum error rate training on the development set for this combined list , using only these two features .",we utilize minimum error rate training to optimize feature weights of the paraphrasing model according to ndcg . "for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .","to implement svm algorithm , we have used the publicly available python based scikit-learn package ." "a ∗ parsing algorithm is 5 times faster than cky parsing , without loss of accuracy .","our a ∗ algorithm is 5 times faster than cky parsing , with no loss in accuracy ." "for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .","in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit ." twitter is a well-known social network service that allows users to post short 140 character status update which is called “ tweet ” .,twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) . "for nb and svm , we used their implementation available in scikit-learn .",we implemented the different aes models using scikit-learn . we used the implementation of the scikit-learn 2 module .,we use the scikit-learn toolkit as our underlying implementation . zeng et al introduce a convolutional neural network to extract relational facts with automatically learning features from text .,zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words . we solve this sequence tagging problem using the mallet implementation of conditional random fields .,we use conditional random fields sequence labeling as described in . we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm . text summarization is the task of automatically condensing a piece of text to a shorter version while maintaining the important points .,text summarization is the process of generating a short version of a given text to indicate its main topics . we use the selectfrommodel 4 feature selection method as implemented in scikit-learn .,we use the linearsvc classifier as implemented in scikit-learn package 17 with the default parameters . "for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b .","for the mix one , we also train word embeddings of dimension 50 using glove ." "semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .",semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts .
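The scikit-learn linear-kernel SVM setups cited above typically look like the following sketch; the two training texts and labels are invented for illustration:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

texts = ["great translation quality", "poor fluency and adequacy"]
labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["surprisingly great fluency"]))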
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .,relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) . the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval .,the evaluation metric for the overall translation quality is case-insensitive bleu4 . moses is used as a baseline phrase-based smt system .,"both systems are phrase-based smt models , trained using the moses toolkit ." "word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .",word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context . the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool .,the srilm toolkit was used for training the language models using kneser-ney smoothing . we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .,our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing . twitter is a microblogging site where people express themselves and react to content in real-time .,twitter is a huge microblogging service with more than 500 million tweets per day 1 from different locations in the world and in different languages . "meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens .",we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove . sentiment lexicon is a set of words ( or phrases ) each of which is assigned with a sentiment polarity score .,"a sentiment lexicon is a list of words and phrases , such as excellent , awful and not bad , each is being assigned with a positive or negative score reflecting its sentiment polarity ." word embeddings have also been used in several nlp tasks including srl .,"importantly , word embeddings have been effectively used for several nlp tasks ." distributional semantic models represent the meanings of words by relying on their statistical distribution in text .,distributional semantic models produce vector representations which capture latent meanings hidden in association of words in documents . sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer .,"sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." word segmentation is the foremost obligatory task in almost all the nlp applications where the initial phase requires tokenization of input into words .,word segmentation is a fundamental task for chinese language processing . we used moses as the implementation of the baseline smt systems .,"we used moses , a phrase-based smt toolkit , for training the translation model ."
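The glove-initialization pairs above usually amount to reading the released text file into an embedding matrix; a sketch assuming a standard glove.6B file path and a toy vocabulary:

import numpy as np

def load_glove(path, vocab, dim=100):
    # Rows default to small random values for out-of-vocabulary words.
    matrix = np.random.uniform(-0.05, 0.05, (len(vocab), dim))
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], np.asarray(parts[1:], dtype=np.float32)
            if word in vocab:
                matrix[vocab[word]] = vec
    return matrix

vocab = {"translation": 0, "quality": 1}
emb = load_glove("glove.6B.100d.txt", vocab)  # hypothetical local path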
"the annotation scheme is derived from the universal stanford dependencies , the google universal part-of-speech tags and the interset interlingua for morphological tagsets .","the annotation is based on the google universal part-ofspeech tags and the stanford dependencies , adapted and harmonized across languages ." hochreiter and schmidhuber proposed long short-term memories as the specific version of rnn designed to overcome vanishing and exploding gradient problem .,"to tackle this problem , hochreiter and schmidhuber proposed long short term memory , which uses a cell with input , forget and output gates to prevent the vanishing gradient problem ." "for testing purposes , we used the wall street journal part of the penn treebank corpus .","for the pos-tagger , we trained hunpos 10 with the wall street journal english corpus ." we implement our lstm encoder-decoder model using the opennmt neural machine translation toolkit .,"we use opennmt , which is an implementation of the popular nmt approach that uses an attentional encoder-decoder network ." "we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .",we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence . "parsers trained on the penn treebank are reporting impressive numbers these days , but they don ’ t do very well on this problem .","parsers are reporting impressive numbers these days , but coordination remains an area with room for improvement ." "in this work , we apply several unsupervised and supervised techniques of sentiment composition .","finally , we apply several unsupervised and supervised techniques of sentiment composition to determine their efficacy on this dataset ." we built a 5-gram language model from it with the sri language modeling toolkit .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing . "for example , blitzer et al proposed a domain adaptation method based on structural correspondence learning .",blitzer et al used structural correspondence learning to train a classifier on source data with new features induced from target unlabeled data . "mimno et al similarly introduced a methodology for computing coherence , replacing pmi with log conditional probability .","mimno et al proposed a closely-related method for evaluating semantic coherence , replacing pmi with log conditional probability ." "ccg is a lexicalized , mildly context-sensitive parsing formalism that models a wide range of linguistic phenomena .",ccgs are a linguistically-motivated formalism for modeling a wide range of language phenomena . we used the moses tree-to-string mt system for all of our mt experiments .,we used moses for pbsmt and hpbsmt systems in our experiments . "in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .","for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing ." we adopt the domain-adaptation method used by luong and manning to fine-tune the trained model using in-domain data .,luong and manning use transfer learning to adapt a general model to indomain data . 
sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) .,sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer . "the word vectors are learned using a skip-gram model with negative sampling , implemented in the word2vec toolkit .",the word embedding is pre-trained using the skip-gram model in word2vec and fine-tuned during the learning process . coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .,coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity . coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .,coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) . we built a hierarchical phrase-based mt system based on weighted scfg .,in our experiments the mt system used is hierarchical phrase-based system . "stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against .",stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue . "word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .","word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems ." "we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens .","we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero ." we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove .,we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm . "sentiment classification is the task of identifying the sentiment polarity of a given text , which is traditionally categorized as either positive or negative .","sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) ." "therefore , we use em-based estimation for the hidden parameters .","thus , we propose a new approach based on the expectation-maximization algorithm ." latent dirichlet allocation is a generative model that overcomes some of the limitations of plsi by using a dirichlet prior on the topic distribution .,the latent dirichlet allocation is a topic model that is assumed to provide useful information for particular subtasks . the use of unsupervised word embeddings in various natural language processing tasks has received much attention .,"recently , the field has been influenced by the success of neural language models ." we map the pos labels in the conll datasets to the universal pos tagset .,for the source side we use the pos tags from stanford corenlp mapped to universal pos tags . 
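For the latent dirichlet allocation pairs above, a hedged gensim sketch; the two-document corpus and the topic count are placeholders:

from gensim.corpora import Dictionary
from gensim.models import LdaModel

texts = [["parser", "treebank", "grammar"],
         ["embedding", "vector", "semantic"]]
dictionary = Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=2, passes=10,
               random_state=0)  # Dirichlet prior over topic mixtures
for topic_id, words in lda.show_topics(num_topics=2, num_words=3):
    print(topic_id, words)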
"in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .","the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." "word segmentation is a fundamental task for processing most east asian languages , typically chinese .",word segmentation is the foremost obligatory task in almost all the nlp applications where the initial phase requires tokenization of input into words . "sentiment classification is the task of identifying the sentiment polarity of a given text , which is traditionally categorized as either positive or negative .","sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text ." "semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) .",semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels . sentence compression is a text-to-text generation task in which an input sentence must be transformed into a shorter output sentence which accurately reflects the meaning in the input and also remains grammatically well-formed .,sentence compression is the task of compressing long sentences into short and concise ones by deleting words . evaluation demonstrates that text generated by our model is preferred over that of baselines .,automatic evaluation shows that our system is both less repetitive and more diverse than baselines . barzilay and mckeown extracted both single-and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization .,barzilay and mckeown identify multi-word paraphrases from a sentence-aligned corpus of monolingual parallel texts . "as a baseline smt system , we use the hierarchical phrase-based translation with an efficient left-to-right generation originally proposed by chiang .","as our baseline , we apply a high-performing chinese-english mt system based on hierarchical phrase-based translation framework ." "from this , we extract an old domain sense dictionary , using the moses mt framework .","for this purpose , we used phrase tables learned by the standard statistical mt toolkit moses ." relation extraction is a challenging task in natural language processing .,relation extraction is the task of finding semantic relations between two entities from text . dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .,dependency parsing is a fundamental task for language processing which has been investigated for decades . all smt models were developed using the moses phrase-based mt toolkit and the experiment management system .,the experiments of the phrase-based smt systems are carried out using the open source moses toolkit . we use pre-trained vectors from glove for word-level embeddings .,"for the word-embedding based classifier , we use the glove pre-trained word embeddings ." 
"word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .",word alignment is a natural language processing task that aims to specify the correspondence between words in two languages ( cite-p-19-1-0 ) . we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data .,we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings . sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 ) .,"sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text ." we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." we use pre-trained vectors from glove for word-level embeddings .,we use pre-trained 50-dimensional word embeddings vector from glove . sentiment analysis is a nlp task that deals with extraction of opinion from a piece of text on a topic .,"sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text ." zeng et al use convolutional neural network for learning sentence-level features of contexts and obtain good performance even without using syntactic features .,"for instance , zeng et al utilized a cnn-based model to extract sentence-level features for relation classification ." we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,"for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing ." we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .,"to implement svm algorithm , we have used the publicly available python based scikit-learn package ." we trained word embeddings using word2vec on 4 corpora of different sizes and types .,we used 300 dimensional skip-gram word embeddings pre-trained on pubmed . a pseudoword is a composite comprised of two or more words chosen at random ; the individual occurrences of the original words within a text are replaced by their conflation .,pseudo-word is a kind of basic multi-word expression that characterizes minimal sequence of consecutive words in sense of translation . "the method of bannard and callison-burch requires bilingual parallel corpora , and uses the translations of expressions as its feature .",bannard and callison-burch used the bilingual pivoting method on parallel corpora for the same task . "we compare our approach with a standard phrase-based mt system , moses trained using the same 1m sequence pairs constructed from the wikianswers dataset .",we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses . "we have described a perceptron-style algorithm for training the neural networks , which is much easier to be implemented , and has speed advantage over the maximum-likelihood scheme .","we also describe a perceptron-style algorithm for training the neural networks , as an alternative to maximum-likelihood method , to speed up the training process and make the learning algorithm easier to be implemented ." 
and thus this study aims to examine the use of content features in speech scoring systems .,"motivated by this limitation , the study aims to investigate the use of content features in speech scoring systems ." we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors .,we obtained distributed word representations using word2vec 4 with skip-gram . "the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .","word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) ." we adapted the moses phrase-based decoder to translate word lattices .,for training the translation model and for decoding we used the moses toolkit . issue framing is related to both analyzing biased language and subjectivity .,framing is further related to works which analyze biased language and subjectivity . "we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation .","morphological analysis is the basis for many nlp applications , including syntax parsing , machine translation and automatic indexing ." the decoder uses a ckystyle parsing algorithm and cube pruning to integrate the language model scores .,the parsing algorithm is extended to handle translation candidates and to incorporate language model scores via cube pruning . we use a standard long short-term memory model to learn the document representation .,we use a bidirectional long short-term memory rnn to encode a sentence . we use the moses software to train a pbmt model .,we use the moses software package 5 to train a pbmt model . bleu is a system for automatic evaluation of machine translation .,bleu is widely used for automatic evaluation of machine translation systems . "table 2 presents the results from the automatic evaluation , in terms of bleu and nist scores , of 4 system setups .",table 1 shows the evaluation of all the systems in terms of bleu score with the best score highlighted . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,"we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit ." "to obtain their corresponding weights , we adapted the minimum-error-rate training algorithm to train the outside-layer model .",we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric . "semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .",semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . "preparing an aligned abbreviation corpus , we obtain the optimal combination of the features by using the maximum entropy framework .",we utilize a maximum entropy model to design the basic classifier used in active learning for wsd . "to their syntactic expressions , these mixed feature sets are potentially useful for building verb classifications .","in addition , mixed feature sets also show potential for scaling well when dealing with larger number of verbs and verb classes ." 
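For the (bi)LSTM sentence encoders mentioned above, a minimal Keras sketch; vocabulary size, dimensions, and sequence length are arbitrary placeholders:

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=100),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Calling the model on a dummy batch of token ids builds its weights.
out = model(np.zeros((1, 20), dtype="int32"))
print(out.shape)  # (1, 2)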
coreference resolution is the next step on the way towards discourse understanding .,coreference resolution is the task of determining which mentions in a text refer to the same entity . we use the pre-trained word2vec embeddings provided by mikolov et al as model input .,"regarding word embeddings , we use the ones trained by baziotis et al using word2vec and 550 million tweets ." the long short-term memory was first proposed by hochreiter and schmidhuber that can learn long-term dependencies .,hochreiter and schmidhuber developed long short-term memory to overcome the long term dependency problem . as a sequence labeler we use conditional random fields .,we use conditional random fields sequence labeling as described in . "in this paper , we describe our contribution at task 2 of semeval 2013 .",this paper presents our approach for the subtask of message polarity classification of semeval 2013 . we ran mt experiments using the moses phrase-based translation system .,we used moses to train an alignment model on the created paraphrase dataset . "finally , goldwasser et al presented an unsupervised approach of learning a semantic parser by using an em-like retraining loop .",goldwasser et al took an unsupervised approach for semantic parsing based on self-training driven by confidence estimation . the parsing was performed with the berkeley parser and features were extracted from both source and target .,"pcfg parsing features were generated on the output of the berkeley parser , trained over an english and a spanish treebank ." we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting . the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." "the latent descriptor for math-w-2-4-3-110 consists of the pair ( math-w-2-4-3-116 ) , math-w-2-4-3-122 ) ) — .",we define the left descriptor of word type math-w-3-3-3-87 as : math-p-3-4-0 phonetic translation across these pairs is called transliteration .,transliteration is the task of converting a word from one alphabetic script to another . bagga and baldwin used the vector space model together with summarization techniques to tackle the cross-document coreference problem .,"the classic work on this task was by bagga and baldwin , who adapted the vector space model ." "to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .","as a point of comparison , we will also present results from the word2vec model of mikolov et al trained on the same underlying corpus as our models ." we used the moses pbsmt system for all of our mt experiments .,we used the moses toolkit for performing statistical machine translation . we measure translation quality via the bleu score .,we evaluated the translation quality of the system using the bleu metric . our 5-gram language model was trained by srilm toolkit .,we trained a 3-gram language model on the spanish side using srilm . "in our work , we use latent dirichlet allocation to identify the sub-topics in the given body of texts .",the core machinery of our system is driven by a latent dirichlet allocation topic model .
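The CRF sequence-labeling setups above can be sketched with the third-party sklearn-crfsuite package (an assumption; mallet and CRF++ are the tools the pairs actually cite); the one-sentence training set is a toy:

import sklearn_crfsuite

# Each token is a feature dict; each sentence is a list of such dicts.
X_train = [[{"word": "john", "is_title": True},
            {"word": "runs", "is_title": False}]]
y_train = [["B-PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))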
"costa-juss脿 and fonollosa , 2006 ) consider part-of-speech based source reordering as a translation task .","costa-juss脿 and fonollosa , 2006 ) view the source reordering as a translation task that translate the source language into a reordered source language ." "as textual features , we use the pretrained google news word embeddings , obtained by training the skip-gram model with negative sampling .","we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news ." labeled data for msa we use the penn arabic treebank .,"for msa , we use the penn arabic treebank ." "in 2013 , mikolov et al generated phrase representation using the same method used for word representation in word2vec .","recently , mikolov et al presented a shallow network architecture that is specifically for learning word embeddings , known as the word2vec model ." we used the svd implementation provided in the scikit-learn toolkit .,we used the implementation provided by without tuning any hyper-parameters . syntactic parsing is the task of identifying the phrases and clauses in natural language sentences .,syntactic parsing is a computationally intensive and slow task . we used svm classifier that implements linearsvc from the scikit-learn library .,we used the svm implementation provided within scikit-learn . for the token-level sequence labeling tasks we use hidden markov models and conditional random fields appear sentences .,"for this supervised structure learning task , we choose the approach conditional random fields ." "in order to tune all systems , we use the k-best batch mira .",we use the k-best batch mira to tune mt systems . we used the google news pretrained word2vec word embeddings for our model .,"for english , we used the pre-trained word2vec by on google news ." "following och and ney , we adopt a general loglinear model .","following li et al , we define our model in the well-known log-linear framework ." we use the english penn treebank to evaluate our model implementations and yamada and matsumoto head rules are used to extract dependency trees .,we generate dependency structures from the ptb constituency trees using the head rules of yamada and matsumoto . statistical significance is computed using the bootstrap re-sampling approach proposed by koehn .,the statistical significance test is performed by the re-sampling approach . sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .,sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer . we use the stanford nlp pos tagger to generate the tagged text .,we use the stanford pos tagger to obtain the lemmatized corpora for the parss task . we tune the systems using minimum error rate training .,we perform minimum error rate training to tune various feature weights . the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing .,we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . ganchev et al propose a posterior regularization framework for weakly supervised learning to derive a multi-view learning algorithm .,"ganchev et al , 2010 ) describes a method based on posterior regularization that incorporates additional constraints within the em algorithm for estimation of ibm models ." 
"mihalcea et al compared knowledgebased and corpus-based methods , using word similarity and word specificity to define one general measure of text semantic similarity .","mihalcea et al combine pointwise mutual information , latent semantic analysis and wordnet-based measures of word semantic similarity into an arbitrary text-to-text similarity metric ." "to solve the traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture .","to solve this problem , hochreiter and schmidhuber introduced the long short-term memory rnn ." coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .,"additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized ." "the ape system for each target language was tuned on comparable development sets , optimizing ter with minimum error rate training .",feature weights were set with minimum error rate training on a development set using bleu as the objective function . the evaluation metric is case-sensitive bleu-4 .,we adopted the case-insensitive bleu-4 as the evaluation metric . "word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .",word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context . bilingual lexicons are an important resource in multilingual natural language processing tasks such as statistical machine translation and cross-language information retrieval .,bilingual dictionaries are an essential resource in many multilingual natural language processing tasks such as machine translation and cross-language information retrieval . "word vectors are vector representations of the words learned from their raw form , using models such as word2vec .",word embeddings are low-dimensional vector representations of words such as word2vec that recently gained much attention in various semantic tasks . "in this task , we use the 300-dimensional 840b glove word embeddings .",we use the glove pre-trained word embeddings for the vectors of the content words . the evaluation metric for the overall translation quality was case-insensitive bleu4 .,the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval . the phrase-based translation model uses the con- the baseline lm was a regular n-gram lm with kneser-ney smoothing and interpolation by means of the srilm toolkit .,a 3-gram language model is trained on the target side of the training data by the srilm toolkits with modified kneser-ney smoothing . markov models were trained with modified kneser-ney smoothing as implemented in srilm .,we also use a 4-gram language model trained using srilm with kneser-ney smoothing . "coreference resolution is a central problem in natural language processing with a broad range of applications such as summarization ( cite-p-16-3-24 ) , textual entailment ( cite-p-16-3-12 ) , information extraction ( cite-p-16-3-11 ) , and dialogue systems ( cite-p-16-3-25 ) .",coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) . 
rhetorical structure theory is one of the most influential approaches for document-level discourse analysis .,rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text . sentiment analysis ( sa ) is the task of prediction of opinion in text .,sentiment analysis ( sa ) is the task of determining the sentiment of a given piece of text . we used standard classifiers available in scikit-learn package .,we used the scikit-learn implementation of svrs and the skll toolkit . we used the disambig tool provided by the srilm toolkit .,we used the sri language modeling toolkit with kneser-ney smoothing . we selected conditional random fields as the baseline model .,our model is a first order linear chain conditional random field . "with the refined outputs , we build phrase-based transliteration systems using moses , a popular statistical machine translation framework .","we use the moses toolkit to create a statistical phrase-based machine translation model built on the best pre-processed data , as described above ." "information extraction ( ie ) is a main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) .","information extraction ( ie ) is the task of generating structured information , often in the form of subject-predicate-object relation triples , from unstructured information such as natural language text ." "sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) .",the sentiment analysis is a field of study that investigates feelings present in texts . we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input .,we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing . mikolov et al uses a continuous skip-gram model to learn a distributed vector representation that captures both syntactic and semantic word relationships .,mikolov et al presents a neural network-based architecture which learns a word representation by learning to predict its context words . discourse parsing is the process of assigning a discourse structure to the input provided in the form of natural language .,"and while discourse parsing is a document level task , discourse segmentation is done at the sentence level , assuming that sentence boundaries are known ." we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,we use srilm for training a trigram language model on the english side of the training corpus . "we use support vector machines , a maximum-margin classifier that realizes a linear discriminative model .","as a classifier , we employ support vector machines as implemented in svm light ." the model weights are automatically tuned using minimum error rate training .,the nnlm weights are optimized as the other feature weights using minimum error rate training . we used the sri language modeling toolkit to train lms on our training data for each ilr level .,we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus .
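The skip-gram-with-negative-sampling model that mikolov et al are cited for above optimizes, for a centre word c and an observed context word o with k noise samples drawn from a unigram noise distribution P_n, the following per-pair objective (standard formulation, not quoted from the pairs):

J(o, c) = \log \sigma\!\left( v_o'^{\top} v_c \right)
        + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
          \left[ \log \sigma\!\left( -v_{w_i}'^{\top} v_c \right) \right]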
we run parfda smt experiments using moses in all language pairs in wmt15 and obtain smt performance close to the top constrained moses systems .,we perform smt experiments in all language pairs of the wmt13 and obtain smt performance close to the baseline moses system using less resources for training . the english side of the parallel corpus is trained into a language model using srilm .,the n-gram language models are trained using the srilm toolkit or similar software developed at hut . we use the english penn treebank to evaluate our model implementations and yamada and matsumoto head rules are used to extract dependency trees .,"we use the wsj portion of the penn treebank 4 , augmented with head-dependant information using the rules of yamada and matsumoto ." the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model .,"as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model ." we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .,"for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit ." word alignment is a central problem in statistical machine translation ( smt ) .,"word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 ) ." the pre-processed monolingual sentences will be used by srilm or berkeleylm to train a n-gram language model .,the target language model is trained by the sri language modeling toolkit on the news monolingual corpus . "to set the weights , 位 m , we performed minimum error rate training on the development set using bleu as the objective function .",we used minimum error rate training to tune the feature weights for maximum bleu on the development set . we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .,we implement an in-domain language model using the sri language modeling toolkit . "as a classifier , we choose a first-order conditional random field model .","for this supervised structure learning task , we choose the approach conditional random fields ." we used scikit-learn library for all the machine learning models .,we used the svd implementation provided in the scikit-learn toolkit . our nmt baseline is an encoder-decoder model with attention and dropout implemented with nematus and amunmt .,we utilize the nematus implementation to build encoder-decoder nmt systems with attention and gated recurrent units . we use a phrase-based translation system similar to moses .,our translation system is an in-house phrasebased system analogous to moses . twitter is a social platform which contains rich textual content .,"twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research ." we now review the path ranking algorithm introduced by lao and cohen .,"we briefly review the path ranking algorithm , described in more detail by lao and cohen ." we define a conditional random field for this task .,the resulting model is an instance of a conditional random field . 
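For the word-alignment task defined above, a hedged sketch of EM-trained IBM Model 1 with NLTK; the two-sentence bitext is a toy:

from nltk.translate import AlignedSent, IBMModel1

bitext = [AlignedSent(["das", "haus"], ["the", "house"]),
          AlignedSent(["das", "buch"], ["the", "book"])]

IBMModel1(bitext, 5)  # 5 EM iterations; alignments are set in place
print(bitext[0].alignment)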
we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence .,"we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens ." "for this paper , we directly utilize the pre-trained fasttext word embeddings model which is trained on wikipedia data .",we used the 300-dimensional fasttext embedding model pretrained on wikipedia with skip-gram to initialize the word embeddings in the embedding layer . "for all methods , we applied dropout to the input of the lstm layers .","to prevent overfitting , we apply dropout operators to non-recurrent connections between lstm layers ." these approaches range from distortion models to lexical reordering models .,"among them , lexicalized reordering models have been widely used in practical phrase-based systems ." we use 2-best parse trees of berkeley parser and 1-best parse tree of bikel parser and stanford parser as inputs to the full parsing based system .,"therefore , for both chinese and english srl systems , we use the 3-best parse trees of berkeley parser and 1-best parse trees of bikel parser and stanford parser as inputs ." marcu and echihabi propose an approach considering word-based pairs as useful features .,marcu and echihabi proposed a method for cheap acquisition of training data for discourse relation sense prediction . "zeng et al proposed a deep convolutional neural network with softmax classification , extracting lexical and sentence level features .","zeng et al proposed an approach for relation classification where sentence-level features are learned through a cnn , which has word embedding and position features as its input ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." "the objective measures used were the bleu score , the nist score and multi-reference word error rate .","the metrics that were used to evaluate the model were bleu , ne dist and nist ." we used the implementation of the scikit-learn 2 module .,we implemented linear models with the scikit learn package . "reisinger and mooney and huang et al use context clustering to induce multiple word senses for a target word type , where each sense is represented by a different context feature vector .",reisinger and mooney and huang et al also presented methods that learn multiple embeddings per word by clustering the contexts . we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .,we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus . "in particular , neural language models have demonstrated impressive performance at the task of language modeling .","recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling ." we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . that suggests that original texts are significantly different from translated ones in various aspects .,numerous studies suggest that translated texts are different from original ones . 
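The pretrained fastText embeddings mentioned above can be loaded through gensim; the .bin path assumes a downloaded common-crawl model, and subword n-grams also cover unseen words:

from gensim.models.fasttext import load_facebook_vectors

wv = load_facebook_vectors("cc.en.300.bin")  # hypothetical local path
# Subword n-grams let fastText build vectors even for misspelled words.
print(wv["embeddinng"][:5])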
galley and manning use the shift-reduce algorithm to conduct hierarchical phrase reordering so as to capture long-distance reordering .,galley and manning propose a shift-reduce algorithm to integrate a hierarchical reordering model into phrase-based systems . word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context .,"word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) ." the anaphor is a pronoun and the referent is in the cache ( in focus ) .,"the anaphor is a definite noun phrase and the referent is in focus , that is ." we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .,"for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences ." visual question answering ( vqa ) is the task of predicting a suitable answer given an image and a question about it .,visual question answering ( vqa ) is a well-known and challenging task that requires systems to jointly reason about natural language and vision . "for input representation , we used glove word embeddings .",we use 300-dimensional word embeddings from glove to initialize the model . "discourse parsing is a difficult , multifaceted problem involving the understanding and modeling of various semantic and pragmatic phenomena as well as understanding the structural properties that a discourse graph can have .",discourse parsing is a challenging task and plays a critical role in discourse analysis . "in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .",we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . information extraction ( ie ) is the task of extracting factual assertions from text .,information extraction ( ie ) is the nlp field of research that is concerned with obtaining structured information from unstructured text . we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .,we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . dhingra et al proposed a multi-turn dialogue agent which helps users search knowledge base by soft kb lookup .,li et al and dhingra et al also proposed end-to-end task-oriented dialog models that can be trained with hybrid supervised learning and rl . the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .,uedin has used the srilm toolkit to train the language model and relies on kenlm for language model scoring during decoding . table 4 shows the bleu scores of the output descriptions .,table 2 gives the results measured by case-insensitive bleu-4 . our translation system is an in-house phrase-based system analogous to moses .,we used moses with the default configuration for phrase-based translation .
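KenLM scoring, as referenced above, is usually a few lines through its python module; the ARPA file path is an assumption:

import kenlm

model = kenlm.Model("en-5gram.arpa")  # hypothetical 5-gram LM file
sent = "this is a test sentence"
print(model.score(sent, bos=True, eos=True))  # log10 probability
print(model.perplexity(sent))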
"for french , we used 300,000 parallel sentences from the europarl training data parsed on the english side with the stanford parser and on the french side with the xerox xip parser .","we automatically parsed the french side of the corpus with the berkeley parser , while we used the fast vanilla pcfg model of the stanford parser for the english side ." using word or phrase representations as extra features has been proven to be an effective and simple way to improve the predictive performance of an nlp system .,previous work showed that word clusters derived from an unlabelled dataset can improve the performance of many nlp applications . "named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on .",named entity recognition ( ner ) is a frequently needed technology in nlp applications . we use the term-sentence matrix to train a simple generative topic model based on lda .,we use latent dirichlet allocation to obtain the topic words for each lexical pos . 1 a bunsetsu is the linguistic unit in japanese that roughly corresponds to a basic phrase in english .,"a bunsetsu is a japanese grammatical and phonological unit that consists of one or more content words such as a noun , verb , or adverb followed by a sequence of zero or more function words such as auxiliary verbs , postpositional particles , or sentence-final particles ." lda is a generative model that learns a set of latent topics for a document collection .,lda is a representative probabilistic topic model of document collections . trigram language models are implemented using the srilm toolkit .,the models are built using the sri language modeling toolkit . sentiment analysis is a multi-faceted problem .,"sentiment analysis is a growing research field , especially on web social networks ." the minimum error rate training was used to tune the feature weights .,the decoding weights were optimized with minimum error rate training . "first , arabic is a morphologically rich language ( cite-p-19-3-7 ) .","moreover , arabic is a morphologically complex language ." relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .,relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . "for word embedding , we used pre-trained glove word vectors with 300 dimensions , and froze them during training .","in our experiments , we use 300-dimension word vectors pre-trained by glove ." "collobert et al used word embeddings as inputs of a multilayer neural network for part-of-speech tagging , chunking , named entity recognition and semantic role labelling .","collobert et al used word embeddings as the input of various nlp tasks , including part-of-speech tagging , chunking , ner , and semantic role labeling ." the method proposed by huang et al incorporates the sinica word segmentation system to detect typos .,huang et al proposed a learning model based on chinese phonemic alphabet to detect chinese spelling errors . we use scikitlearn as machine learning library .,we implemented linear models with the scikit learn package . "for regularization , dropout is applied to the input and hidden layers .","dropout is performed at the input of each lstm layer , including the first layer ." 
we regularize our network using dropout with the drop-out rate tuned using development set .,we regularize our network using dropout with the dropout rate tuned using the development set . "for input representation , we used glove word embeddings .",we use the glove vectors of 300 dimension to represent the input words . "luong et al break words into morphemes , and use recursive neural networks to compose word meanings from morpheme meanings .","luong et al , 2013 ) utilized recursive neural networks in which inputs are morphemes of words ." we use the 100-dimensional glove 4 embeddings trained on 2 billion tweets to initialize the lookup table and do fine-tuning during training .,we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset . relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .,"relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text ." "instead , we use bleu scores since it is one of the primary metrics for machine translation evaluation .","to measure the translation quality , we use the bleu score and the nist score ." we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization .,we use 300-dimensional word embeddings from glove to initialize the model . the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." in this paper we attempt to deliver a framework useful for analyzing text in blogs .,in this paper we applied several probabilistic topic models to discourse within political blogs . parameter tuning was carried out using both k-best mira and minimum error rate training on a held-out development set .,system tuning was carried out using minimum error rate training optimised with k-best mira on a held out development set . the standard classifiers are implemented with scikit-learn .,we implement classification models using keras and scikit-learn . we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package .,we trained a linear log-loss model using stochastic gradient descent learning as implemented in the scikit-learn library . "for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword .","we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training ." "further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .","for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." "we use the datasets , experimental setup , and scoring program from the conll 2011 shared task , based on the ontonotes corpus .","we also run our systems on the ontonotes dataset , which was used for evaluation in conll 2011 shared task ." "the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit .",the model was built using the srilm toolkit with backoff and kneser-ney smoothing .
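The scikit-learn non-negative matrix factorization cited above for finding latent dimensions, sketched on a toy tf-idf matrix:

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["parser treebank grammar", "embedding vector semantics",
        "treebank grammar rules"]
X = TfidfVectorizer().fit_transform(docs)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)   # document-topic weights
H = nmf.components_        # topic-term weights
print(W.shape, H.shape)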
"for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus .",for our implementation we use 300-dimensional part-of-speech-specific word embeddings v i generated using the gensim word2vec package . this baseline uses pre-trained word embeddings using word2vec cbow and fasttext .,we use word embeddings of dimension 100 pretrained using word2vec on the training dataset . coreference resolution is the process of linking together multiple expressions of a given entity .,"coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities ." we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit . "sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .",the sentiment analysis is a field of study that investigates feelings present in texts . we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora .,we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . "among them , twitter is the most popular service by far due to its ease for real-time sharing of information .","twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” ." "lda is a probabilistic model of text data which provides a generative analog of plsa , and is primarily meant to reveal hidden topics in text documents .",plda is an extension of lda which is an unsupervised machine learning method that models topics of a document collection . "in our wok , we have used the stanford log-linear part-of-speech to do pos tagging .",we use stanford log-linear partof-speech tagger to produce pos tags for the english side . "sentiment classification is a hot research topic in natural language processing field , and has many applications in both academic and industrial areas ( cite-p-17-1-16 , cite-p-17-1-12 , cite-p-17-3-4 , cite-p-17-3-3 ) .","sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) ." "we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings , which we do not optimize during training .","in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus ." "part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications .","part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax ." "among them , twitter is the most popular service by far due to its ease for real-time sharing of information .","twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers ." table 4 shows the bleu scores of the output descriptions .,table 4 shows end-to-end translation bleu score results . 
"in our experiments , we used the srilm toolkit to build 5-gram language model using the ldc arabic gigaword corpus .",we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . "multi-task learning has been used with success in applications of machine learning , from natural language processing and speech recognition .","the combination of multi-task learning and neural networks has shown its advantages in many tasks , ranging from computer vision to natural language processing ." "unlike lemma prediction , we use a liblinear classifier to build linear svm classification models for gnp and case prediction .","for the task of event trigger prediction , we train a multi-class logistic regression classifier using liblinear ." "we used 4-gram language models , trained using kenlm .",the 5-gram target language model was trained using kenlm . "coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .",coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) . "sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review .",sentiment analysis is a research area in the field of natural language processing . "to evaluate the quality of our generated summaries , we choose to use the rouge 3 evaluation toolkit , that has been found to be highly correlated with human judgments .","to evaluate our approach , we classically adopted the rouge 2 framework , which estimates a summary score by its n-gram overlap with several reference summaries ." our system is based on the phrase-based part of the statistical machine translation system moses .,our method involved using the machine translation software moses . coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .,coreference resolution is the process of linking together multiple referring expressions of a given entity in the world . translation performances are measured with case-insensitive bleu4 score .,translation scores are reported using caseinsensitive bleu with a single reference translation . we use the skipgram model with negative sampling to learn word embeddings on the twitter reference corpus .,we use a count-based distributional semantics model and the continuous bag-of-words model to learn word vectors . the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit .,"the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit ." "we used yamcha 1 , which is a general purpose svm-based chunker .","we use the chunker yamcha , which is based on svms ." "following mirza and tonelli , we use the three million 300-dimensional word2vec vectors 5 pre-trained on part of the google news dataset .","for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news ." we obtained these scores by training a word2vec model on the wiki corpus .,"we used nwjc2vec 10 , which is a 200 dimensional word2vec model ." 
part-of-speech tagging is a crucial preliminary process in many natural language processing applications .,part-of-speech tagging is the process of assigning to a word the category that is most probable given the sentential context ( cite-p-4-1-2 ) . relation extraction is a core task in information extraction and natural language understanding .,relation extraction is the task of detecting and characterizing semantic relations between entities from free text . we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . "semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures .",semantic parsing is the task of mapping natural language to a formal meaning representation . "in this article , we present a framework to recommend relevant information in internet forums and blogs .","in this work , we present a framework for information recommendation in such social media as internet forums and blogs ." "sentence compression is the task of compressing long , verbose sentences into short , concise ones .",sentence compression is the task of generating a grammatical and shorter summary for a long sentence while preserving its most important information . we used the moses chart decoder and the moses toolkit for tuning and decoding .,"we used the moses decoder , with default settings , to obtain the translations ." "to identify content words , we used the nltk-lite tagger to assign a part of speech to each word .",we used nltk wordnet synsets for obtaining the ambiguity of the word . as a classifier we use an svm as implemented in svm light .,"as a classifier , we employ support vector machines as implemented in svm light ." we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .,we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing . we initialize the word embedding matrix with pre-trained glove embeddings .,we initialize the word embeddings for our deep learning architecture with the 100-dimensional glove vectors . "zhu et al suggest a probabilistic , syntax-based approach to text simplification .","zhu et al propose to use a tree-based translation model which covers splitting , dropping , reordering and substitution ." named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval .,named entity recognition ( ner ) is a key technique for ie and other natural language processing tasks . distributed representations for words and sentences have been shown to significantly boost the performance of an nlp system .,"word representations , especially brown clustering , have been shown to improve the performance of ner system when added as a feature ." "for the sick and msrvid experiments , we used 300-dimension glove word embeddings .","for representing words , we used 100 dimensional pre-trained glove embeddings ." results are reported using case-insensitive bleu with a single reference .,the translation quality is evaluated by case-insensitive bleu-4 .
a 4-gram language model is trained on the monolingual data by srilm toolkit .,a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language . we learn word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model .,our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model . the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,srilm toolkit was used to create up to 5-gram language models using the mentioned resources . pichotta and mooney showed that the lstm-based event sequence model outperformed previous co-occurrence-based methods for event prediction .,"pichotta and mooney applied an lstm recurrent neural network , coupled with beam search , to model event sequences and their representations ." we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora .,we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities . we follow honnibal et al in using the dynamic oracle-based search-and-learn training strategy introduced by goldberg and nivre .,we achieve this by following goldberg and nivre in using a dynamic oracle to create partially labelled training data . we trained linear-chain conditional random fields as the baseline .,our model is a first order linear chain conditional random field . improvements with additional measures always increase the overall reliability of the evaluation process .,the state of the art suggests that the use of heterogeneous measures can improve the evaluation reliability . major discourse annotated resources in english include the rst treebank and the penn discourse treebank .,the penn discourse treebank is the largest available discourse-annotated resource in english . relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .,relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text . the log-linear parameter weights are tuned with mert on the development set .,the nnlm weights are optimized as the other feature weights using minimum error rate training . we apply srilm to train the 3-gram language model of target side .,the srilm toolkit is used to train 5-gram language model . "since sarcasm is a refined and indirect form of speech , its interpretation may be challenging for certain populations .",sarcasm is a sophisticated speech act which commonly manifests on social communities such as twitter and reddit . "according to lakoff and johnson , humans use one concept in metaphors to describe another concept for reasoning and communication .","according to lakoff and johnson , metaphors are cognitive mappings of concepts from a source to a target domain ." "twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments .",twitter is a huge microblogging service with more than 500 million tweets per day 1 from different locations in the world and in different languages .
"transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language , otherwise known as translationby-sound .","transliteration is the task of converting a word from one writing script to another , usually based on the phonetics of the original word ." "linguistically , metaphor is defined as a language expression that uses one or several words to represent another concept , rather than taking their literal meanings of the given words in the context ( cite-p-14-1-6 ) .",metaphor is a natural consequence of our ability to reason by analogy ( cite-p-16-1-12 ) . we used kenlm with srilm to train a 5-gram language model based on all available target language training data .,"furthermore , we train a 5-gram language model using the sri language toolkit ." morphological analysis is the first step for most natural language processing applications .,morphological analysis is a staple of natural language processing for broad languages . we use pre-trained glove vector for initialization of word embeddings .,we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors . "for representing words , we used 100 dimensional pre-trained glove embeddings .",we used the 200-dimensional word vectors for twitter produced by glove . exact bleu and ter scores of the optimum on dev and the baseline are given in table 2 .,"the bleu , rouge and ter scores by comparing the abstracts before and after human editing are presented in table 5 ." in which a one-to-one topic correspondence is enforced between the lsa models .,the challenge is to enforce the one-to-one topic correspondence . noun phrase can refer to the entity denoted by a noun phrase that has already appeared .,sometimes a noun can refer to the entity denoted by a noun that has a different modifier . "in this paper , we propose a novel uncertainty classification scheme and construct the first uncertainty corpus based on social media data ¨c tweets in specific .","in this paper , we propose a variant of annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets ." "for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .",we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing . "through our experimental results , we demonstrated that our approach was able to accurately predict missing topic preferences of users ( 80 ¨c 94 % ) .",our experimental results show that this approach can accurately predict missing topic preferences of users accurately ( 80¨c94 % ) . we use the wn similarity jcn score on nouns since this gave reasonable results for mccarthy et al and it is efficient at run time given precompilation of frequency information .,we use the wn similarity jcn score since this gave reasonable results for and it is efficient at run time given precompilation of frequency information . text categorization is the classification of documents with respect to a set of predefined categories .,text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 ) . "coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities .",coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set . 
wiebe et al use statistical methods to automatically correct the biases in annotations of speaker subjectivity .,"wiebe et al analyze linguistic annotator agreement statistics to find bias , and use a similar model to correct labels ." "text classification is a well-studied problem in machine learning , natural language processing , and information retrieval .",text classification is a crucial and well-proven method for organizing the collection of large scale documents . conditional random fields are a class of undirected graphical models with exponent distribution .,conditional random fields are a type of discriminative probabilistic model proposed for labeling sequential data . we use datasets of semeval-2017 ‘ fine-grained sentiment analysis on financial microblogs and news ’ shared task .,we evaluate our proposed technique on a benchmark dataset of semeval-2017 shared task on financial sentiment analysis . semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .,semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application . "our second method is based on the recurrent neural network language model approach to learning word embeddings of mikolov et al and mikolov et al , using the word2vec package .","as a strong baseline , we trained the skip-gram model of mikolov et al using the publicly available word2vec 5 software ." we derive our gold standard from the semeval 2007 lexical substitution task dataset .,to do this we examine the dataset created for the english lexical substitution task in semeval . "to minimize the objective , we use stochastic gradient descent with the diagonal variant of adagrad .","following , we minimize the objective by the diagonal variant of adagrad with minibatches ." we ran mt experiments using the moses phrase-based translation system .,our system is based on the phrase-based part of the statistical machine translation system moses . we trained linear-chain conditional random fields as the baseline .,"we applied a supervised machine-learning approach , based on conditional random fields ." "for simplicity , we use the well-known conditional random fields for sequential labeling .",we use the mallet implementation of conditional random fields . rambow et al addressed the challenge of summarizing entire threads by treating it as a binary sentence classification task .,rambow et al proposed a sentence extraction summarization approach for email threads . this tree kernel was slightly generalized by culotta and sorensen to compute similarity between two dependency trees .,culotta and sorensen extended this work to estimate similarity between augmented dependency trees . "twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events .",twitter is a popular microblogging service which provides real-time information on events happening across the world . domain adaptation is a common concern when optimizing empirical nlp applications .,domain adaptation is a challenge for ner and other nlp applications . for data preparation and processing we use scikit-learn .,we implemented the different aes models using scikit-learn .
part-of-speech tagging is the process of assigning to a word the category that is most probable given the sentential context ( cite-p-4-1-2 ) .,part-of-speech tagging is the act of assigning each word in a sentence a tag that describes how that word is used in the sentence . we extract the corresponding feature from the output of the stanford parser .,we use the collapsed tree formalism of the stanford dependency parser . "ccgs are a linguistically-motivated formalism for modeling a wide range of language phenomena , steedman , 1996 , steedman , 2000 .","combinatory categorial grammar ccg is a categorial formalism that provides a transparent interface between syntax and semantics , steedman , 1996 , steedman , 2000 ." "we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word .",we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings . named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining .,named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type . word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .,"word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) ." "in particular , we consider conditional random fields and a variation of autoslog .","to exploit these kind of labeling constraints , we resort to conditional random fields ." "therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation .",word segmentation is a fundamental task for chinese language processing . "we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .",we used the sri language modeling toolkit with kneser-ney smoothing . we use pre-trained vectors from glove for word-level embeddings .,we use the pre-trained glove vectors to initialize word embeddings . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,"we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words ." "named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .","named entity recognition ( ner ) is the task of identifying named entities in free text—typically personal names , organizations , gene-protein entities , and so on ." we use the word2vec tool with the skip-gram learning scheme .,our cdsm feature is based on word vectors derived using a skip-gram model . "we also extract subject-verb-object event representations , using the stanford partof-speech tagger and maltparser .","to generate dependency links , we use the stanford pos tagger 18 and the malt parser ."
these parameters are tuned using mert algorithm on development data using a criterion of accuracy maximization .,the scaling factors are tuned with mert with bleu as optimization criterion on the development sets . relation extraction is a challenging task in natural language processing .,relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text . "the treebank consists of poems from the tang dynasty ( 618 – 907 ce ) , considered one of the crowning achievements in traditional chinese literature .","the treebank consists of approximately 30,000 sentences annotated with syntactic roles in addition to morphosyntactic features ." "for these implementations , we use mallet and svm-light package 3 .","in our implementation , we use the binary svm light developed by joachims ." "in the conditional distribution of a word , it is not only influenced by its context words , but also by a topic , which is an embedding vector .","the probability of a word is governed by its latent topic , which is modeled as a categorical distribution in lda ." we use the popular moses toolkit to build the smt system .,we implement the pbsmt system with the moses toolkit . "we use stochastic gradient descent with adagrad , l 2 regularization and minibatch training .","to optimize model parameters , we use the adagrad algorithm of duchi et al with l2 regularization ." we use word2vec tool for learning distributed word embeddings .,we use word embedding pre-trained on newswire with 300 dimensions from word2vec . rel-lda is an application of the lda topic model to the relation discovery task .,lda is a representative probabilistic topic model of document collections . "we instead use adagrad , a variant of stochastic gradient descent in which the learning rate is adapted to the data .","we use the adagrad algorithm to optimize the conditional , marginal log-likelihood of the data ." we parsed all corpora using the berkeley parser .,we adopt berkeley parser 1 to train our sub-models . conditional random fields are undirected graphical models trained to maximize a conditional probability .,conditional random fields are discriminative structured classification models for sequential tagging and segmentation . we initialize our word vectors with 300-dimensional word2vec word embeddings .,we obtained distributed word representations using word2vec 4 with skip-gram . "for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword .","for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm ." the special difficulty of this task is the length disparity between the compared pair .,the special difficulty of this task is the length disparity between the two semantic comparison texts . bilmes and kirchhoff generalize lattice-based language models further by allowing arbitrary factors in addition to words and classes .,this is effectively what bilmes and kirchhoff did in generalizing n-gram language models to factored language models . coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity .,"coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity ." 
"to alleviate this shortcoming , we performed smoothing of the phrase table using the good-turing smoothing technique .","to compensate this shortcoming , we performed smoothing of the phrase table using the good-turing smoothing technique ." previous works illustrated the need for qualitative analysis to identify error sources .,"for this reason , previous work often included qualitative analyses and carefully defined heuristics to address these problems ." "for simplicity , we use the well-known conditional random fields for sequential labeling .","we use a conditional random field sequence model , which allows for globally optimal training and decoding ." "we train each model on the training set for 10 epochs using word-level log-likelihood , minibatches of size 50 , and the adam optimization method with the default parameters suggested by kingma and ba .",we use binary cross-entropy as the objective function and the adam optimization algorithm with the parameters suggested by kingma and ba for training the network . coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .,coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world . "for all tasks , we use the adam optimizer to train models , and the relu activation function for fast calculation .",we train the model using the adam optimizer with the default hyper parameters . sentiment classification is a very domain-specific problem ; training a classifier using the data from one domain may fail when testing against data from another .,sentiment classification is the task of classifying an opinion document as expressing a positive or negative sentiment . we used srilm to build a 4-gram language model with kneser-ney discounting .,we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting . "this rough stemming is a preliminary technique , but it avoids the need for hand-crafted morphological information .",stemming is a popular way to reduce the size of a vocabulary in natural language tasks by conflating words with related meanings . linear combinations of word embedding vectors have been shown to correspond well to the semantic composition of the individual words .,it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words . socher et al used an rnn-based architecture to generate compositional vector representations of sentences .,socher et al introduce a matrix-vector recursive neural network model that learns compositional vector representations for phrases and sentences . "semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowksi et al. , 2010 ) .",semantic parsing is the task of mapping natural language to machine interpretable meaning representations . metonymy is defined as the use of a word or a phrase to stand for a related concept which is not explicitly mentioned .,"metonymy is a figure of speech , in which one expression is used to refer to the standard referent of a related one ( cite-p-18-1-13 ) ." link grammar is a grammar theory that is strongly dependencybased .,link grammar is a context-free lexicalized grammar without explicit constituents . 
latent dirichlet allocation is a widely adopted generative model for topic modeling .,"a particular generative model , which is well suited for the modeling of text , is called latent dirichlet allocation ." the parameters of our mt system were tuned on a development corpus using minimum error rate training .,we tuned parameters of the smt system using minimum error-rate training . we used the pre-trained google embedding to initialize the word embedding matrix .,we used the google news pretrained word2vec word embeddings for our model . "the representative ml approaches used in ner are hidden markov model , me , crfs and svm .","some of the very effective ml approaches used in ner are hmm , me , crfs and svm ." "for the embeddings trained on stack overflow corpus , we use the word2vec implementation of gensim 8 toolkit .",we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features . we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit .,"in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit ." we implement our lstm encoder-decoder model using the opennmt neural machine translation toolkit .,we used the opennmt-tf framework 4 to train a bidirectional encoder-decoder model with attention . we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing . semeval 2014 is a semantic evaluation of natural language processing ( nlp ) that comprises several tasks .,semeval is a yearly event in which teams compete in natural language processing tasks . relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence . we represent input words using pre-trained glove wikipedia 6b word embeddings .,we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove . we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . we therefore use kenlm to train a 6-gram language model with the monolingual data outlined in table 1 .,"in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm ."
"we trained the initial parser on the ccgbank training set , consisting of 39603 sentences of wall street journal text .","for the pos-tagger , we trained hunpos 10 with the wall street journal english corpus ." we seek to produce an automatic readability metric that is tailored to the literacy skills of adults with id .,we investigate linguistic features that correlate with the readability of texts for adults with intellectual disabilities ( id ) . we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit .,we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit . relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,"relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text ." "information extraction ( ie ) is a main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) .",information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks . "for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus .",a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . tuning is performed to maximize bleu score using minimum error rate training .,minimum error rate training is used for tuning to optimize bleu . we followed tiedemann by using linear svms implemented in liblinear .,we use liblinear 9 to solve the lr and svm classification problems . the word vectors of vocabulary words are trained from a large corpus using the glove toolkit .,the embedding layer in the model is initialized with 300-dimensional glove word vectors obtained from common crawl . we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . the stanford parser was used to generate the dependency parse information for each sentence .,each sentence in the dataset is parsed using stanford dependency parser . coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .,coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) . we used moses as the implementation of the baseline smt systems .,we adapted the moses phrase-based decoder to translate word lattices . the word embeddings are word2vec of dimension 300 pre-trained on google news .,we use word embedding pre-trained on newswire with 300 dimensions from word2vec . fader et al present a question answering system that learns to paraphrase a question so that it can be answered using a corpus of open ie triples .,"fader et al recently presented a scalable approach to learning an open domain qa system , where ontological mismatches are resolved with learned paraphrases ." 
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .,the english side of the parallel corpus is trained into a language model using srilm . "for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver .","classifier we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka ." "the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .","in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit ." the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit .,the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert . "in our implementation , we use a kn-smoothed trigram model .","in this and our other n-gram models , we used kneser-ney smoothing ." we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm . our machine translation system is a phrase-based system using the moses toolkit .,our smt system is a phrase-based system based on the moses smt toolkit . ding and palmer propose a syntax-based translation model based on a probabilistic synchronous dependency insertion grammar .,ding and palmer introduced a version of probabilistic extension of synchronous dependency insertion grammars to deal with the pervasive structure divergence . distributed representations for words and sentences have been shown to significantly boost the performance of an nlp system .,continuous representation of words and phrases are proven effective in many nlp tasks . "sentence compression is the task of compressing long , verbose sentences into short , concise ones .",sentence compression is a paraphrasing task where the goal is to generate sentences shorter than given while preserving the essential content . we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .,we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit . "word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .","word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) ." polysemy is a major characteristic of natural languages .,"however , polysemy is a fundamental problem for distributional models ." "relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .",relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
for our baseline we use the moses software to train a phrase based machine translation model .,we trained a phrase-based smt engine to translate known words and phrases using the training tools available with moses . relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text .,relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . "for the embeddings trained on stack overflow corpus , we use the word2vec implementation of gensim 8 toolkit .",we perform pre-training using the skipgram nn architecture available in the word2vec tool . we use the moses smt toolkit to test the augmented datasets .,we use the moses software package 5 to train a pbmt model . coreference resolution is the task of determining which mentions in a text refer to the same entity .,coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept . "thus , we pre-train the embeddings on a huge unlabeled data , the chinese wikipedia corpus , with word2vec toolkit .","for word-level embeddings , we pre-train the word vectors using word2vec on the gigaword corpus mentioned in section 4 , and the text of the training dataset ." the target-side language models were estimated using the srilm toolkit .,the sri language modeling toolkit was used to build 4-gram word- and character-based language models . "stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target .",stance detection is the task of classifying the attitude . previous work has assumed that either the target is mentioned in the text or that training data for every target is given . coreference resolution is the task of grouping mentions to entities .,coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . "table 2 summarizes machine translation performance , as measured by bleu , calculated on the full corpus with the systems resulting from each iteration .","table 2 presents the results from the automatic evaluation , in terms of bleu and nist scores , of 4 system setups ." "neural models , with various neural architectures , have recently achieved great success .","recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling ." relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .,relation extraction is the task of finding relationships between two entities from text . we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .,"however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit , stolcke , 2002 and ldc english gigaword corpora ." relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .,relation extraction is a fundamental task in information extraction .
"sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .","sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." experiments have shown that word embedding models are superior to conventional distributional models .,word embeddings have proven to be effective models of semantic representation of words in various nlp tasks . traditional semantic space models represent meaning on the basis of word co-occurrence statistics in large text corpora .,distributional semantic models represent the meanings of words by relying on their statistical distribution in text . "to get the the sub-fields of the community , we use latent dirichlet allocation to find topics and label them by hand .","to measure the importance of the generated questions , we use lda to identify the important subtopics 9 from the given body of texts ." shallow semantic representations could prevent the sparseness of deep structural approaches and the weakness of bow models .,shallow semantic representations can prevent the weakness of cosine similarity based models . text segmentation can be defined as the automatic identification of boundaries between distinct textual units ( segments ) in a textual document .,text segmentation is the task of determining the positions at which topics change in a stream of text . "luong et al break words into morphemes , and use recursive neural networks to compose word meanings from morpheme meanings .",luong et al created a hierarchical language model that uses rnn to combine morphemes of a word to obtain a word representation . "to do so , we utilized the popular latent dirichlet allocation , topic modeling method .",we used latent dirichlet allocation to perform the classification . we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .,we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing . word alignment is the problem of annotating parallel text with translational correspondence .,"word alignment is the task of identifying translational relations between words in parallel corpora , in which a word at one language is usually translated into several words at the other language ( fertility model ) ( cite-p-18-1-0 ) ." "we present a novel approach to fsd , which operates in constant time / space .",we present a novel approach to fsd that operates in math-w-2-1-0-91 per tweet . twitter is a well-known social network service that allows users to post short 140 character status update which is called “ tweet ” .,twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products . we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,we use srilm for training a trigram language model on the english side of the training data . "relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. 
, between pairs of entities in text .",relation extraction is the task of recognizing and extracting relations between entities or concepts in texts . "additionally , a back-off 2-gram model with goodturing discounting and no lexical classes was built from the same training data , using the srilm toolkit .","a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit , ." "for all the systems we train , we build n-gram language model with modified kneserney smoothing using kenlm .","for the fst representation , we used the the opengrm-ngram language modeling toolkit and used an n-gram order of 4 , with kneser-ney smoothing ." "twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers .","twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them ." semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles .,semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . tuning is performed to maximize bleu score using minimum error rate training .,minimum error rate training is applied to tune the cn weights . "semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols .",semantic parsing is the task of mapping natural language to a formal meaning representation . word alignment is a fundamental problem in statistical machine translation .,word alignment is a key component of most endto-end statistical machine translation systems . in our experiments we use word2vec as a representative scalable model for unsupervised embeddings .,we learn our word embeddings by using word2vec 3 on unlabeled review data . we use the popular word2vec 1 tool proposed by mikolov et al to extract the vector representations of words .,"for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window ." we use the skipgram model with negative sampling to learn word embeddings on the twitter reference corpus .,we use the word2vec skip-gram model to learn initial word representations on wikipedia . we initialize our word vectors with 300-dimensional word2vec word embeddings .,we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove . we use srilm train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .,we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus . relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .,relation extraction is a fundamental task in information extraction . "for tagging , we use the stanford pos tagger package .","for pos tagging , we used the stanford pos tagger ." gru has been shown to achieve comparable performance with less parameters than lstm .,gru and lstm have been shown to yield comparable performance . 
"as a further test , we ran the stanford parser on the queries to generate syntactic parse trees .",we used the stanford parser to extract dependency features for each quote and response . "we compare the final system to moses 3 , an open-source translation toolkit .","we use moses , a statistical machine translation system that allows training of translation models ." "we use the word2vec tool to train monolingual vectors , 6 and the cca-based tool for projecting word vectors .",we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm . "in this task , we use the 300-dimensional 840b glove word embeddings .","for input representation , we used glove word embeddings ." coreference resolution is the process of linking multiple mentions that refer to the same entity .,coreference resolution is the task of determining when two textual mentions name the same individual . relation extraction is a crucial task in the field of natural language processing ( nlp ) .,relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text . we used minimum error rate training mert for tuning the feature weights .,we used minimum error rate training to optimize the feature weights . "word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .",word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context . "word embeddings have proved useful in downstream nlp tasks such as part of speech tagging , named entity recognition , and machine translation .","importantly , word embeddings have been effectively used for several nlp tasks , such as named entity recognition , machine translation and part-of-speech tagging ." "to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .","for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." "pitler et al experiment with polarity tags , verb classes , length of verb phrases , modality , context and lexical features and found that word pairs with non-zero information gain yield best results .","pitler et al demonstrated that features developed to capture word polarity , verb classes and orientation , as well as some lexical features are strong indicator of the type of discourse relation ." relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .,relation extraction is the task of recognizing and extracting relations between entities or concepts in texts . "for all the systems we train , we build n-gram language model with modified kneserney smoothing using kenlm .","after standard preprocessing of the data , we train a 3-gram language model using kenlm ." "we used moses , a phrase-based smt toolkit , for training the translation model .",we used moses with the default configuration for phrase-based translation . we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .,"for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit ." 
our system is built using the open-source moses toolkit with default settings .,we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems . the phrase-based approach developed for statistical machine translation is designed to overcome the restrictions of many-to-many mappings in word-based translation models .,the well-known phrase-based statistical translation model extends the basic translation units from single words to continuous phrases to capture local phenomena . to train the models we use the default stochastic gradient descent classifier provided by scikit-learn .,we use the linearsvc classifier as implemented in scikit-learn package 17 with the default parameters . for collapsed syntactic dependencies we use the stanford dependency parser .,"for english , we use the stanford parser for both pos tagging and cfg parsing ." all feature models are estimated in the in-domain corpus with standard techniques .,the language models are trained on the corresponding target parts of this corpus using the sri language model tool . we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .,we use the 300-dimensional skip-gram word embeddings built on the google-news corpus . a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora .,"firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing ." "sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .",sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express . "we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .",we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . morphological disambiguation is the task of selecting the correct morphological parse for a given word in a given context .,"morphological disambiguation is a well studied problem in the literature , but lstm-based contributions are still relatively scarce ." "the bleu score , introduced in , is a highly-adopted method for automatic evaluation of machine translation systems .",the bleu is a classical automatic evaluation method for the translation quality of an mt system . sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data .,sequence labeling is a structured prediction task where systems need to assign the correct label to every token in the input sequence . word alignment is a well-studied problem in natural language computing .,word alignment is a key component in most statistical machine translation systems . english annotations were all produced using the stanford core-nlp toolkit .,the syntax tree features were calculated using the stanford parser trained using the english caseless model . 
"the pdtb is the largest corpus annotated for discourse relations , formed by newspaper articles from the wall street journal .",the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english . "twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events .","twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” ." we used the implementation of the scikit-learn 2 module .,we used standard classifiers available in scikit-learn package . "named entity disambiguation ( ned ) is the task of resolving ambiguous mentions of entities to their referent entities in a knowledge base ( kb ) ( e.g. , wikipedia ) .","named entity disambiguation ( ned ) is the task of linking mentions of entities in text to a given knowledge base , such as freebase or wikipedia ." "for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .","for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." "for the language model , we used srilm with modified kneser-ney smoothing .",we trained a 4-gram language model on this data with kneser-ney discounting using srilm . "table 4 presents case-insensitive evaluation results on the test set according to the automatic metrics bleu , ter , and meteor .","table 2 presents the results from the automatic evaluation , in terms of bleu and nist scores , of 4 system setups ." in part because such distributions can be estimated from positive data .,"as shown , these distributions are efficiently estimable from positive data ." we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .,we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit . "li et al propose a hybrid method based on wordnet and the brown corpus to incorporate semantic similarity between words , semantic similarity between sentences , and word order similarity to measure the overall sentence similarity .","li et al proposed a hybrid method based on wordnet and the brown corpus to incorporate semantic similarity between words , semantic similarity between sentences , and word order similarity to measure overall sentence similarity ." "to support this point , we further train a topic model based on lda by treating each poem as a document .",we use the term-sentence matrix to train a simple generative topic model based on lda . we used the bleu score to evaluate the translation accuracy with and without the normalization .,we evaluated the translation quality using the case-insensitive bleu-4 metric . our evaluation metric is case-insensitive bleu-4 .,case-insensitive bleu-4 is our evaluation metric . the n-gram models are created using the srilm toolkit with good-turning smoothing for both the chinese and english data .,the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing . semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .,semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence . 
"we implement this proposal with a hierarchical dirichlet process , which allows for sharing categories across data groups .","to test this hypothesis , we extended our model to incorporate bigram dependencies using a hierarchical dirichlet process ." "to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus .",we use word2vec tool which efficiently captures the semantic properties of words in the corpus . pang and lee attempted to improve the performance of an svm classifier by identifying and removing objective sentences from the texts .,pang and lee propose a graph-based method which finds minimum cuts in a document graph to classify the sentences into subjective or objective . we pre-trained word embeddings using word2vec over tweet text of the full training data .,"we train a word2vec cbow model on raw 517 , 400 emails from the en-ron email dataset to obtain the word embeddings ." we use the penn wsj treebank for our experiments .,we used small portions of the penn wsj treebank for the experiments . minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set .,we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set . we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora .,we trained word vectors with the two architectures included in the word2vec software . shen et al describe the result of filtering rules by insisting that target-side rules are well-formed dependency trees .,shen et al propose the well-formed dependency structure to filter the hierarchical rule table . semeval is a yearly event in which international teams of researchers work on tasks in a competition format where they tackle open research questions in the field of semantic analysis .,semeval 2014 is a semantic evaluation of natural language processing ( nlp ) that comprises several tasks . we implement logistic regression with scikit-learn and use the lbfgs solver .,to train the models we use the default stochastic gradient descent classifier provided by scikit-learn . user affect parameters increase the usefulness of these models .,user affect parameters can increase the usefulness of these models . syntactic information is a useful feature to phrase reordering .,structured syntactic knowledge is important for phrase reordering . "in this baseline , we applied the word embedding trained by skipgram on wiki2014 .",we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset . "wikipedia is a large , multilingual , highly structured , multi-domain encyclopedia , providing an increasingly large wealth of knowledge .",wikipedia is a free multilingual online encyclopedia and a rapidly growing resource . "in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context . a bunsetsu consists of one independent word and more than zero ancillary words .,a bunsetsu consists of one independent word and zero or more ancillary words . 
context-free grammar augmented with λ-operators is learned given a set of training sentences and their correct logical forms .,a semantic parser is learned given a set of sentences and their correct logical forms using smt methods . we trained the statistical phrase-based systems using the moses toolkit with mert tuning .,we preprocessed the training corpora with scripts included in the moses toolkit . relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .,relation extraction is the task of finding semantic relations between entities from text . we evaluated our models using bleu and ter .,we measured translation performance with bleu . we use the attention-based nmt model introduced by bahdanau et al as our text-only nmt baseline .,"we follow the neural machine translation architecture by bahdanau et al , which we will briefly summarize here ." the parameter for each feature function in log-linear model is optimized by mert training .,the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert . "for language models , we use the srilm linear interpolation feature .","for language modeling , we used the trigram model of stolcke ." "( jiang et al , 2007 ) put forward a ptc framework based on the svm model .","( jiang et al , 2007 ) put forward a ptc framework based on support vector machine ." "sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .","sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-3-0 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) ." we also measure overall performance with uncased bleu .,we report bleu scores computed using sacrebleu . sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .,sentiment analysis is the study of the subjectivity and polarity ( positive vs. negative ) of a text ( cite-p-7-1-10 ) . we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .,our word embeddings are initialized with 100-dimensional glove word embeddings . we use the cmu twitter part-of-speech tagger to select only instances in the verb sense .,"hence , we use the cmu twitter pos-tagger to obtain the part-of-speech tags ." language models were built using the srilm toolkit 16 .,a 4-gram language model is trained by the srilm toolkit . "in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) .",word sense disambiguation ( wsd ) is the task of determining the correct meaning or sense of a word in context . we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .,we use the well-known word embedding model that is a robust framework to incorporate word representation features . relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .,relation extraction is the task of finding relationships between two entities from text .
relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .,relation extraction is the task of detecting and characterizing semantic relations between entities from free text . discourse parsing is a fundamental task in natural language processing that entails the discovery of the latent relational structure in a multi-sentence piece of text .,discourse parsing is a challenging task and is crucial for discourse analysis . the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing .,a 3-gram language model is trained on the target side of the training data by the srilm toolkit with modified kneser-ney smoothing . turney and littman compute the pointwise mutual information of the target term with each seed positive and negative term as a measure of their semantic association .,turney and littman use pointwise mutual information and latent semantic analysis to determine the similarity of the word of unknown polarity with the words in both positive and negative seed sets . we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing . marcu and wong present a joint probability model for phrase-based translation .,"( marcu and wong , 2002 ) presents a joint probability model for phrase-based translation ." we use pre-trained glove vector for initialization of word embeddings .,"for the word-embedding based classifier , we use the glove pre-trained word embeddings ." "for decoding , we used the state-of-the-art phrase-based smt toolkit moses with default options , except for the distortion limit .","we used the moses machine translation decoder , using the default features and decoding settings ." "coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities .","coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity ." "for all models , we use the 300-dimensional glove word embeddings .","for the mix one , we also train word embeddings of dimension 50 using glove ." the character embeddings are computed using a method similar to word2vec .,the word embeddings are initialized with 100-dimensional vectors pre-trained by the cbow model . translation quality can be measured in terms of the bleu metric .,we use bleu to evaluate translation quality . we use srilm for training a trigram language model on the english side of the training corpus .,"for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided ." "part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications .",part-of-speech ( pos ) tagging is the task of assigning each of the words in a given piece of text a contextually suitable grammatical category . we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
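The turney-and-littman pairs above compute pointwise mutual information between a target term and positive/negative seed words as a semantic-orientation score. A hedged sketch of that idea, with all counts hypothetical:

    # Illustrative sketch of a PMI-based semantic-orientation score:
    # SO(w) = PMI(w, positive seeds) - PMI(w, negative seeds).
    import math

    def pmi(count_xy, count_x, count_y, total):
        # PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) )
        return math.log2((count_xy / total) / ((count_x / total) * (count_y / total)))

    total = 10000  # hypothetical corpus size
    counts = {"w": 50, "excellent": 80, "poor": 60}
    cooc = {("w", "excellent"): 10, ("w", "poor"): 2}

    so = pmi(cooc[("w", "excellent")], counts["w"], counts["excellent"], total) \
       - pmi(cooc[("w", "poor")], counts["w"], counts["poor"], total)
    print(so)  # a positive value suggests positive orientation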
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." the standard classifiers are implemented with scikit-learn .,the training of the classifiers has been performed with scikit-learn . we used latent dirichlet allocation to construct our topics .,to learn the topics we use latent dirichlet allocation . we also report the results using bleu and ter metrics .,we report case-sensitive bleu and ter as the mt evaluation metrics . the trigram language model is implemented in the srilm toolkit .,these features are the output from the srilm toolkit . "sentiment analysis ( sa ) is the determination of the polarity of a piece of text ( positive , negative , neutral ) .","sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people ’ s opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( cite-p-17-1-0 ) ." semantic role labeling was pioneered by gildea and jurafsky .,early work in frame-semantic analysis was pioneered by gildea and jurafsky . we estimated lexical surprisal using trigram models trained on 1 million hindi sentences from emille corpus using the srilm toolkit .,we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit . semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts .,semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences . ccg is a linguistically-motivated categorial formalism for modeling a wide range of language phenomena .,"ccg is a lexicalized , mildly context-sensitive parsing formalism that models a wide range of linguistic phenomena ." "for the implementation of discriminative sequential model , we chose the wapiti 4 toolkit .","we used the wapiti toolkit , based on the linear-chain crfs framework ." "in arabic , there is a reasonable number of sentiment lexicons but with major deficiencies .",arabic is a morphologically complex language . we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .,"we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems ." "we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .",we use the glove pre-trained word embeddings for the vectors of the content words . we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .,we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . all systems are evaluated using case-insensitive bleu .,the translation quality is evaluated by case-insensitive bleu-4 . a 4-gram language model generated by sri language modeling toolkit is used in the cube-pruning process .,the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
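For the latent dirichlet allocation mentions above, a minimal gensim LDA sketch; the toy documents, number of topics, and passes are arbitrary choices for illustration, not from the cited work:

    # Illustrative sketch of topic modeling with gensim LDA.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    docs = [["language", "model", "training"],
            ["topic", "model", "inference"]]

    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
    print(lda.print_topics())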
"for a fair comparison to our model , we used word2vec , that pretrain word embeddings at a token level .",we chose the skip-gram model provided by word2vec tool developed by for training word embeddings . bracketing transduction grammar is a special case of synchronous context free grammar .,inversion transduction grammar is a well studied synchronous grammar formalism . "the baseline system is a phrase-based smt system , built almost entirely using freely available components .",our smt system is a phrase-based system based on the moses smt toolkit . the parameter for each feature function in log-linear model is optimized by mert training .,the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric . semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application .,semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries . coreference resolution is the process of linking multiple mentions that refer to the same entity .,"coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem ." we used kenlm with srilm to train a 5-gram language model based on all available target language training data .,we use srilm for training a trigram language model on the english side of the training data . "we use stochastic gradient descent with adagrad , l 2 regularization and minibatch training .","we apply online training , where model parameters are optimized by using adagrad ." we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .,"for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus ." we used the implementation of the scikit-learn 2 module .,"we employed the machine learning tool of scikit-learn 3 , for training the classifier ." the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .,we also use a 4-gram language model trained using srilm with kneser-ney smoothing . we use 5-grams for all language models implemented using the srilm toolkit .,we implement an in-domain language model using the sri language modeling toolkit . we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .,"for the semantic language model , we used the srilm package and trained a tri-gram language model with the default goodturing smoothing ." our mt decoder is a proprietary engine similar to moses .,it is a standard phrasebased smt system built using the moses toolkit . "the language models were trained with kneser-ney backoff smoothing using the sri language modeling toolkit , .",we also use a 4-gram language model trained using srilm with kneser-ney smoothing . zelenko et al used the kernel methods for extracting relations from text .,zelenko et al and culotta and sorensen proposed kernels for dependency trees inspired by string kernels . 
"in view of this background , this paper presents a novel error correction framework called error case frames .",this paper presents a novel framework called error case frames for correcting preposition errors . "to this end , we propose a new annotation scheme to study how preferences are linguistically expressed in two different corpus .","to this end , we propose a new annotation scheme to study how preferences are linguistically expressed in dialogues ." "following , we develop a continuous bag-of-words model that can effectively model the surrounding contextual information .","we employ a neural method , specifically the continuous bag-of-words model to learn high-quality vector representations for words ." semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .,"semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) ." we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .,we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit . kalchbrenner et al show that a cnn for modeling sentences can achieve competitive results in polarity classification .,kalchbrenner et al propose a dynamic cnn model using a dynamic k-max pooling mechanism which is able to generate a feature graph which captures a variety of word relations . we use stanford ner for named entity recognition .,we use the stanford named entity recognizer for this purpose . we perform pre-training using the skipgram nn architecture available in the word2vec tool .,we trained word vectors with the two architectures included in the word2vec software . we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing . we use the liblinear package with the linear kernel 5 .,we use liblinear 9 to solve the lr and svm classification problems . we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input .,we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . "coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities .",coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity . our baseline is a standard phrase-based smt system .,we use a standard phrasebased translation system . twitter is a microblogging site where people express themselves and react to content in real-time .,twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-8-1-9 ) . "for all models , we use the 300-dimensional glove word embeddings .",we use the glove word vector representations of dimension 300 . we use a set of 318 english function words from the scikit-learn package .,we feed our features to a multinomial naive bayes classifier in scikit-learn . 
relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .,relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . the model parameters of word embedding are initialized using word2vec .,our cdsm feature is based on word vectors derived using a skip-gram model . ambiguity is the task of building up multiple alternative linguistic structures for a single input .,ambiguity is a central issue in natural language processing . turian et al used unsupervised word representations as extra word features to improve the accuracy of both ner and chunking .,"similarly , turian et al collectively used brown clusters , cw and hlbl embeddings , to improve the performance of named entity recognition and chunking tasks ." improved decision list can raise the f-measure of error detection .,the improved decision list can raise the f-measure of error detection . "for the support vector machine , we used svm-light .",regarding svm we used linear kernels implemented in svm-light . "to keep consistent , we initialize the embedding weight with pre-trained word embeddings .","we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embeddings for each word ." "word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .",word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context . we use pre-trained embeddings from glove .,we use pre-trained 100 dimensional glove word embeddings . we will show translation quality measured with the bleu score as a function of the phrase table size .,"in order to measure translation quality , we use bleu 7 and ter scores ." we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .,our cdsm feature is based on word vectors derived using a skip-gram model . an in-house language modeling toolkit was used to train the 4-gram language models with modified kneser-ney smoothing over the web-crawled data .,"we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit ." "we use the srilm toolkit to obtain the per-word-perplexity of each suggested phrase , and normalize it by the maximal perplexity of the language model .","as with our original refined language model , we estimate each coarse language model using the srilm toolkit ." rnns have proven to be a very powerful model in many natural language tasks .,recurrent neural network architectures have proven to be well suited for many natural language generation tasks . "by extracting structures from translated texts , we can generate a phylogenetic tree that reflects the “ true ” distances among the source languages .","specifically , we automatically reconstruct phylogenetic language trees from monolingual texts ( translated from several source languages ) ." the evaluation metric for the overall translation quality is case-insensitive bleu-4 .,translation results are evaluated using the word-based bleu score .
"on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .",for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words . we used glove vectors trained on common crawl 840b 4 with 300 dimensions as fixed word embeddings .,"in this task , we use the 300-dimensional 840b glove word embeddings ." word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .,"many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) ." word sense disambiguation is the process of determining which sense of a homograph is correct in a given context .,word sense disambiguation is the task of assigning a sense to a word based on the context in which it occurs . text categorization is the classification of documents with respect to a set of predefined categories .,"since text categorization is a task based on predefined categories , we know the categories for classifying documents ." we use the word2vec skip-gram model to learn initial word representations on wikipedia .,we learn our word embeddings by using word2vec 3 on unlabeled review data . transition-based methods have given competitive accuracies and efficiencies for dependency parsing .,"such approaches , for example , transition-based and graph-based models have attracted the most attention in dependency parsing in recent works ." discourse parsing is the process of discovering the latent relational structure of a long form piece of text and remains a significant open challenge .,"discourse parsing is a natural language processing ( nlp ) task with the potential utility for many other natural language processing tasks ( webber et al. , 2011 ) ." "to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .","in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm ." "the present paper is a report of these investigations , their results and conclusions drawn therefrom .",the present paper is the first to use a reranking parser and the first to address the adaptation scenario for this problem . the target-side language models were estimated using the srilm toolkit .,these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit . these models can be tuned using minimum error rate training .,the parameter weights are optimized with minimum error rate training . "more importantly , event coreference resolution is a necessary component in any reasonable , broadly applicable computational model of natural language understanding ( cite-p-18-3-4 ) .",event coreference resolution is the task of determining which event mentions in a text refer to the same real-world event . model parameters that maximize the loglikelihood of the training data are computed using a numerical optimization method .,model parameters 位 i are estimated using numer-ical optimization methods so as to maximize the log-likelihood of the training data . 
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .,relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text . we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .,we implement an in-domain language model using the sri language modeling toolkit . dependency parsing is a topic that has engendered increasing interest in recent years .,"dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them ." coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) .,coreference resolution is a well known clustering task in natural language processing . we adopt pretrained embeddings for word forms with the provided training data by word2vec .,for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words . latent variable models such as latent dirichlet allocation and latent semantic analysis have been widely used to extract topic models from corpora .,popular topic modeling techniques include latent dirichlet allocation and probabilistic latent semantic analysis . "additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized .",coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity . "for this task , we use glove pre-trained word embedding trained on common crawl corpus .","for this , we utilize the publicly available glove 1 word embeddings , specifically ones trained on the common crawl dataset ."