diff --git "a/ParaSCI-ACL/test.csv" "b/ParaSCI-ACL/test.csv"
new file mode 100644
--- /dev/null
+++ "b/ParaSCI-ACL/test.csv"
@@ -0,0 +1,2346 @@
+sentence1,sentence2
+"we run mecab 4 with ipa dictionary 5 on hadoop 6 , an open source software that implemented the map-reduce framework , for parallel word segmenting , part-of-speech tagging , and kana pronunciation annotating .","we run mecab on hadoop 11 , an open source software that implemented the map-reduce framework , for word segmenting and pos tagging the data ."
+"to accurately represent knowledge , they must represent noun phrases , concepts , and the many-to-many mapping from noun phrases to concepts .","to minimize such confusion , a system must separately represent noun phrases , the underlying concepts to which they can refer , and the many-to-many “can refer to” relation between them ."
+"for all systems , we trained a 6-gram language model smoothed with modified kneser-ney smoothing using kenlm .",we improved the system combination by adding a 5-grams language model with modified kneser-ney smoothing .
+"in our experiments , we obtain this information from the stanford parser but any other broadly similar parser could be used instead .",for our experiments we use the unlexicalised berkeley parser and the lexicalised form of the stanford parser .
+"wedekind achieves this goal by expanding first nodes that are connected , that is , whose semantics is instantiated .","wedekind achieves the reordering by first generating nodes that are connected , that is , whose semantics is instantiated ."
+"and so our statistical model integrates linguistic , acoustic and situational information .","our method is based on a theoretically clear statistical model that integrates linguistic , acoustic and situational information ."
+in our approach is to formulate summarization as a phrase rather than sentence .,a key aspect of our approach is the representation of content by phrases rather than entire sentences .
+tang et al use enriched task-specific word embeddings and show improvement in a twitter sentiment classification task .,"using a similar corpus , tang et al induced sentiment specific word embeddings , for the twitter domain ."
+"in this paper , we have presented a case study of the annoying behaviors .","more specifically , we focus on a case study of analyzing annoying behaviors ."
+"for simplicity , we use the well-known conditional random fields for sequential labeling .","to predict labels , we train conditional random fields , which are directly optimized for splitting ."
+this paper describes the process of having humans annotate a corpus of emails .,this paper describes the largest scale annotation project involving the enron email corpus to date .
+"figure 1 shows the topologies of the conventional chain-structured lstm and the treelstm , illustrating the input , cell and hidden node at a time step t .","figure 1 shows the sequence structured lstm of hochreiter and schmidhuber and the treestructured lstm of , illustrating the input , cell and hidden nodes at a certain time step t ."
+"in this work , we investigate the effectiveness of using domain adaptation .","in our work , we focus on supervised domain adaptation ."
+we then review the observation made by shimoyama and her e-type analysis of ihrc .,we then looked at the argument made by shimoyama and her e-type analysis of ihrc .
+"because verb usage highly depends on the usage context , which is hard to capture and represent .",the verb choice highly depends on its usage context which is not consistently captured by local features .
+"davidov et al , 2007 ) proposed a method for unsupervised discovery of concept specific relations , requiring initial word seeds .","davidov et al , 2007 ) introduce the use of term frequency patterns for relationship discovery ."
+"in this paper , we investigate pool-based active learning and joint optimization techniques to collect user feedback for identifying important concepts .","in addition to the joint optimization framework using ilp , we explore pool-based active learning to further reduce the required feedback ."
+"by clustering the semantically related patterns into groups , we can both overcome the data .","second , we cluster the extracted patterns to identify the semantically related patterns ."
+"in this study , we have extended the techniques of automatic humor recognition to different types of humor as well as different languages .","in addition , we constructed a chinese dataset to evaluate the generality of the method performance on humor recognition against different languages ."
+galley et al proposed an mt model which produces target parse trees for string inputs in order to exploit the syntactic structure of the target language .,galley et al describe an algorithm for inducing a string-to-tree grammar using a parallel corpus with syntax trees on target side .
+"the problem of polarity classification has been studied in detail by wilson et al , who used a set of carefully devised linguistic features .","the problem of polarity classification has been studied in detail by wilson , wiebe , and hoffmann , who used a set of carefully devised linguistic features ."
+previous works showed that conditional random fields can outperform other sequence labeling models like memms in abbreviation generation tasks .,previous works proved that conditional random fields can outperform other sequence labeling models like memms in abbreviation generation tasks .
+"predicate vectors are learned from the contexts of preceding arguments , and are required to contribute to the prediction of upcoming arguments .","similarly , predicate vectors are learned from the contexts of preceding arguments , and are required to contribute to the prediction of upcoming arguments ."
+shorter sentences that convey the same meaning is a challenging problem .,"processing long , complex sentences is challenging ."
+recent work on temporal resolution focuses primarily on news .,previous work on adapting temporal taggers primarily focus on scaling up to more languages .
+"in this paper , we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution .","to address the problem above , in this paper , we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution ."
+we give an extended lexrank with integer linear programming to optimize sentence selection .,we then use an extension of the lexrank algorithm to rank sentences .
+for details on the computation of this code length see .,details on the computation of this code length are given in .
+"towards this overall goal , we describe the construction of a resource that contains more than 160 , 000 document pairs that are known to talk about the same events .",our work presents a method to automatically construct a large corpus of text pairs describing the same underlying events .
+"for example , cut can be used in the sense of “ cutting costs , ” which carries with it restrictions on instruments , locations , and so on that somewhat overlap with eliminate .","for example , cut can be used in the sense of “cutting costs , ” which carries with it restrictions on instruments , locations , and so on that somewhat overlap with eliminate as in “eliminating costs.”"
+"however , to train maxent , we do not need manually labeled training data .",we adopt this method as well but with no use of manually labeled data in training .
+"to adapt to user and domain changes , we performed an application-oriented analysis of different online algorithms .","focusing on the adaptability to user and domain changes , we report the results of comparative experiments with two online algorithms and the standard batch approach ."
+we proposed to solve the semantic textual similarity task .,"we proposed udl , a model for estimating sentence pair semantic similarity ."
+the classifier used was svm light described in using a linear kernel .,the taxonomy kernel was trained using the svm 脨 脴 package .
+named entity recognition ( ner ) is the first step for many tasks in the fields of natural language processing and information retrieval .,named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining .
+we use latent semantic analysis to perform this representational transformation .,"as an alternative , we apply latent semantic analysis to compute a reduced-rank representation ."
+"in standard penntreebank ( cite-p-24-1-21 ) evaluations , our parser achieves a significant accuracy improvement ( + 1 . 8 % ) .","in standard ptb evaluation , our parser achieved a 1.8 % accuracy improvement over the parser of cite-p-24-1-1 , which shows the effect of combining search and learning ."
+"as a classification problem , we focused on identification and examination of various linguistic features such as verb class , tense , aspect , mood , modality , and experience .","we formulate the problem as a classification task using various linguistic features including tense , mood , aspect , modality , experiencer , and verb classes ."
+we describe a highly efficient monotone search algorithm with a complexity .,we described a highly efficient monotone search algorithm .
+"in this paper , we present three different approaches for the textual semantic similarity task of semeval 2012 .",in this paper we describe the three approaches we submitted to the semantic textual similarity task of semeval 2012 .
+we also show how the proposed architecture can be applied to domain adaptation .,"finally , we also show how this architecture can be used for domain adaptation ."
+"hierarchical convolutional neural network ( shcnn ) , which integrates local and more global representations of a message , is first presented to estimate the conversation-level similarity between closely posted messages .","in the first stage , we propose the siamese hierarchical convolutional neural network ( shcnn ) to estimate conversation-level similarity between pairs of closely posted messages ."
+"tresner-kirsch ( 2012 ) use the logarithm of the frequency for some experimental runs , reporting that it improved accuracy .","vogel and tresner-kirsch ( 2012 ) use the logarithm of the frequency for some experimental runs , reporting that it improved accuracy in some cases ."
+"in addition , we demonstrate that topicvec can derive coherent topics based only on one document , which is not possible for topic .","in addition , we illustrate with an example that our method can generate coherent topics even based on only one document ."
+predicting the subject of a disease / symptom .,identifying whether the subject has a disease/symptom .
+xiong and zhang employ a sentence-level topic model to capture coherence for document-level machine translation .,"xiong and zhang attempt to improve lexical coherence via a topic-based model , using a hidden topic markov model to determine the topic in the source sentence ."
+information exchange through social media concerning various health challenges has been extensively studied .,analysis of social media content for health has been a topic of wide interest .
+"to ensure that task-specific features of given task do not exist in the shared space , we exploit the concept of adversarial training into shared feature space .","inspired by adversarial networks , we incorporate adversarial training into shared space to guarantee that specific features of tasks do not exist in shared space ."
+"in particular , we carefull y studied the fastus system of hobbs et al , who have clearly and eloquently set forth the advantage s of this approach .","in particular , we carefully studied the fastus system of hobbs et al , who have clearly and eloquently set forth the advantages of this approach ."
+we evaluate the output of the unsupervised pos tagger as a direct replacement for the output of a fully supervised pos tagger for the task of shallow parsing .,"finally , building on this promising result we use the output of the unsupervised pos tagger as a direct replacement for the output of a fully supervised pos tagger for the task of shallow parsing ."
+"punyakanok et al , 2005 , typically involves multiple stages to 1 ) parse the input , 2 ) identify arguments , 3 ) classify those arguments , and then 4 ) run inference to make sure the final labeling for the full sentence does not violate any linguistic constraints .","punyakanok et al , 2005a , typically involves multiple stages to 1 ) parse the input , 2 ) identify arguments , 3 ) classify those arguments , and then 4 ) run inference to make sure the final labeling for the full sentence does not violate any linguistic constraints ."
+"on three narrow domain translation tasks , caused little increase in the translation time , and compared favorably to another alternative retrieval-based method with respect to accuracy , speed , and simplicity of implementation .","it also causes little increase in the translation time , and compares favorably to another alternative retrieval-based method with respect to accuracy , speed , and simplicity of implementation ."
+in this exchange share one belief that we have not represented .,the user and advisor in this exchange share one belief that we have not represented .
+the semantic textual similarity task examines semantic similarity at a sentence-level .,the task of semantic textual similarity is aimed at measuring the degree of semantic equivalence between a pair of texts .
+ucca ’ s approach that advocates automatic learning of syntax from semantic supervision stands in contrast to the traditional view of generative grammar ( cite-p-11-1-7 ) .,ucca’s representation is guided by conceptual notions and has its roots in the cognitive linguistics tradition and specifically in cognitive grammar ( cite-p-11-3-6 ) .
+"by processing n-best lists or lattices , which allows us to consider both the segmented and desegmented output .","in this paper , we expand our translation options by desegmenting n-best lists or lattices ."
+"based on the structured perceptron , we propose a general framework of “ violation-fixing ” perceptrons for inexact search with a theoretical guarantee for convergence .",we have presented a unifying framework of “violation-fixing” perceptron which guarantees convergence with inexact search .
+cite-p-12-5-6 improved the crf method by employing the large margin method .,cite-p-12-5-7 reported a pos tagger based on cyclic dependency network .
+"we term the “ word generalization ” problem , which refers to how children associate a word such as dog with a meaning at the appropriate category level .","we use the term “word generalization” to refer to this problem of associating a word with the meaning at an appropriate category level , given some sample of experiences with the word ."
+"in nature , the entities themselves tend to be complex and expressed as noun phrases containing multiple modifiers , giving rise to examples like oxygen depletion in the upper 500 m of the ocean or timing and magnitude of surface temperature evolution in the southern hemisphere in deglacial proxy records .","variables tend to be complex rather than atomic entities and expressed as noun phrases containing multiple modifiers , e.g . oxygen depletion in the upper 500 m of the ocean or timing and magnitude of surface temperature evolution in the southern hemisphere in deglacial proxy records ."
+huang et al and huang et al mainly focused on the generative hmm models .,"huang , harper , and wang and huang , eidelman , and harper mainly focused on the generative hmm models ."
+"we compare our proposed model 3 with three existing models including cnn , minie , and clausie by corro and gemulla .","baselines we compare our method with both competitive neural and non-neural models , including rnnoie , openie4 , 5 clausie , and props ."
+"in this paper , we propose to employ statistical machine translation to improve question retrieval and enrich the question representation with the translated words from other languages .","in this work , we propose an alternative way to address the word ambiguity and word mismatch problems by taking advantage of potentially rich semantic information drawn from other languages ."
+"models , along with two non-composition models , namely the adjective and the noun models , are used to explain the systematic variance in neural activation .","furthermore , vector-based semantic models can explain a significant portion of systematic variance in the observed neural activity ."
+"in chinese abbreviation prediction , we also use a substring tagging strategy to generate local substring tagging candidates .",we utilize this substring based approach and obtain this local tagging information by labeling on the substring of the full character sequence .
+stance classification is the task of automatically identifying users ’ positions about a specific target from text ( cite-p-18-1-10 ) .,"stance classification is the task of determining from text whether the author of the text is in favor of , against , or neutral towards a target of interest ."
+we experimentally evaluated the test collection for single document summarization contained in the rst discourse treebank distributed by the linguistic data consortium 1 .,we conducted an experimental evaluation on the test collection for single document summarization evaluation contained in the rst discourse treebank distributed by the linguistic data consortium 3 .
+"in order to evaluate the performance of our tensorbased factorization model of compositionality , we make use of the sentence similarity task for transitive sentences , defined in grefenstette and sadrzadeh .","in order to evaluate the performance of our new co-compositional model with prototype projection and word representation learning algorithm , we make use of the disambiguation task of transitive sentences developed by grefenstette and sadrzadeh ."
+"in this paper , it is possible to provide theoretical guarantees for distributed online passive aggressive learning .",in this paper we investigate distributed training strategies for the structured perceptron as a means to reduce training times when computing clusters are available .
+"following , we describe the algorithm in a deductive system .","following chiang , we describe our algorithms in a deductive system ."
+"system performance is evaluated on newstest 2011 using bleu , meteor , and ter .",performance is measured in terms of bleu and ter computed using the multeval script .
+"in order to cluster lexical items , we use the algorithm proposed by brown et al , as implemented in the srilm toolkit .","to determine the word classes , one can use the algorithm of brown et al for finding the classes ."
+machinery can be obtained without going beyond the power of mildly context-sensitive grammars .,we thus show that these analyses can be obtained without requiring power beyond mildly context-sensitive grammars .
+"experiments have been conducted on four publicly available datasets , including three synthetic conversation datasets and one real conversation dataset from reddit .",experiments were conducted with four publicly available datasets of conversations from reddit and irc channels .
+"in this section , we generalize the ideas regarding network-based dsms presented in , for the case of more complex structures .","in this section , we summarize the main ideas of dsms that were proposed in for building semantic networks , which are extended here for the creation of affective networks ."
+"in this paper , we conducted a systematic comparative analysis of language in different contexts of bursty topics , including web search , news media .","in this paper , we have studied and compared how the web content reacts to bursty events in multiple contexts of web search and online media ."
+gaustad showed that human-generated pseudo-words are more difficult to classify than random choices .,gaustad showed that evaluations using pseudowords can over-estimate the accuracy of a word sense disambiguation system on real data .
+the generation of referring expressions is a core ingredient of most natural language generation systems .,the generation of referring expressions is an integral part of most natural language generation systems .
+evaluation results on the same corpus demonstrate that our system performs better than an existing widely used multi-document summarization system .,our evaluation demonstrates that scisumm achieves higher quality summaries than a state-of-the-art multidocument summarization system ( cite-p-15-3-4 ) .
+huang et al utilise a bi-directional lstm with a sequential conditional random layer using a gazetteer and senna word embeddings to obtain superior performance .,huang et al use a bilstm with a crf layer in addition to making use of explicit spelling and context features along with word embeddings .
+all experiments used the europarl parallel corpus as sources of text in the languages of interest .,"for the english-german experiments , the translation system was trained and tested using a part of the europarl corpus ."
+"in this work , we use fasttext for training embeddings .","additionally , we use averaged word embeddings ."
+which can be seen as a compromise of the hierarchical phrase-based model and the tree-to-string model .,"in some sense , our model can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model , specifically"
+it was followed by schwenk who applied neural network for language modeling in large scale vocabulary speech recognition and obtained a noticeable improvement in word error rate .,"his work has been followed by schwenk , who has shown that neural network language models actually work very well in the state-of-theart speech recognition systems ."
+"brockett et al treat error correction as a translation task , and solve it by using the noisy channel model .","similar to our approach , brockett et al view error correction as a machine translation problem ."
+"when dealing with highly inflected or agglutinative languages , as well as analytic languages , of which chinese is the focus of this article .","but a number of augmentations and changes become necessary when dealing with highly inflected or agglutinative languages , as well as analytic languages , of which chinese is the focus of this article ."
+deeb-rnn achieves better performance as compared to the state-of-the-art methods in terms of both recall and f1-measure .,experimental results show that the deebrnn model outperforms both feature-based and representation-based state-of-the-art methods in terms of recall and f1-measure .
+label propagation is a semi-supervised algorithm which needs labeled data .,traditional label propagation is a graph-based semi-supervised learning approach with a single view .
+"for each domain , we formulate the review spam detection tasks for multiple domains , e . g . , hotel , restaurant , and so on , as a multi-task learning problem .","we formulate review spam detection for multiple domains ( e.g. , hotel and restaurant ) as a multi-task learning problem ."
+birke and sarkar proposed the trope finder system to recognize verbs with non-literal meaning using word sense disambiguation and clustering .,birke and sarkar propose a minimally supervised algorithm for distinguishing between literal and non-literal usages of verbs in context .
+ibm model 4 is essentially a better model .,our model is also easier to understand than ibm model 4 .
+"given a set of question-answer pairs as the development set , we use the minimum error rate training algorithm to tune the feature weights λ m i in our proposed model .","to set the model parameters θ , we used the minimum error rate training algorithm to maximize the f-measure of the 1-best alignment of the model on a development set consisting of sentence pairs with manually generated alignments ."
+jiang and zhai introduce a general instance weighting framework for model adaptation .,jiang and zhai recently proposed an instance re-weighting framework to take domain shift into account .
+"when the physician ' s plan is deficient , several problems are generally detected , and thus multiple critiques are independently produced .","our experience with a critiquing system shows that when the system detects problems with the user 's performance , multiple critiques are often produced ."
+in this paper we study the problem of interpreting and verbalizing visual information using abstract scenes .,in this paper we study the problem of interpreting visual scenes and rendering their content using natural language .
+"in this paper , we focus on designing a review generation model that is able to leverage both user and item information .","in this paper , we argue that the extra aspect ( opinion ) information extracted using these previous works can effectively improve the quality of generated reviews ."
+our system outperforms a comparable well-published system and a previously published form of our system .,"the system outperforms a comparable publicly available system , as well as a previously published form of our system ."
+we use the maximum entropy model as implemented in the stanford corenlp toolset .,we use a maximum entropy classifier which allows an efficient combination of many overlapping features .
+there is a method to automatically learn the weights but it requires reference phrase pairs not easily available in resource constrained scenarios like ours .,there is a method to automatically learn the interpolation weights but it requires reference phrase pairs which are not easily available .
+as word vectors the authors use word2vec embeddings trained with the skip-gram model .,"the word vectors are learned using a skip-gram model with negative sampling , implemented in the word2vec toolkit ."
+"in this paper , we propose bridge correlational neural networks which can learn common representations for multiple views .","to this end , we propose bridge correlational neural networks ( bridge corrnets ) which learn aligned representations across multiple views using a pivot view ."
+"in machine translation , improved language models have resulted in significant improvements in translation performance .","large language models have been shown to improve quality , especially in machine translation ."
+this paper describes our participation in the language identification in code-switched data task at codeswitch 2014 .,following work has been described in the first shared task on language identification in code-switched data held at emnlp 2014 .
+"to obtain a vector representation of a sentence and score each target word in the sentence , and for heterographic puns , we computed the semantic similarity between cluster center vectors of each sentence .","for homographic puns , we calculated the semantic similarity between sense vectors of each target word in the sentence to obtain its sentence vector and score each target word ."
+"charniak , 2000 ) extends pcfg and achieves similar performance to .",charniak 2000 ) describes a different method which achieves very similar performance to .
+"in this paper , we give an overview of our participation in the timeline generation task of semeval-2015 .","in this paper , we presented an overview of our participation in the timeline generation task of semeval-2015 ."
+"in this framework , review feature words and opinion words are organized into categories in a simultaneous and iterative manner .","we propose an iterative reinforcement framework , and under this framework , review feature words and opinion words are organized into categories in a simultaneous and iterative manner ."
+one key reason is that the objective functions of topic models do not correlate well with human judgements .,chang et al stated that one reason is that the objective function of topic models does not always correlate well with human judgments .
+"djuric et al propose an approach that learns low-dimensional , distributed representations of user comments in order to detect expressions of hate speech .","djuric et al were the first to propose a self-taught learning strategy in the context of hateful speech detection , where they simultaneously learn low-dimension representations of documents and words in a common vector space ."
+sen proposed a latent topic model to learn the context entity association .,sen modeled the topic coherence as the groups of co-occurring entities .
+"each essay was represented through the sets of features described below , using term frequency and the liblinear scikit-learn implementation of support vector machines with ovr , one vs .","we used term frequency weighting scheme and the liblinear scikit-learn implementation of support vector machines with ovr , one vs ."
+"in this paper , we present a neural keyphrase extraction framework for microblog posts that takes their conversation context into account , where four types of neural encoders , namely , averaged embedding , rnn , attention , and memory .","in this paper , we present a neural keyphrase extraction framework that exploits conversation context , which is represented by neural encoders for capturing salient content to help in indicating keyphrases in target posts ."
+most spoken languages use brahmiderived scripts .,all languages use different brahmidescended scripts .
+"to address the generalization concern , we propose a method inspired by yarowsky .",our method is based on a decision list proposed by yarowsky .
+"in recent years , many accurate phrase-structure parsers have been developed , .","recently , many accurate statistical parsers have been proposed for english , for japanese ) ."
+"to manipulate strings , current methods usually have no access to most information available in decoding phase .",current smt systems usually decode with single translation models and can not benefit from the strengths of other models in decoding phase .
+goldwater et al explored a bigram model built upon a dirichlet process to discover contextual dependencies .,"goldwater et al justified the importance of context as a means of avoiding undersegmentation , through a method based on hierarchical dirichlet processes ."
+"in our study , we explore the use of generalized lexical features for predictive opinion analysis .",our experiment focuses on investigating aspects of predictive opinions by learning lexical patterns and comparing them with judgment opinions .
+three systems are unsupervised and relied on dictionary-based similarity measures .,the three systems are unsupervised and relied on dictionary-based similarity measures .
+named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance .,"named entity recognition is the task of finding entities , such as people and organizations , in text ."
+paul s . jacobs phred : a generator for natural language interfaces .,paul s. jacobs phred : a generator for natural language interfaces
+"as a building block : we model the left and right sequences of modifiers using rnns , which are composed in a recursive manner to form a tree .","in this work , we use the rnn abstraction as a building block , and recursively combine several rnns to obtain our tree representation ."
+"we perform standard phrase extraction to obtain our synthetic phrases , whose translation probabilities are again estimated based on the single-word probabilities pfrom our translation model .","finally , we compute the translation probabilities according to the estimated co-occurrence counts , using the standard training method in phrase-based smt ."
+"in this paper , 4 word boundary tags are employed : b ( beginning of a word ) , m ( middle part of a word ) , e ( end of a word ) .","in this paper , 4 word boundary tags are employed : b ( beginning of a word ) , m ( middle part of a word ) , e ( end of a word ) and s ( single character ) ."
+"in the first approach , heuristic rules are used to find the dependencies or penalties for label inconsistency are required to handset ad-hoc .","however , in the first approach , heuristic rules are used to find the dependencies or penalties for label inconsistency are required to handset ad-hoc ."
+"while data sparsity is a common problem of many nlp tasks , it is much more severe for sentence compression , leading cite-p-12-3-4 to question the applicability of the channel model for this task altogether .","data sparsity is the bane of natural language processing ( nlp ) ( cite-p-15-5-2 , cite-p-15-3-7 ) ."
+"from the oceanic language family , our model achieves a cluster purity score over 91 % , while maintaining pairwise recall over 62 % .","on the larger oceanic data , our model can achieve cluster purity scores of 91.8 % , while maintaining pairwise recall of 62.1 % ."
+"to address the first processing stage , we build phrase-based smt models using moses , an open-source phrase-based smt system and available data .","for this experiment , we train a standard phrase-based smt system over the entire parallel corpus ."
+"søgaard and goldberg , 2016 ) showed that inducing a priori knowledge in a multi-task model , by ordering the tasks to be learned , leads to better performance .","søgaard and goldberg , 2016 ) showed that a higher-level task can benefit from making use of a shared representation learned by training on a lower-level task ." +"however , our results suggest that the tensor-based methods are more robust than the basic hal model .",results suggest that the tensor-based methods we propose are more robust than the basic hal model in some respects . +we use the simplified factual statement extractor model 6 of heilman and smith .,"to do the task of simplification , we use the simplified factual statement extraction toolkit ." +"character-level models obtained better accuracies than previous work on segmentation , pos-tagging and word-level dependency parsing .",experimental results show that the character-level dependency parsing models outperform the word-based methods on all the data sets . +"users also subjectively rate the rl-based policy on average 10 % higher , and 49 % higher .",users also subjectively rate the rl-based policy on average 10 % higher . +we propose a novel approach that is based on pattern discovery and supervised learning to successfully identify erroneous / correct sentences .,"in this paper , we propose a new approach to detecting erroneous sentences by integrating pattern discovery with supervised learning models ." +we also explore bi-lstm models to avoid the detailed feature engineering .,we also explore bi-lstm models with fewer features . +english tweets are identified using a compression-based language identification tool .,english tweets are identified automatically using a compression-based language identification tool . 
+zhang et al propose a simplified neural network which contains only one hidden layer and use three different pooling operations .,zhang et al explore a shallow convolutional neural network and achieve competitive performance . +named entity recognizer ( ner ) on upper case text can be improved by using a mixed case .,"named entity recognition on mixed case text is easier than on upper case text , where case information is unavailable ." +because common topics in weakly-correlated collections are usually found in the tail of the document-topic distribution of a sufficiently large set of topics .,"topic correlations in weakly-related collections typically lie in the tail of the topic distribution , where they would be overlooked by models unable to fit large numbers of topics ." +universal dependencies is a cross-linguistically consistent annotation scheme for dependency-based treebanks .,universal dependencies is a framework for cross-linguistically consistent treebank annotation . +"likewise , skadiņa et al and skadiņa et al argue that the advantages of comparable corpora in machine translation are considerable and more beneficial than those of parallel corpora .",skadiņa et al and skadiņa et al argue that the advantages of comparable corpora in machine translation are considerable and more beneficial than those of parallel corpora . +we also use a simple bagging technique to deal with the sparsity of boundary tags .,"for large datasets , we use an ensemble technique inspired by bagging ." +convolution tree kernel defines a feature space consisting of all subtree types of parse trees and counts the number of common subtrees as the syntactic similarity between two parse trees .,convolution tree kernel defines a feature space consisting of all subtree types of parse trees and counts the number of common subtrees to express the respective distance in the feature space . 
+"when a peco-structured query is formulated , it is matched against the peco elements in the documents .",one can then match the peco elements in the query to the elements detected in documents . +stochastic models have been widely used in pos tagging task for simplicity and language independence of the models .,stochastic models have been widely used in pos tagging for simplicity and language independence of the models . +the selectional preference distribution was defined in terms of selectional association measures introduced by resnik over the noun classes automatically produced by sun and korhonen .,the selectional preference distribution is defined in terms of selectional association measure introduced by resnik over the noun classes automatically produced by sun and korhonen . +"for clustering , we use the implementation from the cluto software package .","particularly , we used a partitioning algorithm of the cluto library for clustering ." +as well as words in sentential contexts demonstrate the superiority of this approach over both prototype and exemplar based vector-space models .,the results demonstrate the superiority of a clustered approach over both traditional prototype and exemplar-based vector-space models . +word embeddings have been proven helpful in many nlp tasks .,all of them have proven that word embedding is helpful in nlp tasks . +"using manually compiled document-independent features , we develop a novel summary system called priorsum , which applies the enhanced convolutional neural networks to capture the summary .","to this end , we develop a novel summarization system called priorsum to automatically exploit all possible semantic aspects latent in the summary prior nature ." +svm was used since it is known to perform well for sentiment classification .,we choose to use svm since it performs the best for sentiment classification . 
+"in the cross-domain setting , and a traditional ilp method does not work well in the in-domain setting .","on the other hand , a deletion-based method does not face such a problem in a cross-domain setting ." +like ours is the first proposal of its kind .,"as far as we know , our work is the first of its kind ." +statistical model is still useful in that it allows us to quantify precise improvements in performance upon the integration of each specific cue into the model .,but a bare-bones statistical model is still useful in that it allows us to quantify precise improvements in performance upon the integration of each specific cue into the model . +"in the pattern selection process , we propose to capture and exploit these relationships using pattern-based entailment graphs .",we propose to exploit entailment relationships holding among re patterns by structuring the candidate set in an entailment graph . +we apply domain adversarial training only on the topic inputs from learned topic representations .,we consider the domain adversarial training network on the user factor adaptation task . +"in addition , we extend the sick dataset to include unscored fluency-focused sentence comparisons .","in addition , we extend the sick dataset to include unscored fluency-focused sentence comparisons and we propose a toy metric for evaluation ." +we obtained these scores by training a word2vec model on the wiki corpus .,we mainly used pre-trained embeddings of words from the conll 2017 shared task trained on word2vec . +demberg uses a fourth-order hidden markov model to tackle orthographic syllabification in german .,"demberg applies a fourth-order hmm to the syllabification task , as a component of a larger german text-tospeech system ." 
+"extending the results of lapata et al , we confirmed that cooccurrence frequency can be used to estimate the plausibility of an adjective-noun pair .",lapata et al demonstrated that the cooccurrence frequency of an adjective-noun combination is the best predictor of its rated plausibility . +"we have introduced semeval-2018 task 5 , a referential quantification task of counting events and participants in local news articles with high ambiguity .","this paper discusses semeval-2018 task 5 : a referential quantification task of counting events and participants in local , long-tail news documents with high ambiguity ." +"translations are filtered using a measure of similarity to the original words , based on latent semantic analysis scores .",words or expressions are aligned using a word similarity model based on a combination of latent semantic analysis and semantic distance in the wordnet knowledge graph . +and we plan to explore this possibility in future work .,we plan to explore this possibility in future work . +"in this paper , we proposed a lifelong learning approach to sentiment classification .",this paper proposes a novel lifelong learning ( ll ) approach to sentiment classification . +"backoff language models can be efficiently and exactly collapsed into a single normalized backoff model , contradicting cite-p-21-3-0 .","contrary to cite-p-21-3-0 , we proved that these models can be exactly collapsed into a single backoff language model ." +while is measured across ten randomized embedding spaces trained on the training data of the ptb ( determined using language modeling splits ( cite-p-16-1-18 ) ) .,stability is measured across ten randomized embedding spaces trained on the training portion of the ptb ( determined using language modeling splits ( cite-p-16-1-18 ) ) . 
+experimental results show that our model significantly outperforms the direct transfer method .,experimental results show that our method consistently outperforms various baselines across languages . +"to address this drawback , ranking models were proved to be a useful solution , .","to address this drawback , ranking models were proved to be useful solutions , ." +"however , multi-word translation units have been shown to improve the quality of smt dramatically .",using multi-word phrases instead of individual words as the basic translation unit has been shown to increase translation performance . +"as table 7 shows , our system clearly outperforms the system proposed by silfverberg and hulden with regard to f1-score on tags .","in addition , our system substantially improves upon the baseline presented by silfverberg and hulden ." +a chain can be defined as a path between a verb node and any other node in the dependency tree passing through zero or more intermediate nodes .,a pattern is defined as a path between a verb node and any other node in the dependency tree passing through zero or more intermediate nodes . +"for annotation tasks , snow et al showed that crowdsourced annotations are similar to traditional annotations made by experts .","snow et al showed , however , that crowdsourced annotations can produce similar results to annotations made by experts ." +our analysis shows that the high-performance of the acm comes .,our analysis shows that the high-performance of the acm lies in the asymmetry of the model . +"in the second phase , it selects an optimal substitute for each given word from the synonyms according to the context .","in the second phase , it selects an optimal substitute for each given word from the synonyms according to the context in math-w-3-4-0-74 ." 
+"in reviews , customers might express different sentiment towards various aspects of a product or service .",reviews depict sentiments of customers towards various aspects of a product or service . +state-of-the-art smt models achieve excellent results by extracting phrases to induct the translation rules .,"with large amounts of data , phrase-based translation systems achieve state-of-the-art results in many typologically diverse language pairs ." +"this approach was pioneered by galley et al with numerous variants in subsequent research , usually referred to as tree-to-tree , tree-to-string and string-to-tree , depending on where the analyses are found in the training data .","this approach was pioneered by galley et al , and there has been a lot of research since , usually referred to as tree-to-tree , treeto-string and string-to-tree , depending on where the analyses are found in the training data ." +"in this work , we take a more direct approach and treat a word type and its allowed pos tags .","in this paper , we make a simplifying assumption of one-tag-per-word ." +discourse referents showed that script knowledge is a highly important factor in determining human discourse expectations .,our hypothesis is that script knowledge may be a significant factor in human anticipation of discourse referents . +"in our experiments , this method is shown to be very effective to boost the performance of keyphrase extraction .",we find that topic-sensitive propagation can largely help boost the performance . +this makes the information such as cur- 5 related work benamara and dizier present the cooperative question answering approach which generates natural language responses for given questions .,related work benamara and dizier present the cooperative question answering approach which generates natural language responses for given questions . +and we plan to explore such semantic signals in future work .,we plan to incorporate such signals in future work . 
+semantic textual similarity is the task of finding the degree of semantic equivalence between a pair of sentences .,the semantic textual similarity is a core problem in the computational linguistic field . +"in the semi-supervised adaboost algorithm , we investigate two boosting methods in this paper .","based on this semi-supervised boosting algorithm , we investigate two boosting methods for word alignment ." +"phrase structure trees in ctb have been semi-automatically converted to deep derivations in the ccg , lfg , and hpsg formalisms .","phrase structure trees in ctb have been semiautomatically converted to deep derivations in the ccg , lfg , tag and hpsg formalisms ." +cahill et al reported an application of the pcfg approximation technique in lfg parsing and the recovery of long distance dependencies on the f-structures .,cahill et al present a method to automatically obtain approximations of ldd resolution for lfg resources acquired from a treebank . +the iterative scaling algorithm combined with monte carlo simulation is used to train the weights in this generative model .,"the obtained average observations are set as constraints , and the improved iterative scaling algorithm is employed to evaluate the weights ." +"the idea is that documents are represented as random mixtures over latent topics , where each topic is characterized by a distribution over words .","the basic idea behind topic models is that documents are mixtures of topics , where a topic is a probability distribution over words ." +"and an analysis of the results shows that the generalization methods of resnik and li and abe appear to be overgeneralizing , at least for this task .","an analysis of the results has shown that the other approaches appear to be overgeneralizing , at least for this task ." 
+and our results suggest that the attention of generative networks can be successfully biased to look at sentences relevant to a topic .,our method works by modifying the attention mechanism of a pointer-generator neural network to make it focus on text relevant to a topic . +noise-contrastive estimation has been a successful alternative to train continuous space language models with large vocabularies .,noise-contrastive estimation has been successfully adopted for training neural language models with large vocabularies . +we use approximate randomization for significance testing .,we compute statistical significance using the approximate randomization test . +we look at word level perplexity with respect to the word frequency .,"in this work , we present , at word level , the correlation between perplexity and word frequency ." +"kurokawa et al show that for an english-to-french mt system , a translation model trained on an english-to-french data performs better than one trained on french-to-english translations .",kurokawa et al showed that french-to-english smt systems whose translation models were constructed from human translations from french to english yielded better translation quality than ones created from translations in the other direction . +"following the common practice of adaptation research on this data , we take the union of bn and nw as the source domain and bc , cts and wl as three different target domains .","following the common practice of domain adaptation research on this dataset , we use news as the source domain and bc , cts , wl as three different target domains ." +experiments were run with a variety of machine learning algorithms using the scikit-learn toolkit .,all the experiments are developed using the scikit-learn machine learning library . 
+"considering that users do not know in which terms the categories are expressed , they might query the same concept .","considering that users do not know in which terms the categories are expressed , they might query the same concept by a paraphrase ." +"however , recnns require a predefined topological structure , like parse tree , to encode .","however , recnns need a given external topological structure , like syntactic tree ." +"we use an iterative rule distillation process to effectively transfer rich structured knowledge , expressed in the declarative first-order logic language , into parameters of general neural networks .","specifically , we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks ." +"on the one hand , we do not expect such pairs to occur in any systematic pattern , so they could obscure an otherwise more systematic pattern .","on the one hand , we do not expect such pairs to occur in any systematic pattern , so they could obscure an otherwise more systematic pattern in the high pmi bins ." +"in the previous task , we introduce a task and a dataset consisting in a set of crowdsourced human ratings of spatial similarity for object pairs .","to evaluate the quality of the spatial representations learned in the previous task , we introduce a task consisting in a set of 1,016 human ratings of spatial similarity between object pairs ." +"evaluation on a standard data set shows that our method consistently outperforms the best performing previously reported method , which is supervised .",evaluation on a standard data set shows that our method consistently outperforms the supervised state-of-the-art method for the task . +"by using entice , we are able to increase nell ’ s knowledge density by a factor of 7 . 7 .",we find that entice is able to significantly increase nell ’ s knowledge density by a factor of 7.7 at 75.5 % accuracy . 
+"we obtain useful information from wikipedia by the tool named java wikipedia library 2 , which allows to access all information contained in wikipedia .","for retrieving the discussion pages , we use the java wikipedia library , which offers efficient , databasedriven access to the contents of wikipedia ." +"in bohnet et al , the goal is to improve parsing accuracy for morphologically rich languages by performing morphological and syntactic analysis jointly instead of in a pipeline .","bohnet et al presented a joint approach for morphological and syntactic analysis for morphologically rich languages , integrating additional features that encode whether a tag is in the dictionary or not ." +chen and ng further extend the study of zhao and ng by proposing several novel features and introducing the coreference links between zps .,"on the base of zhao and ng , chen and ng further investigate their model , introducing two extensions to the resolver , namely , novel features and zero pronoun links ." +"besides , riezler et al and zhou et al proposed the phrase-based translation models for question and answer retrieval .","in the same space , riezler et al develop smt-based query expansion methods and use them for retrieval from faq pages ." +and is available for download at github . com / sdl-research / hyp .,hyp is available for download at github.com/sdl-research/hyp . +"here too , we used the weka implementation of the naïve bayes model and the svmlight implementation of the svm .","for our learners , we used the naïve bayes implementation in the weka machine learning toolkit , a support vector machine , 3 and the crf implementation in mallet ." +"in this method , dual decomposition is used as a framework to take advantage of both hpsg parsing and coordinate structure .","in this paper , we present a statistical analysis model for coordination disambiguation that uses the dual decomposition as a framework ." 
+dependencies are incorporated into the crf model via a ( relatively ) straightforward feature extraction scheme .,the dependencies were included in the crf model using a relatively straightforward feature expansion scheme . +phrase table pruning is the technique of removing ineffective phrase pairs from a phrase table to make it smaller while minimizing the performance degradation .,"phrase table pruning is the act of removing phrase pairs from a phrase table to make it smaller , ideally removing the least useful phrases first ." +discourse cohesion model can help better capture discourse structure information .,identifying long-span dependencies between discourse units is crucial to improve discourse parsing performance . +"by using well calibrated probabilities , we are able to estimate the sense priors effectively .","in this paper , we explore the estimation of sense priors by first calibrating the probabilities from naive bayes ." +morphologically rich languages ( mrl ) are languages for which important information concerning the syntactic structure .,"morphologically rich languages ( mrl ) are languages in which much of the structural information is contained at the word-level , leading to high level word-form variation ." +"in our experiments , we used the kyoto university text corpus and kyoto university web document leads corpus as manually tagged corpora .","in our experiments , we used the kyoto university text corpus 11 and the kyoto university web document leads corpus 12 as japanese treebanks ." +textual units is represented as a rooted tree whose nodes correspond to the minimum textual units .,hence a document is represented as a nested tree where each node of the outer tree corresponds to an inner tree . 
+word embeddings represent each word as a low-dimensional vector where the similarity of vectors captures some aspect of semantic similarity of words .,"word embeddings , as a low-dimensional continuous vectors of words are regarded to be an efficient representations of word semantics ." +jokinen et al use a topic model based on a tree organisation of domain knowledge to detect topic shifts .,jokinen et al combine a manually built tree for main topics with an n-gram model for topic shifts . +"using the expectation maximization algorithm with viterbi decoding , we adopt the iterative parameter estimation procedure to solve the maximum likelihood estimation problem .","as there is no closed form solution for the maximum likelihood estimate , we resort to iterative training via the em algorithm ." +"snow et al used dependency parses to automatically learn such patterns , which they used to augment wordnet with new hypernym relations .","snow et al use syntactic path patterns as features for supervised hyponymy and synonymy classifiers , whose training examples are derived automatically from wordnet ." +"riaz and girju propose cluster sentences into topic-specific scenarios , and then focus on identifying causal relations between events and building a dataset of causal text spans headed by a verb .","in , the authors first cluster sentences into topic-specific scenarios , and then focus on building a dataset of causal text spans , where each span is headed by a verb ." +"for a dense annotation , a major downside is the limitation that events and time expressions must be in the same or in adjacent sentences .",a drawback of the previous annotation works is the limitation that only links between expressions in the same or in succeeding sentences are annotated . +"we perform all our experiments on the english section of the conll-2012 corpus , which is based on ontonotes .","in our experiments , we rely on the english portion of the conll-2012 dataset ." 
+however their generalization on unseen text is relatively poor comparing with models that exploit syntactic tree .,"however , the current recursive architecture is limited by its dependence on syntactic tree ." +"novelty mining studies on the chinese language have been performed on topic detection and tracking , which identifies and collects relevant stories on certain topics .","although novelty mining studies have mainly been conducted on the english language , studies on the chinese language have been performed on topic detection and tracking ." +"we investigate a new way for extracting hypernymy relations , exploiting the text layout which expresses hierarchical relations .",our aim is to improve the relation extraction task by considering both the plain text and the layout . +instances are chosen to be labelled depending on their similarity with the seed instances and are added in the seed set .,the promoted instances are either added to the initial seed set or used to replace it . +"subject and object can not easily be identified compared to english , while their detections are the key process to generate correct english word orders .","the detection of subjects and objects from japanese sentences is more difficult than that from english , while it is the key process to generate correct english word orders ." +"where math-w-3-3-0-1 is the number of words in the corpus , and math-w-3-3-0-12 is a predetermined window size .","for each word math-w-2-6-2-17 , we construct a vector math-w-2-6-2-24 of size math-w-2-6-2-27 , where math-w-2-6-2-30 is the size of the lexicon ." +"since our multilingual skip-gram and cross-lingual sentence similarity models are trained jointly , they can inform each other through the shared word embedding layer .","even though they are related tasks , multilingual skip-gram and cross-lingual sentence similarity models are always in a conflict to modify the shared word embeddings according to their objectives ." 
+"the ability to identify paraphrase , in which a sentences express the same meaning of another one but with different words , has proven useful for a wide variety of natural language processing applications .","alternative expressions of the same meaning , and the degree of their semantic similarity has proven useful for a wide variety of natural language processing applications ." +"particularly , zeng et al proposed a piecewise convolutional neural network architecture , which can build an extractor based on distant supervision .","to address the issue of lack of data , zeng et al incorporate multi-instance learning with a piece-wise convolutional neural network to extract relations in distantly supervised data ." +"word ordering is the abstract language modeling task of making a grammatical sentence by ordering a bag of words ( white , 2004 ; zhang and clark , 2015 ; de gispert et al. , 2014 ; bohnet et al. , 2010 ; filippova and strube , 2007 ; he et al. , 2009 ) , which is practically relevant to text-to-text applications such as summarization ( cite-p-11-3-2 ) and machine translation ( cite-p-11-1-1 ) .",word ordering is a fundamental problem in nlp and has been shown to be np-complete in discourse ordering ( cite-p-16-1-1 ) and in smt with arbitrary word reordering ( cite-p-16-3-6 ) . +"balamurali et al , have shown that use of senses in places of words improves the performance of indomain sa significantly .","balamurali et al , have shown that senses are better features than words for in-domain sa ." +we have used a simplified version of the miniature language acquisition task proposed by feldman et al .,this domain is a simplification of the miniature language acquisition task proposed by feldman et al . +experimental studies demonstrate the effectiveness of our approach .,results demonstrate the effectiveness and generality of our approach . 
+itg constraints are not sufficient on the canadian hansards task .,with the extended itg constraints the coverage improves significantly on both tasks . +"pgf is the backend format to which grammatical framework ( gf , ranta , 2004 ) grammars are compiled .","pgf is a simple “ machine language ” , to which the much richer gf source language is compiled by the gf grammar compiler ." +"elsner and charniak , elsner and charniak present a combination of local coherence models initially provided for monologues showing that those models can satisfactorily model local coherence in chat dialogues .","elsner and charniak , elsner and charniak are presenting a combination of local coherence models initially provided for monologues showing that those models can satisfactorily model local coherence in chat dialogues ." +"by parallelizing the clustering algorithm , we successfully constructed a cluster gazetteer with up to 500 , 000 entries .","we enabled such large-scale clustering by parallelizing the clustering algorithm , and we demonstrate the usefulness of the gazetteer constructed ." +"using previously proposed automatic measures , we find that we can not reliably predict human ratings .","therefore , we build a ranking model which successfully mimics human judgments using previously proposed automatic measures ." +"we encode a relatively rich lexical semantic structure for nouns based on the notion of qualia structure , described by pustejovsky , 1989 pustejovsky , 1991 .","briscoe et al and copestake illustrate some lexical entries with the qualia structure following pustejovsky and aniek , pustejovsky , 1989 pustejovsky , 1991 ." +le and mikolov introduce paragraph vector to learn document representation from semantics of words .,le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents . 
+"on these collections , it is necessary to have topic models that are aligned across languages .",we introduce a polylingual topic model that discovers topics aligned across multiple languages . +"in the parliament domain , this means ( and is translated as ) “ report .","in the parliament domain , this means ( and is translated as ) “ report . ”" +the grammar matrix is couched within the head-driven phrase structure grammar framework .,hpsg is a syntactic theory based on lexicalized grammar formalism . +"as an example of these probabilistic methods , stolcke et al apply a hmm method to the switchboard corpus , one that exploits both the order of words within utterances and the order of dialogue acts over utterances .","stolcke et al apply a somewhat more complicated hmm method to the switchboard corpus , one that exploits both the order of words within utterances and the order of dialogue acts over utterances ." +table 9 : f-score of different types of reparandums .,table 1 : different types of disfluencies . +evaluation shows that the proposed metric gives more reasonable and distinguishable scores and correlates well with human judgement .,experiment results reveal that our weighted evaluation metrics gives more reasonable and distinguishable scores and correlates well with human judgement . +previous works in qa have shown that these relations can help us answer certain kinds of questions .,rst tells us that sentences with discourse relations are related to each other and can help us answer certain kinds of questions . +we solve this sequence tagging problem using the mallet implementation of conditional random fields .,we design our model for ssl as a natural semisupervised extension of conventional supervised conditional random fields . 
+"categorial grammar provides a functional approach to lexicalised grammar , and so , can be thought of as defining a syntactic calculus .","categorial grammar provides a functional approach to lexicalised grammar , and so can be thought of as defining a syntactic calculus ." +parallel or comparable corpora have also been explored for unsuperwised wsd .,several studies have shown encouraging results for wsd based on parallel corpora . +we utilize maximum entropy model to design the basic classifier used in active learning for wsd and tc tasks .,we utilize a maximum entropy model to design the basic classifier for wsd and tc tasks . +"in recent work , recurrent neural network language models have produced stateof-the-art perplexities in sentence-level language modeling , far below those of traditional n-gram models .","in recent years , neural lms have become the prominent class of language modeling and have established state-of-the-art results on almost all sufficiently large benchmarks ." +one important work is proposed by who use wikipedia articles to build a bipartite graph and apply spectral clustering over it to discover relevant clusters .,one such work is proposed by scaiella et al which uses wikipedia articles to develop a bipartite graph and employs spectral clustering over it to discover relevant clusters . +"math word problems form a natural abstraction to a range of quantitative reasoning problems , such as understanding financial news , sports .",math word problems form a natural abstraction to a lot of these quantitative reasoning problems . +cite-p-19-5-7 proposed a supervised method to learn term embeddings for hypernymy .,cite-p-19-5-7 proposed a dynamic distance-margin model to learn term embeddings that capture properties of hypernymy . +this measure has been shown to correlate well with human judgements .,it was shown to correlate significantly with human judgments and behave similarly to bleu . 
+"finally , the string regeneration problem can be viewed as a constraint satisfaction approach .",the string regeneration can also be viewed as a natural language realization problem . +"in this paper , we explore a “ cluster and label ” strategy to reduce the human annotation effort needed to generate subjectivity .",we show that a “ cluster and label ” strategy relying on these two proposed components generates training data of good purity . +contractor et al used an mt model as well but the focus of their work is to utilize an unsupervised method to clean noisy text .,"contractor et al , 2010 , used an mt model as well but the focus of his work is to generate an unsupervised method to clean noisy text in this domain ." +"lord et al , 2015b ) analyzed the language style synchrony between therapist and client during mi encounters .",lord et al analyzed the language style synchrony between counselors and clients . +"in this work , we further propose a word embedding based model that consider the word formation of ugcs to improve the prediction .","in this paper , we propose a weakly supervised learning framework to mine fine-grained and multiple-typed relations from chinese ugcs ." +kim and hovy and bethard et al explore the usefulness of semantic roles provided by framenet for both opinion holder and opinion target extraction .,bethard et al and kim and hovy explore the usefulness of semantic roles provided by framenet for both opinion holder and opinion target extraction . +in this paper we developed an algorithm that uses global optimization to learn widely-applicable entailment rules between typed predicates .,in this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates . 
+this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains .,"in this paper , we have proposed a novel neural belief tracking ( nbt ) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains ." +we then use extended lexrank algorithm to rank the sentences .,we then use an extension of the lexrank algorithm to rank sentences . +"domain labels , such as medicine , architecture and sport provide a natural way to establish semantic relations among word senses , which can be profitably used during the disambiguation process .","the hypothesis is that domain labels ( such as medicine , architecture and sport ) provide a natural and powerful way to establish semantic relations among word senses , which can be profitably used during the disambiguation process ." +"to begin , all state sets are initialized to empty and the initial state math-w-2-3-9-140 is put into .","to begin , all state sets are initialized to empty and the initial state math-w-2-3-9-140 is put into so ; here _1_is the end-of-input marker ." +"input format specifications are almost always described in natural languages , with these specifications .",it is standard practice to write english language specifications for input formats . +"recently , zeng et al attempt to connect neural networks with distant supervision following the expressed-at-least-once assumption .","particularly , zeng et al proposed a piecewise convolutional neural network architecture , which can build an extractor based on distant supervision ." +popescu and etzioni proposed a relaxation labeling approach to utilize linguistic rules for opinion polarity detection .,popescu and etzioni proposed a relaxed labeling approach to utilize linguistic rules for opinion polarity detection . 
+the fisher kernel for structured classification is a trivial generalization of one of the best known data-defined kernels for binary classification .,the fisher kernel is one of the best known kernels belonging to the class of probability model based kernels . +feature weights are tuned using pairwise ranking optimization on the mt04 benchmark .,the feature weights are tuned using pairwise ranking optimization . +"this task usually requires aspect-related text segmentation , followed by prediction or summarization .","this task usually requires aspect segmentation , followed by prediction or summarization ." +"latent semantic analysis ( lsa ) is a mathematical technique used in natural language processing for finding complex and hidden relations of meaning among words and the various contexts in which they are found ( landauer and dumais , 1997 ; landauer et al , 1998 ) .",latent semantic analysis ( lsa ) is a familiar technique that employs a word-by-document vsm ( cite-p-11-1-5 ) . +neubig et al present a discriminative parser using the derivations of tree structures as underlying variables from word alignment with the parallel corpus .,neubig et al present a bottom-up method for inducing a preorder for smt by training a discriminative model to minimize the loss function on the hand-aligned corpus . +we run parallel fda5 smt experiments using moses in all language pairs in wmt14 and obtain smt performance close to the top constrained moses systems training using all of the training material .,we run parfda smt experiments using moses in all language pairs in wmt15 and obtain smt performance close to the top constrained moses systems . 
+cook and stevenson extend this work to create an unsuper-vised noisy channel approach using probabilistic models for common abbreviation types and choosing the english word with the highest probability after combining the models .,this work was then extended by to create an unsupervised noisy channel approach using probabilistic models for common abbreviation types and choosing the english word with the highest probability after combining the models . +"by our method , the reordering problem is converted into a sequence labeling problem .","our model converts the decoding order problem into a sequence labeling problem , i.e . a tagging task ." +"we gathered training examples from parallel corpora , semcor , and the dso corpus .","finally , we made use of examples from the dso corpus and semcor as part of our training data ." +"on sentences of length 40 , our system achieves an f-score of 89 . 0 % , a 36 % relative reduction in error .","on sentences of length 40 , our system achieves an f-score of 89.0 % , a 36 % relative reduction in error over a generative baseline ." +bleu is essentially a precision-based metric and is currently the standard metric for automatic evaluation of mt performance .,"bleu is a widely accepted baseline measure of mt quality at the system level and , as such , is an obvious choice for a baseline adequacy metric ." +"on a collection of 1 . 5 million documents and 423 queries , our method was found to lead to an improvement of 28 % in map and 50 % in p @ 5 , as compared to the state-of-the-art method .","on a data set composed of 1.5 million citations extracted with pubmed , our best model obtains an increase of 28 % for map and nearly 50 % for p @ 5 over the classical language modeling approach ." 
+"it is used to support semantic analyses in the english hpsg grammar erg , but also in other grammar formalisms like lfg .","it is used to support semantic analyses in hpsg english grammar -erg , but also in other grammar formalisms like lfg ." +"overall , this is a difficult task even for human translators .","overall , there is a large body of work in the linguistics literature that argues that paraphrases are not restricted to strict synonymy ." +"source and target words are at the two ends of a long information processing procedure , mediated by hidden states .","however , as illustrated in figure 1 , source and target word embeddings are at the two ends of a long information processing procedure ." +uszkoreit et al describe a large-scale parallel document mining method that involves translating all source documents into english then using n-gram matching through multiple scoring steps .,"for large scale document level alignment , uszkoreit et al , proposed a distributed system that reliably mines parallel text from large corpora ." +"in this paper , we present a novel approach to lexical selection where the target words are associated with the entire source sentence ( global ) .","in this paper , we present the details of training a global lexical selection model using classification techniques and sentence reconstruction models using permutation automata ." +several product feature extraction techniques have been proposed in the literatures .,"several methods have been proposed , mainly in the context of product review mining ." +we quantitatively evaluate the use of open ie output against other dominant structures .,"we studied open ie’s output compared with other dominant structures , highlighting their main differences ." +"this function is a convolution kernel , which is proven to be positive definite .",this zero-extension is known to preserve positive definiteness .
+"these representations can be used as features or inputs , which are widely employed in information retrieval , document classification and other nlp tasks .","such methods are highly scalable and have been applied in information retrieval , large-scale taxonomy induction , and knowledge acquisition ." +articles from current week are clustered separately in currently 5 languages .,articles from current week are clustered monolingually several times a day . +"using our approach yields better accuracy than two baselines , a majority class baseline and a more difficult baseline of lexical n-gram features .",our results show significant improvement over a majority class baseline as well as a more difficult baseline consisting of lexical n-grams . +"a similar method is presented in where wordnet synonyms , antonyms , and glosses are used to iteratively expand a list of seeds .","a similar method is presented in andreevskaia and bergler , where wordnet synonyms , antonyms , and glosses are used to iteratively expand a list of seeds ." +this paper explores the utilization of personalization features for the post-processing of recognition .,"in this paper , we explore the use of personalization in the context of voice searches rather than web queries ." +"we present the tweetingjay system for detecting paraphrases in tweets , with which we participated in task 1 of semeval 2015 .","we described tweetingjay , a supervised model for detecting twitter paraphrases with which we participated in task 1 of semeval 2015 ." +summaries show the effectiveness of the proposed methods .,evaluation results demonstrate the effectiveness of the proposed methods . 
+"in this setting , where we use both word-level and character-level representations , it is beneficial to use a smaller lstm than in the character-level only setting .","9 we found that in this setting , where we use both word-level and character-level representations , it is beneficial to use a smaller lstm than in the character-level only setting ." +the underlying model used is a long shortterm memory recurrent neural network in a bidirectional configuration .,the proposed method is based on a deep learning architecture named long short term memory . +in this paper we present the machine learning system submitted to the conll shared task 2009 .,"in this paper we present the crotal semantic role labelling system , which has been used in the conll 2009 shared task 1 ." +"amr parsing is a new research problem , with only a few papers published to date ( flanigan et al. , 2014 ; wang et al. , 2015 ) and a publicly available corpus of more than 10,000 english/amr pairs .","amr parsing is the task of taking a sentence as input and producing as output an abstract meaning representation ( amr ) that is a rooted , directed , edge-labeled and leaf-labeled graph that is used to represent the meaning of a sentence ( banarescu et al. , 2013 ) ." +"following li et al , we build the coupled sequence labeling model based on a bigram linearchain crf .","as shown in similar to the first step , we use a sequence labelling approach with a crf model ." +this paper presents a novel unsupervised method for discovering intra-sentence level discourse relations .,this paper focused on unsupervised discovery of intra-sentence discourse relations for sentence level polarity classification . +"recently , many accurate statistical parsers have been proposed for english , for japanese ) .","in recent years , many accurate phrase-structure parsers have been developed , ." 
+we explore the use of transductive semi-supervised methods .,"to tackle this problem , we propose algorithms for transductive semi-supervised learning ." +riedel et al used knowledge bases in conjunction with surface patterns to learn custom representations for relation extraction .,"similarly , riedel et al learn universal schemas by matrix factorization without pre-defined relations ." +"the most related to this study is the work of continuous space n-gram translation models , where the feed-forward neural network language model is extended to represent translation probabilities .","the research that comes closest to ours is the work of schwenk et al on continuous space ngram models , where a neural network is employed to smooth translation probabilities ." +to incorporate the document-level information and the sentenceto-document relationship into the graph-based ranking process .,the document-level information and the sentenceto-document relationship are incorporated into the graph-based ranking algorithm . +and we hope that it will serve as a guide for future research in the field .,we hope that these findings can serve as a guide for future research in the field . +"suitable for neural machine translation , we propose to learn a decoding algorithm with an arbitrary decoding objective .","in this paper , we solely focus on the problem of decoding given a trained neural machine translation model ." +"the conll 2008 shared task was joint dependency parsing and srl , but the top performing systems decoupled the tasks , rather than building joint models .","the conll 2008 shared task was intended to be about joint dependency parsing and semantic role labeling , but the top performing systems decoupled the tasks and outperformed the systems which attempted to learn them jointly ." 
+we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data .,our second model takes a more conservative approach by additionally penalizing data instances similar to the out-domain data . +in this paper suggest that current te systems may be able to provide open-domain q / a systems with the forms of semantic inference needed to perform accurate answer .,"in this paper , we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering ( q/a ) systems ." +"we compare with convolutional neural networks , recurrent neural networks , bidirectional gated recurrent neural networks , and word embeddings .","examples of such neural networks are linear networks , deeper feed-forward neural networks , or recurrent neural networks ." +the system by modeling the inference as an ilp problem with the features of narratives adopted as soft constraints .,the idea is to perform inference via a linear programming formulation with the features of narratives adopted as soft constraints . +domestic abuse is a problem of pandemic proportions ; nearly 25 % of females and 7.6 % of males have been raped or physically assaulted by an intimate partner ( cite-p-15-1-11 ) .,"domestic abuse is the 12 th leading cause of years of life lost ( cite-p-17-1-15 ) , and it contributes to health issues including frequent headaches , chronic pain , difficulty sleeping , anxiety , and depression ( cite-p-17-1-1 ) ." +no induces the f relation ; mutating cat to carnivore induces the math-w-4-2-0-46 relation .,"for example , in figure 1 , mutating the to no induces the f relation ; mutating cat to carnivore induces the math-w-4-2-0-46 relation ." 
+"in at least 95 % of cases , so we applied errant to the system output of the conll-2014 shared task to carry out a detailed error type analysis .",we demonstrated the value of errant by carrying out a detailed evaluation of system error type performance for all teams in the conll2014 shared task on grammatical error correction . +we use an implementation of a maximum-entropy classifier called wapiti 8 .,"we employ the crf implementation in the wapiti toolkit , using default settings ." +we use wordsim 353 as the original data set .,"to that end , we use the wordsim353 dataset ." +while cite-p-15-3-10 only showed results on a narrow domain of cooking videos with a small set of predefined objects and actors .,"also , they showed results only on the narrow domain of cooking videos with a small set of predefined objects and actors ." +the kit system uses an in-house phrase-based decoder to perform translation .,the kit translations are generated by an in-house phrase-based translations system . +"we adopted the second release of the american national corpus frequency data 2 , which provides the number of occurrences of a word in the written and spoken anc .","we adopted the second release of the american national corpus frequency data 3 , which provides the number of occurrences of a word in the written and spoken anc ." +"in this paper , we present and make publicly available 1 a new dataset for darknet active domains , which we call it ” darknet usage .","in this paper , we present and make publicly available 1 a new dataset for darknet active domains , which we call it ” darknet usage text addresses ” ( duta ) ." +"as a language model feature , we use a standard backing off word-based trigram language model .",we integrate the recurrent neural network language model as an additional feature into the standard log-linear framework of translation . 
+we evaluate our models using the standard bleu metric 2 on the detokenized translations of the test set .,"we evaluated the models using the wmt data set , computing the ter and bleu scores on the decoded output ." +we used cohen κ to measure the inter-annotator agreement .,we also computed the inter-annotator agreement via kappa . +"on the standard parseval metric matches that of the ( cite-p-16-3-5 ) parser on which it is based , despite the data fragmentation caused by the greatly enriched space of possible node labels .","the accuracy of the first-stage parser on the standard parseval metric matches that of the ( cite-p-16-3-5 ) parser on which it is based , despite the data fragmentation caused by the greatly enriched space of possible node labels ." +statistical part implements an entropy based decision tree ( c4 . 5 ) .,the statistical part implements an entropy based decision tree ( c4.5 ) . +"by deeply integrating semantic frame criteria into the mt training pipeline , it is necessary to have a cross-lingual semantic objective function that assesses the semantic frame .","however , to go beyond tuning weights in the loglinear smt model , a cross-lingual objective function that can deeply integrate semantic frame criteria into the mt training pipeline is needed ." +"tang et al design preference matrices for each user and each product to tune word representations , based on which convolutional neural networks are used to model the whole document .","tang et al design user and product preference matrices to tune word representations , based on which convolutional neural networks are used to model the whole document ." +"for our first hypothesis , we induce pos distribution information from a corpus , and approximate the probability of occurrence of pos blocks .","as a first step , to test our first hypothesis , we remove the pos blocks with a low probability of occurrence from each query , on the assumption that these blocks are content-poor ."
+"in this paper , we investigate discriminative reranking upon a baseline semantic parser .","in this paper , we investigate the effect of discriminative reranking to semantic parsing ." +"when combined with word deviations and mention percentages , most persuasive argumentation features give superior performance compared to the baselines .","when combining with content-related features , most persuasive argumentation features give superior performance compared to the baselines ." +"hyp consists of a c + + api , as well as a command line tool , and is available for download .",the hyp toolkit provides a c++ library and a command line executable . +"firstly , for computing the lexical and string similarity between two sentences , we take advantage from the task baseline which is a system using a logistic regression model with eighteen features based on n-grams .",we use the system described in the literature to compute the lexical and string similarity between two sentences by using a logistic regression model with eighteen features based on n-grams . +coreference resolution has traditionally benefited from machine learning approaches .,some machine learning approaches have been applied to coreference resolution . +the language model is a 5-gram lm with modified kneser-ney smoothing .,it is based on 5-grams with extended kneser-ney smoothing . +"we present a pro , a new method for machine translation tuning .","we have presented a pro , a new tuning method for machine translation ." +experimental results on real-world datasets show that our model achieves significant and consistent improvements on relation extraction .,the experimental results show that our model achieves significant and consistent improvements as compared with baselines . 
+"in an enc – dec model , a long input sequence results in performance degradation due to loss of information in the front portion of the input sequence .",the rnn encoder–decoder model suffers from poor performance when the length of the input sequence is long . +"in our work , we develop our active dual supervision framework using constrained non-negative tri-factorization .",our work is based on the dual supervision framework using constrained non-negative tri-factorization proposed in ( cite-p-17-1-10 ) . +use of comparable corpora is motivated by the fact that large parallel corpora are only available for a few language pairs and for limited domains .,"however , large parallel corpora are only available for a few language pairs and for limited domains ." +"in this paper , we investigate the difference between word and sense similarity measures .","thus , in this paper we investigate whether similarity should be measured on the sense level ." +some researchers have found that transliteration is quite useful in proper name translation .,some researchers have applied the rule of transliteration to automatically translate proper names . +"morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text , according to the word context .",morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text . +mikolov et al introduced a particularly simple version that takes advantage of a vocabulary of shared bilingual seed words to map embeddings from a source language onto the vector space of a target language .,mikolov et al extended this model to two languages by introducing bilingual embeddings where word embeddings for two languages are simultaneously represented in the same vector space .
+"this paper proposes a novel japanese pas analysis model based on a neural network ( nn ) framework , which has been proved to be effective for several nlp tasks .",this paper presents a novel model for japanese predicate argument structure ( pas ) analysis based on a neural network framework . +"we propose a divide-and-conquer strategy by decomposing a hypergraph into a set of independent subhypergraphs , which is computationally feasible .","therefore , we propose a divide-and-conquer strategy by decomposing a hypergraph into a set of independent subhypergraphs ." +bunescu and mooney connect weak supervision with multi-instance learning and extend their relational extraction kernel to this context .,"bunescu and mooney , 2007 ) connects weak supervision with multi-instance learning and extends it to relation extraction ." +v-measure assesses the quality of a clustering solution by explicitly measuring its homogeneity and its completeness .,v-measure assesses the quality of a clustering solution against reference clusters in terms of clustering homogeneity and completeness . +liu et al proposed two models capturing the interdependencies between the two parallel lstms encoding two input sentences for the tasks of recognising textual entailment and matching questions and answers .,"liu et al propose two models that capture the interdependencies between two parallel lstms encoding the two sentences for the tasks of recognizing textual entailment and matching questions and answers , respectively ." +but they produce differing characterizations of non-arbitrariness .,"however , they attribute responsibility for non-arbitrariness differently ." +the former approach involves adding self-labeled data from the target domain produced by a model trained in-domain .,the former approach involves adding self-labelled data from the target domain produced by a model trained in-domain . 
+temporal and causal relations are closely related .,the temporal relation is dictated by the causal relation . +"moreover , some systems and , also exploit kinds of extra information such as the unlabeled data or other knowledge .","moreover , some systems and zhang et al , also exploit kinds of extra information such as unlabeled data or other knowledge ." +"argumentation features derived from a coarse-grained , argumentative structure of essays are helpful in predicting essays .",we investigate whether argumentation features derived from a coarse-grained argumentative structure of essays can help predict essays scores . +"sentence similarity computation plays an important role in text summarization , classification , question answering and social network applications .",sentence similarity computation plays an important role in text summarization and social network applications . +the power prediction system is built using the cleartk wrapper for svmlight package .,the taxonomy kernel was trained using the svm light package . +"in english , we have demonstrated a 33 % relative reduction in error rate .","with this approach , we reduce the error rate for english by 33 % , relative to the best existing system ." +we use marginal inference in a conditional random field .,"in particular , we consider conditional random fields and a variation of autoslog ." +we use a random forest classifier consisting of a combination of decision trees where features are randomly extracted to build each decision tree .,"we employ a random forest classifier , an ensemble of decision tree classifiers learned from many independent subsamples of the training data ." +"indicating the importance of each fact or pair of facts , they select the facts to express .",they usually start by selecting the logical facts to express .
+"since chinese is the dominant language in our data set , a word-by-word statistical machine translation strategy ( cite-p-14-1-22 ) is adopted to translate english words into chinese .",this is because chinese is a pro-drop language ( cite-p-21-3-1 ) that allows the subject to be dropped in more contexts than english does . +"from the second experiment , we can conclude that taking definition structure into account helps to get better classification .",in the second experiment we show that classification results improve when information on definition structure is included . +"messages on microblogs are short , noisy , and informal texts with little context , and often contain phrases with ambiguous meanings .","the reason is that messages on microblogs are short , noisy and informal texts with little context , and often contain phrases with ambiguous meanings ." +our baseline system is an standard phrase-based smt system built with moses .,our baseline system was a vanilla phrase-based system built with moses using default settings . +"in prototype-driven learning , we specify prototypical examples .",we investigate prototype-driven learning for primarily unsupervised sequence modeling . +we use the berkeley parser to parse all of the data .,"for comparison , we also include the berkeley parser ." +"in this paper , we presented allvec , an efficient batch learning based word embedding model that is capable to leverage all positive and negative training examples .","in this work , we propose allvec that uses batch gradient learning to generate word representations from all training samples ." +"that with proper representation , large number of deterministic constraints can be learned from training examples .","we have shown by experiments that large number of deterministic constraints can be learned from training examples , as long as the proper representation is used ." 
+"gao et al and moore and lewis apply this method to language modeling , and foster , goutte , and kuhn and axelrod , he , and gao apply this method to translation modeling .","gao et al and moore and lewis apply this method to language modeling , while foster et al and axelrod et al use it on the translation model ." +"relevant applications deal with numerous domains such as blogs , news stories , and product reviews .",relevant applications deal with numerous domains such as news stories and product reviews . +"we use a linear classifier trained with a regularized average perceptron update rule as implemented in snow , .","we use a linear classifier trained with a regularized perceptron update rule as implemented in snow , ." +all classifiers and kernels have been implemented within the kernel-based learning platform .,all classifiers and kernels have been implemented within the kernel-based learning platform called kelp . +our experiments show that r ealm reduces extraction error .,our experiments demonstrate that r ealm outperforms these approaches on sparse data . +"recent question answering systems have focused on open-domain factoid questions , relying on knowledge bases like freebase or large corpora of unstructured text .","in recent years , the development of largescale knowledge bases , such as freebase , provides a rich resource to answer open-domain questions ." +"approaches to dependency parsing either generate such trees by considering all possible spanning trees , or build a single tree on the fly by means of shift-reduce parsing actions .","approaches to dependency parsing either generate such trees by considering all possible spanning trees , or build a single tree by means of shift-reduce parsing actions ." 
+"in this paper , we presented the methods we used while participating in the 2016 clinical tempeval task .","in this paper , we present the methods we used while participating in the 2016 clinical tempeval task as part of the semeval-2016 challenge ." +we provide an analysis of humans ’ subjective perceptions of formality in four different genres .,we perform an analysis of humans’ perceptions of formality in four different genres . +"in this paper , we propose a framework for automatically identifying reasons in online reviews .","in this paper , we present a system that automatically extracts the pros and cons from online reviews ." +we exploit the svm-light-tk toolkit for kernel computation .,we employ the ranking mode of the popular learning package svm light . +paper presents a step toward semantic grounding for complex problem-solving dialogues .,this paper presents a novel approach to semantic grounding of noun phrases within tutorial dialogue for computer programming . +refinement process continues until the two base rankers can not learn from each other any more .,this process continues until the two base rankers can not learn from each other any more . +"choi et al address the task of extracting opinion entities and their relations , and incorporate syntactic features to their relation extraction model .",choi et al used an integer linear programming approach to jointly extract entities and relations in the context of opinion oriented information extraction . +pereira et al suggested deterministic annealing to cluster verb-argument pairs into classes of verbs and nouns .,"pereira et al use an information-theoretic based clustering approach , clustering nouns according to their distribution as direct objects among verbs ." +"in addition , the average accuracy of the classifier is 81 . 5 % on the sentences .","in addition , the average accuracy of the classifier is 81.5 % on the sentences the judges tagged with certainty ."
+"by extrinsic evaluation , i . e . , we applied the results of topic detection to extractive multi-document summarization .","in order to evaluate the method , we applied the results of topic detection to extractive multi-document summarization ." +natural language generation is the process of generating coherent natural language text from non-linguistic data .,data-to-text generation refers to the task of automatically generating text from non-linguistic data . +"for the majority of tasks , we find that simple , unsupervised models perform better when n-gram frequencies are obtained from the web .","we showed that simple , unsupervised models using web counts can be devised for a variety of nlp tasks ." +"given a user ¡¯ s tweet sequence , we define the purchase stage identification task as automatically determining for each tweet .","in particular , we define the task of classifying the purchase stage of each tweet in a user¡¯s tweet sequence ." +"cohn et al , the annotators were instructed to distinguish between sure and possible alignments , depending on how certainly , in their opinion , two predicates describe the same event .","cohn et al , the annotators were instructed to distinguish between sure and possible alignments , depending on how certainly , in their opinion , two predicates describe verbalizations of the same event ." +the combination of a heightened learning rate and greedy processing results in very reasonable one-shot learning .,it requires a high-risk strategy combining heightened learning rate and greedy processing of the context . +"from a computational point of view , distinguishing between antonymy and synonymy is important for nlp .",distinguishing between antonyms and synonyms is a key task to achieve high performance in nlp systems . +"on the same topic , most tdt approaches rely on traditional vector space models .",most existing approaches rely on the traditional vector space model . 
+a gaussian prior is used for regularizing the model .,we use the gaussian prior smoothing method for the language model . +current event extraction systems rely on local information .,several recent studies use high-level information to aid local event extraction systems . +li et al investigated the prediction of places of interest based on linear rank combination of content and temporal factors .,li et al rank a set of candidate points of interest using language and temporal models . +brown clustering is a hierarchical clustering method that groups words into a binary tree of classes .,brown clustering is a commonly used unsupervised method for grouping words into a hierarchy of clusters . +dinu and lapata propose a probabilistic framework for representing word meaning and measuring similarity of words in context .,and dinu and lapata propose a probabilistic framework that models the meaning of words as a probability distribution over latent factors . +"we use the penn discourse treebank , which is the largest handannotated discourse relation corpus annotated on 2312 wall street journal articles .","the penn discourse treebank we use the penn discourse treebank , the largest available manually annotated corpora of discourse on top of one million word tokens from the wall street journal ." +we introduce a novel method to aggregate the variable-cardinality boew into a fixed-length vector by using the fk .,"then , the variable size boews are aggregated into fixed-length vectors by using fk ." +to transfer the semantic difference vector to a probability distribution over similarity scores .,5 ) transfer the semantic difference vector to the probability distribution over similarity scores by fully-connected neural network . +to calculate the constituent-tree kernels st and sst we used the svm-light-tk toolkit .,we used the svm-light-tk 5 to train the reranker with a combination of tree kernels and feature vectors . 
+"maas et al present a probabilistic topic model that exploits sentiment supervision during training , leading to rep- resentations that include sentiment signals .","maas et al presented a probabilistic model that combined unsupervised and supervised techniques to learn word vectors , capturing semantic information as well as sentiment information ." +"in order to evaluate the retrieval performance of the proposed model on text of cross languages , we use the europarl corpus 2 which is the collection of parallel texts in 11languages from the proceedings of the european parliament .","we evaluate the proposed triangulation method through pivot translation experiments on the europarl corpus , which is a multilingual corpus including 21 european languages widely used in pivot translation work ." +"in later work , this idea was extended to the disambiguation of translations in a bilingual dictionary .","in later work , this idea was applied to the disambiguation of translations in a bilingual dictionary ." +"in this paper , we have proposed a novel topic model for hypertexts .","in this paper , we study the problem of topic modeling for hypertexts ." +"zhang and clark proposed a wordbased cws approach using a discriminative perceptron learning algorithm , which allows word-level information to be added as features .","zhang and clark proposed a graphbased scoring model , with features based on complete words and word sequences ." +"the current release of the odin ( online database of interlinear text ) database contains over 150 , 000 linguistic examples , from nearly 1 , 500 languages , extracted from pdfs found on the web .","the current release of the odin ( online database of interlinear text ) database contains over 150,000 linguistic examples in the form of interlinear glossed text ( igt ) , an example of which is shown in fig . 1 ." 
+phrase structures and dependency structures are two of the most popular grammar formalisms for statistical parsing .,probabilistic context-free grammars are commonly used in parsing and grammar induction systems . +"chambers and jurafsky learn narrative schemas , which mean coherent sequences or sets of events , from unlabeled corpora .","for instance , chambers and jurafsky model narrative flow in the style of schankian scripts ." +"in all current deep compositional distributional settings , the word embeddings are internal parameters of the model .","in all the above models , the word embeddings and the weights of the compositional layers are optimized against a task-specific objective function ." +zelenko et al used the kernel methods for extracting relations from text .,zelenko et al and culotta and sorensen used tree kernels for relation extraction . +gao et al model interestingness between two documents with deep nns .,gao et al modeled interestingness between two documents with deep nns . +we describe our deep convolutional neural network for sentiment analysis of tweets .,this paper describes our deep learning system for sentiment analysis of tweets . +we propose an inter-weighted layer to measure the importance of different parts .,"from this point , we propose a novel inter-weighted layer to measure the importance of each word ." +input to the pos feature is obtained from the twitter partof-speech tagger .,the tweets are tokenized using the cmu pos tagger . +vulić et al utilize the vsm to produce target clusters which are compared to the groupings from the lexical resource via collocation and purity .,vulić et al and sun et al apply a clustering algorithm to the input words and measure how well the clusters correspond to the word groupings in verbnet via purity and collocation . 
+"bilingual dictionaries , parallel corpora , machine translators , morphological analyzers , and so on ) on the internet , most intercultural collaboration activities are still lacking multilingual support .","although there are many language resources on the internet , most intercultural collaboration activities still lack multilingual support ." +"semantic relatedness is a very important factor for coreference resolution , as noun phrases used to refer to the same entity should have a certain semantic relation .",semantic relatedness is a very important factor for the coreference resolution task . +evaluation metrics we use the ribes and the bleu scores as evaluation metrics .,evaluation metrics we report bleu as the main evaluation metric of the question generation systems . +"in our model , we use negative sampling discussed in to speed up the computation .",we use negative sampling to approximate softmax in the objective function . +we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .,"as for english , we used a pretrained google news word embeddings 2 , which has shown high performance in several word similarity tasks ." +distributional semantics builds on the assumption that the semantic similarity of words is strongly correlated to the overlap between their linguistic contexts .,distributional semantics is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning . +"this representation consists of distributional features , suffixes and word shapes of math-w-2-1-1-32 and its local neighbors .",this representation is the basis for the lexical-semantic level that is included in the kr component . 
+tag is a tree-rewriting system : the derivation process consists in applying operations to trees in order to obtain a ( derived ) tree whose sequence of leaves is a sentence .,"although tag is a class of tree rewriting systems , a derivation relation can be defined on strings in the following way ." +details about svm and krr can be found in .,details about svm and kfd can be found in . +we used glove 10 to learn 300-dimensional word embeddings .,we used crfsuite and the glove word vector . +"in this paper , we propose an entity recognition system that improves this neural architecture .",this paper presents novel methods to improve neural entity recognition tasks . +"unfortunately , pinchak and lin use a brittle generative model when combining question contexts that assumes all contexts are equally important .","the pinchak and lin system is unable to assign individual weights to different question contexts , even though not all question contexts are equally important ." +the word “ granite ” is a pun with the target “ granted ” .,"here , “clothed” is the pun and “closed” is the target ." +"on the right hand side of the current word is not utilized , which is a relative weakness of shift-reduce parsing .","however , due to the incremental nature of shift-reduce parsing , the right-hand side constituents of the current word can not be used to guide the action at each step ." +lebret et al generate the first sentence of a biography using a conditional neural language model .,lebret et al generate the first sentence of a biography by a conditional neural language model . +"we used the first 200 movie reviews from the dataset provided by zaidan et al , with an equal distribution of positive and negative examples .",we use the movie reviews dataset from zaidan et al that was originally released by pang and lee . +"on the simlex999 word similarity dataset , our model achieves a spearman ’ s math-w-1-1-0-111 score of 0 . 517 , compared to 0 . 
462 of the state-of-the-art word2vec model .","on simlex999 , our model is superior to six strong baselines , including the state-of-the-art word2vec skip-gram model by as much as 5.5–16.7 % in spearman ’ s ρ score ." +"our domains were taken from conceptual specifications in , which cluster semantically and encyclopedically related concepts to ensure a generally applicable set of domains involved in meaning shifts .","instead , our source and target domains were taken from specifications in , which we assumed to ensure a more stratified and generally applicable set of domains involved in meaning shifts ." +we segment the chinese half of the corpus using the maximum entropy segmenter from .,we use the maximum entropy segmenter of to segment the chinese part of the fbis corpus . +"in particular , the recent shared tasks of conll 2008 tackled joint parsing of syntactic and semantic dependencies .","this is , in part , inspired by the recent conll shared task , which was the first evaluation of syntactic and semantic dependency parsing to include unmarkable nominals ." +in this paper shows that some kind of guiding technique has to be considered when one wants to increase parsing efficiency .,"in this paper , we study a parsing technique whose purpose is to improve the practical efficiency of rcl parsers ." +"between reviewers and products , we employ tensor decomposition to learn the embeddings of the reviewers and products in a vector space .","next , we utilize tensor factorization to perform tensor decomposition , and the representations of reviewers and products are embedded in a latent vector space by collective learning ." +"in the context of this discussion , we will refer to the target partitions , or clusters , as classes , referring only to hypothesized clusters .","in the context of this discussion , we will refer to the target partitions , or clusters , as classes , referring only to hypothesized clusters as clusters ." 
+"the penn discourse treebank is the largest available annotated corpora of discourse relations over 2,312 wall street journal articles .","the pdtb is the largest corpus annotated for discourse relations , formed by newspaper articles from the wall street journal ." +"however , most models of topic segmentation ignore the social aspect of conversations .","however , topic models alone can not model the dynamics of a conversation ." +"recently , new reordering strategies have been proposed in the literature on smt such as the reordering of each source sentence to match the word order in the corresponding target sentence , see kanthak et al and crego et al .","recently , new reordering strategies have been proposed such as the reordering of each source sentence to match the word order in the corresponding target sentence , see kanthak et al and ." +"in this study , we examined factors hypothesized to influence the propagation of words through a community of speakers , focusing on anglicisms in a german hip hop discussion .",we therefore study which factors contribute to the uptake of ( hip hop-related ) anglicisms in an online community of german hip hop fans over a span of 11 years . +"for the model , we introduce three novel fine-grained relations .",we introduce a novel graph that incorporates three fine-grained relations . +our experiments show that our model achieves better accuracy than existing supervised and semi-supervised models .,experimental results show that our model outperforms state-of-the-art methods in both the supervised and semi-supervised settings . +"paraphrase identification is the problem to determine whether two sentences have the same meaning , and is the objective of the task 1 of semeval 2015 workshop ( cite-p-14-3-19 ) .",paraphrase identification is the task of judging if two texts express the same or very similar meaning . 
+reiter and frank use a wide range of syntactic and semantic features to train a supervised classifier for identifying generic nps .,reiter and frank exploit linguistically-motivated features in a supervised approach to distinguish between generic and specific nps . +i show how this can be done on an example of the classical k-dnf learner .,"below , i show how this can be done by extending a k-dnf 4 learner of to a paradigm-learner ." +"first , we add a transition to an existing non-projective parsing algorithm , so it can perform either projective or non-projective parsing .","to reduce the search space , we add a transition to an existing non-projective parsing algorithm ." +from this we extracted grammar rules following the technique described in cohn and lapata .,to demonstrate this we have extracted paraphrase rules from our annotations using the grammar induction algorithm from cohn and lapata . +cite-p-18-3-7 presented a general framework to expand the short and sparse text by appending topic .,cite-p-18-1-11 proved that leveraging topics at multiple granularity can model short texts more precisely . +passages are clustered using a combination of hierarchical clustering and n-bin classification .,the remaining passages are clustered using a combination of hierarchical clustering and n-bin classification . +with the aid of this tool a domain expert was able to drastically reduce her model building time from months to two days .,"with the aid of this tool , a domain expert reduced her model building time from months to two days ." +in this paper we show the effectiveness of partial-label learning in digesting the encoded knowledge from wikipedia data .,in this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised chinese word segmentation . 
+word embedding provides an unique property to capture semantics and syntactic information of different words .,"word embedding techniques aim to use continuous low-dimension vectors representing the features of the words , captured in context ." +zelenko et al proposed extracting relations by computing kernel functions between parse trees .,"zelenko et al proposed a kernel between two parse trees , which recursively matches nodes from roots to leaves in a top-down manner ." +"as textual features , we use the pretrained google news word embeddings , obtained by training the skip-gram model with negative sampling .","in addition to pos-tags , we use the pre-trained word embeddings available from google 5 trained using the skip-gram objective as input features for our convolutional neural network ." +random forest is an ensemble method that learns many classification trees and predicts an aggregation of their result .,the random forest algorithm is a decision tree algorithm which uses multiple random trees to vote for an overall classification of the given input . +rationales are never provided during training .,rationales are never given during training . +the most prominent approaches include the karma system and the att-meta project .,these include the karma system and the att-meta project . +we extracted scfg rules from the parallel corpus using the standard heuristics and filtering strategies .,we extract hierarchical rules from the aligned parallel texts using the constraints developed by chiang . +"in this work , we investigate the use of rule markov models in the context of tree-856 to-string translation .","in this paper , we incorporate the mers model into a stateof-the-art linguistically syntax-based smt model , the tree-to-string alignment template model ." 
+"for the evaluation , we use the same measures as brent and goldwater et al , namely segmentation precision , recall and f-score .","for the evaluation , we used the same measures as brent , venkataraman and goldwater , namely token precision , recall and f-score ." +"in this demo , we introduce need4tweet , a twitterbot for a combined system for nee and ned in tweets .","in this demo paper , we present need4tweet , a twitterbot for named entity extraction ( nee ) and disambiguation ( ned ) for tweets ." +"islam and inkpen proposed a corpus-based sentence similarity measure as a function of string similarity , word similarity and common word order similarity .","for example , in they proposed a corpus-based sentence similarity measure as a function of string similarity , word similarity and common word order similarity ." +"this representation consists of two facets : a segmentation into minimal semantic units , and a labeling of some of those units with semantic classes .",this representation is the basis for the lexical-semantic level that is included in the kr component . +luong et al adapted an nmt model trained on general domain data with further training on in-domain data only .,"luong and manning , 2015 ) adapts an already existing nmt system to a new domain by further training on the in-domain data only ." +evaluation shows that 98 . 3 % of distractors are reliable when generated by our method .,the result shows that 98.3 % of distractors generated by our methods are reliable . +we explore whether using coreference can improve the learning process .,we explore whether coreference can improve the learning process . 
+"with the help of the phrasal lexical disambiguation model , we build three models : a context-sensitive n-gram prediction model , a paraphrase suggestion model , and a translation model .","similar to their work , we further integrate the multi-word phrasal lexical disambiguation model to the n-gram prediction model , paraphrase model and translation model of our system ." +"systems , we plan to design a supervised srl system .","so , we plan to design a supervised srl system too ." +"our experimental results show that our proposed sentence type tagging method works very well , even for the minority categories .",our experimental results show that our proposed approach performs well for sentence dependency tagging . +"in addition , we have compared our approach with other based on feature translation .","in addition , we have compared the results with a system which translates selected document features ." +"cohn and lapata , 2007 ) cast the sentence compression problem as a tree-to-tree rewriting task .",cohn and lapata formulated sentence compression as a tree-to-tree rewrite problem . +"multi-task learning has resulted in successful systems for various nlp tasks , especially in cross-lingual settings .","in particular , the vector-space word representations learned by a neural network have been shown to successfully improve various nlp tasks ." +kim and hovy and bethard et al examine the usefulness of semantic roles provided by framenet 1 for both oh and opinion target extraction .,bethard et al and kim and hovy explore the usefulness of semantic roles provided by framenet for both opinion holder and opinion target extraction . +the decoder is implemented with weighted finite state transducers using standard operations available in the openfst libraries .,our decoder is implemented as a cascade of weighted finite-state transducers using the functionalities of the openfst library . 
+"fung and cheung , 2004 , for instance , present the first exploration of very nonparallel corpora using a document similarity measure based on bilingual lexical matching defined over mutual information scores on word pairs .","fung and cheung present the first exploration of very non-parallel corpora , using a document similarity measure based on bilingual lexical matching defined over mutual information scores on word pairs ." +"in an experimental evaluation on the test-set that was used in koehn et al we show that for examples that are in coverage of the grammar-based system , we can achieve stateof-the-art quality on n-gram based evaluation measures .","under the nist measure , we achieve results in the range of the state-of-the-art phrase-based system of koehn et al for in-coverage examples of the lfgbased system ." +evaluation shows that docchat is a perfect complement for chatbot engines .,"these make docchat as a general response generation solution to chatbots , with high adaptation capability ." +autotutor eschews the pattern-based approach entirely in favor of a bow lsa approach .,autotutor eschews the pattern-based approach entirely in favor of a bag-of-words lsa approach . +le and mikolov introduce paragraph vector to learn document representation from semantics of words .,"le and mikolov introduced paragraph-level vectors , a fixed-length feature representations for variable-length texts ." +"through extensive experiments on real-world datasets , we find that neuraldater significantly outperforms state-of-the-art baseline .","through extensive experiments on real-world datasets , we demonstrate the effectiveness of neuraldater over existing state-of-the-art approaches ." +and thus predicting and recovering empty categories can be cast as a tree annotating problem .,a wide variety of language problems can be treated as or cast into a tree annotating problem . 
+"in the no context , partial profile and full profile conditions , annotators often selected the ¡° neutral ¡± option ( x-axis ) when the model inferred .","we see that in the no context , partial profile and full profile conditions , annotators often selected the ¡°neutral¡± option ( x-axis ) when the model inferred the true label was ¡°clinton¡± or ¡°trump¡± ( y-axis ) ." +an alternation is a pattern in which a number of words share the same relationship between a pair of senses .,an alternation is a pattern in which a number of words share the same relationship between ? a pair of senses . +barzilay and mckeown extracted both single-and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization .,barzilay and mckeown acquire paraphrases from a monolingual parallel corpus using a co-training algorithm . +"in this work , we improve the robustness of encoder representations against noisy perturbations with adversarial learning .","in this paper , we propose to improve the robustness of nmt models with adversarial stability training ." +"in this thesis , we propose and evaluate novel text quality .","in this thesis , we propose computable measures to capture genre-specific text quality ." +cite-p-20-1-22 used sequence labeling model ( crf ) for normalizing deletion-based abbreviation .,cite-p-20-1-22 used a crf sequence modeling approach for deletion-based abbreviations . +"however , the size of the used corpora still leads to data sparseness and the extraction procedure can therefore require extensive smoothing .","therefore , the size of the corpora used in some previous approaches leads to data sparseness , and the extraction procedure can therefore require extensive smoothing ." +"in this paper , we develop a novel behavior-based assessment using human language .","in this paper , we develop a temporal orientation measure based on language in social media ." 
+"in section 4 , we describe tools allowing to efficiently access wikipedia ’ s edit history .","thus , in section 4 , we present a tool to efficiently access wikipedia ’ s edit history ." +we carry out our experiments using a reimplementation of the hierarchical phrase-based system on the nist chinese-english translation tasks .,we implement a hierarchical phrase-based system similar to the hiero and evaluate our method on the chinese-to-english translation task . +early works mainly focused on exploiting parallel corpora to project information between the high-and low-resource languages .,early works primarily assumed a large parallel corpus and focused on exploiting them to project information from high-to low-resource . +"in the most general case , initial anchors are only the first and final sentence pairs of both texts .","in the most general case , initial anchors are only the first and final sentence pairs of both texts as depicted in figure 2 ." +"beaufort et al , 2010 ) combine a noisy channel model with a rulebased finite-state transducer and got reasonable results on french sms , but did not test their method on english text .","beaufort et al combine a noisy channel model with a rule-based finite-state transducer and got reasonable results on french sms , but have not tried their method on english text ." +blitzer et al apply the structural correspondence learning algorithm to train a crossdomain sentiment classifier .,"blitzer et al introduce an extension to a structural correspondence learning algorithm , which was specifically designed to address the task of domain adaptation ." 
+"with the crosslingual semantic frame based objective function not only helps to further sharpen the itg constraints , but still avoids excising relevant portions of the search space , and leads to better performance than either conventional itg or giza + + based approaches .","finally , we show that directly driving itg induction with a crosslingual semantic frame objective function not only helps to further sharpen the itg constraints , but still avoids excising relevant portions of the search space , and leads to better performance than either conventional itg or giza++ based approaches ." +inputs are projected into another high-dimensional space by a word .,the words appearing in vocabulary are indexed and associated with high-dimensional vectors . +"for example , “ reserate ” is correctly included in c rown as a hypernym of unlock % 2 : 35 : 00 : : ( to open the lock of ) and “ awesometastic ” as a synonym of fantastic % .","1 for example , “ reserate ” is correctly included in c rown as a hypernym of unlock % 2:35:00 : : ( to open the lock of ) and “ awesometastic ” as a synonym of fantastic % 3:00:00 : extraordinary:00 ( extraordinarily good or great ) ." +related tasks are implemented with bidirectional long short-term memory ( blstm ) recurrent neural network ( rnn ) .,bidirectional long short-term memory ( blstm ) recurrent neural network ( rnn ) has been successfully applied in many tagging tasks . +"but more recent approaches have tried to minimize the amount of supervision necessary ( cite-p-20-3-15 , cite-p-20-3-4 , cite-p-20-1-1 ) .","recent approaches try to minimize the amount of supervision needed ( cite-p-20-3-15 , cite-p-20-1-1 , cite-p-20-3-12 ) ." +habash and sadat have shown that tokenization is helpful for translating arabic .,previous results all suggest that some degree of tokenization is helpful when translating from arabic . 
+"microblogs , this paper proposes a novel search task that we call microblog event retrieval .","in this paper , we proposed a novel microblog search task called microblog event retrieval ." +it has been shown in previous work on relation extraction that the shortest dependency path between any two entities captures the information required to assert a relationship between them .,the dependency path is the shortest path between the two entities in a dependency parse graph and has been shown to be important for relation extraction . +"for extracting recurrent neural network language model features , we use elman 1 , a modification of the rnnlm toolkit 2 that outputs hidden layer activations .","for the decoder , we use a recurrent neural network language model , which is widely used in language generation tasks ." +our approach outperforms the current state-of-the-art by over 81 % larger f1-score .,our logistic regression model improves f1-scores by over 80 % in comparison to state-of-the-art approaches . +"also , f is the non-linear activation function and we use re-lu as well .",f is the non-linear activation function and we use relu in this paper . +lui and baldwin presented empirical evidence that ld feature selection was effective for domain adaptation in language identification .,"lui and baldwin showed that it is relatively easy to attain high accuracy for language iden- , and later shown to be effective for feature selection in text categorization ." +this is inspired by other predictive models in the representation learning literature .,the learning technique follows other representation learning algorithms in using negative sampling . +"we pretrain 200-dimensional word embeddings using word2vec on the english wikipedia corpus , and randomly initialize other hyperparameters .","we obtain monolingual texts for both english and the target language from wikipedia , and independently train word embeddings for each language using word2vec ." 
+"by combining the hal model and relevance feedback , the cip can induce semantic patterns from the unannotated web corpora .","in addition , combining the relevance feedback and pseudo-relevance feedback , the induction process can be guided to induce more relevant semantic patterns ." +we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora .,"for a fair comparison to our model , we used word2vec , that pretrain word embeddings at a token level ." +"grefenstette and nioche and jones and ghani use the web to generate corpora for languages for which electronic resources are scarce , and resnik describes a method for mining the web in order to obtain bilingual texts .","grefenstette and nioche and jones and ghani use the web to generate corpora for languages where electronic resources are scarce , while resnik describes a method for mining the web for bilingual texts ." +by a coordinating conjunction is a classic hard problem .,resolving coordination ambiguity is a classic hard problem . +gedigian et al trained a maximum entropy classifier to discriminate between literal and metaphorical use .,"more recently , gedigian et al used hand-annotated corpora to train an automatic metaphor classifier ." +"by removing the tensor ¡¯ s surplus parameters , our methods learn better and faster .","by removing the tensor¡¯s surplus parameters , our methods learn better and faster as was shown in experiments ." +"the gp is fully defined by the covariance structure assumed between the observed points , and its hyperparameters , which can be automatically learned from data .",gp is a non-parametric model which allows for powerful modelling of the underlying intensity function . 
+"in the initial formulation of velldal , an svm classifier was applied using simple n-gram features over words , both full forms and lemmas , to the left and right of the candidate cues .","in the initial formulation of velldal , an svm classifier was trained using simple n-gram features over words , both full forms and lemmas , to the left and right of the candidate cues ." +"in this paper , we describe a cross-domain sentiment classification method using an automatically created sentiment sensitive .","in this paper , we describe a cross-domain sentiment classification method using an automatically created sentiment sensitive thesaurus ." +ganin et al propose introducing an adversarial loss to make shared features domaininvariant .,ganin et al proposed an adversarial network for domain adaptation . +"experimental results show that our model outperforms the sequenceto-sequence baseline by a large margin , and achieves the state-of-the-art performances .",experimental results show that our model achieves the state-of-the-art performances on the benchmark dataset . +experimental results show that the proposed method significantly outperforms the standard convolution tree kernel .,experimental results show that the composite kernel outperforms the previously best-reported methods . +padó and lapata use similar dependency subtrees as a feature to create general semantic space models .,"more recently , pado and lapata propose a semantic space based on dependency paths ." +"moreover , shaalan et al created a model using unigrams to correct arabic spelling errors and recently , created madami-ra , a morphological analyzer and a disambiguation tool for arabic .","more recently , alkanhal et al wrote a paper about a stochastic approach used for word spelling correction and attia et al created a dictionary of 9 million entries fully inflected arabic words using a morphological transducer ." 
+we show that the proposed approaches outperform the state-of-the-art ner models ( both with and without using additional visual contexts ) .,"we showed that the modality attention based model outperforms other state-of-the-art baselines when text was the only modality available , by better combining word and character level information ." +galley and manning use the shift-reduce algorithm to conduct hierarchical phrase reordering so as to capture long-distance reordering .,for example galley and manning propose a shift-reduce style method to allow hieararchical non-local reorderings in a phrase-based decoder . +"to compare our model with the other systems , we evaluated the performance of our model when the entity boundaries were given .",we compare the entity and relation extraction performance of our model with other systems . +"notice how this is also a general problem of statistical learning processes , as large .",notice how this is also the strategy pursued in recent work on deep learning approaches to nlp tasks . +adaptor grammars are a framework for bayesian inference of a certain class of hierarchical nonparametric models .,adaptor grammars is a non-parametric bayesian framework for performing grammatical inference over parse trees . +in our data is that a model based on f-scores alone predicts only a small proportion of the variance .,we show that standard intrinsic metrics such as f-score alone do not predict the outcomes well . +shi and mihalcea argue that mapping the lexical entries in framenet to wordnet senses via verbnet is a promising approach to connecting these complementary resources .,"shi and mihalcea propose the integration of verbnet , wordnet and framenet into a knowledge base and use it in the building of a semantic parser ." 
+and the results demonstrate that facial expressions hold great promise for distinguishing the pedagogically relevant dialogue act .,the results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification . +"in this paper , we present a system that we developed to automatically learn elements of a plan and the ordering constraints .","in this paper , we focus on learning the plan elements and the ordering constraints between them ." +"following the work of nogueira dos on this dataset , we apply the feature set of ratnaparkhi .","we apply sixteen feature templates , motivated by by ratnaparkhi ." +"thus , we assume that these structures are latent and make use of the latent structure perceptron to train our models .","because coreference trees are not given in the training data , we assume that these structures are latent and use the latent structured perceptron as the learning algorithm ." +the seminal work in the field of hypernym learning was done by hearst .,one of the first studies on acquisition of hyponymy relations was made by hearst . +analysis on the experimental results suggests that the extrinsic evaluation based on slpt problems captures a different dimension of translation quality than the manual / automatic intrinsic .,an analysis of the experimental results showed that the extrinsic evaluation captured a different dimension of translation quality than that captured by manual and automatic intrinsic evaluation . +we use the mstparser implementation described in mcdonald et al for feature extraction .,"we use the mstparser to generate k-best lists , and optimize k and α on the development set ." 
+"to overcome this independence assumptions imposed by the bi-lstm and to exploit this kind of labeling constraints in our arabic segmentation system , we model label sequence logic jointly using conditional random fields .","to overcome the independence assumptions imposed by the bilstm and exploit these kind of labeling constraints in our arabic segmentation system , we model label sequence logic jointly using conditional random fields ." +"we evaluate the reliability of these candidates , using simple metrics based on co-occurence frequencies , similar to those used in associative approaches to word alignment .","we evaluate the reliability of these candidates using simple metrics based on co-occurrence frequencies , similar to those used in associative approaches to word alignment ." +"in this study , we propose a new approach that reduces the cost of scaling natural language understanding to a large number of domains and experiences .","in this paper , we propose a practical technique that addresses this issue in a web-scale language understanding system : microsoft ’ s personal digital assistant cortana ." +we believe that this framework will be useful for a variety of applications .,we believe this work to be useful for a variety of applications . +"higher-order dependency features encode more complex subparts of a dependency tree structure than first-order , bigram .",higher-order dependency features are known to improve dependency parser accuracy . +the sumat project 3 included a statistical approach for serbian and slovenian subtitles .,"different smt systems for subtitles were developed in the framework of the sumat project , 6 including serbian and slovenian ." +this paper presents an empirically motivated theory of the discourse focusing nature of accent .,this paper presents an empirically motivated theory of the discourse focusing function of accent . 
+bouchard-côté et al employ a graphical model to reconstruct the word forms in protoaustronesian using swadesh lists .,bouchard-côté et al employ a graphical model to reconstruct the proto-word forms from the synchronic word-forms for the austronesian language family . +"by presenting interesting future research directions , which we believe are fruitful in advancing this field by building high-quality tweet representation learning models .","we also provide interesting future directions , which we believe are fruitful in advancing this field by building high-quality tweet representation learning models ." +"with the consideration of user and product information , our model can significantly improve the performance of sentiment classification .","to address this issue , we propose a hierarchical neural network to incorporate global user and product information into sentiment classification ." +distributed word representations induced through deep neural networks have been shown to be useful in several natural language processing applications .,vector space word representations learned using unsupervised algorithms are often effective features in supervised learning methods . +louis and nenkova implemented features to capture aspects of great writing in science journalism domain .,louis and nenkova defined genre-specific and general features to predict the article quality in science journalism domain . +"jindal and liu use machine learning to identify some comparative structures , but do not provide a semantic interpretation .","similarly , jindal and liu had used pattern mining for identifying comparative sentences in a supervised learning setting ." +"grammars are , however , typically created manually or learned in a supervised fashion , requiring extensive manual effort .","such grammars are , however , typically created manually , which is time-consuming and error-prone ." 
+"as a testbed , we present a sequence of ‘ negative ’ results culminating in a ‘ positive ’ one – showing that while most agent-invented languages are effective ( i . e . achieve near-perfect task .","in conclusion , we presented a sequence of ‘ negative ’ results culminating in a ‘ positive ’ one – showing that while most invented languages are effective ( i.e . achieve near-perfect rewards ) , they are decidedly not interpretable or compositional ." +we evaluated bleu and nist score as shown in table 3 .,we obtained bleu scores for e2f direction as shown in table 2 . +"in a general smt system , this paper proposes a dedicated statistical model to generate measure words for englishto-chinese translation .","in this paper we propose a statistical model for measure word generation for englishto-chinese smt systems , in which contextual knowledge from both source and target sentences is involved ." +"in line with the dual coding theory , anderson et al demonstrate an advantage in decoding brain activity patterns of abstract words for text-based semantic models over the image-based ones .","contrary to previous findings , anderson et al find no advantage in decoding neural activity patterns associated with concrete words for image-based models ." +"when the large-scale bilingual corpus is not available , some researchers use existing dictionaries to improve word alignment .","when the large-scale bilingual corpus is unavailable , some researchers acquired class-based alignment rules with existing dictionaries to improve word alignment ." +we use the standard stanford-style set of dependency labels .,the dependency structure is according to stanford dependency . +"with equal corpus sizes , we found that there is a clear effect of text type on text prediction quality .","in this paper , we address the influence of text type and domain differences on text prediction quality ." 
+"in this paper , we present the first completely data-driven approach for generating short high-level summaries of source code .","in this paper , we present the first completely data-driven approach for generating high level summaries of source code ." +hearst proposed a lexico-syntactic pattern based method for automatic acquisition of hyponymy from unrestricted texts .,hearst extracted information from lexico-syntactic expressions that explicitly indicate hyponymic relationships . +"through the proposed system , we demonstrate that it is feasible to automatically reconstruct a detailed list of individual life events .",experimental results show that our model is able to extract a wide variety of major life events . +"in the following , we compare our approach to some related work on verb alternations .","in this paper , we approach the problem of verb alternations from the perspective of" +our baseline decoder is an in-house implementation of bracketing transduction grammar in cky-style decoding with a lexical reordering model trained with maximum entropy .,"our baseline system is a phrase-based system using btgs , which includes a contentdependent reordering model discriminatively trained using reordering examples ." +"we present the tweetingjay system for detecting paraphrases in tweets , with which we participated in task 1 of semeval 2015 .","we describe tweetingjay , a system for detecting paraphrases and semantic similarity of tweets , with which we participated in task 1 of semeval 2015 ." 
+"as input , the proposed regional cnn uses individual sentences as regions , dividing an input text into several regions such that the useful affective information in different regions can be extracted and weighted according to their contribution to the va prediction .","unlike a conventional cnn which considers a whole text as input , the proposed regional cnn uses an individual sentence as a region , dividing an input text into several regions such that the useful affective information in each region can be extracted and weighted according to their contribution to the va prediction ." +"the lemmatization is performed using the wordnetlemmatizer , contained in nltk .",this feature is produced using nltk to generate the lemma of each word according to its tagged pos . +"in this work , we present a new discriminative model for semantic parsing which extends the hybrid tree .","in this work , we specifically focus on the tree-structured representations for semantics ." +"since proposed in , the sequence-to-sequence model has been achieving the stateof-the-art performance when combined with the attention mechanism .",kyoto-nmt implements the sequence-to-sequence model with attention mechanism first proposed in as well as some more recent improvements . +co-training model can learn a performance-driven data selection policy to select high-quality unlabeled data .,co-training methods exploit predicted labels on the unlabeled data and select samples based on prediction confidence to augment the training . +"with regards to some natural linguistic phenomena , we can now show that it can be used successfully as part of existing nlp technologies .",we also demonstrate how it can be used to improve existing applications in information retrieval and summarization . 
+"therefore , we treat each similarity function as a subordinate predicting algorithm and utilize the specialist learning framework to combine the predictions .",we utilize the specialist ensemble learning framework to combine these component similarities into the relation strength for clustering . +transition-based and graph-based models have attracted the most attention of dependency parsing in recent years .,graph-based models and transitionbased models are two dominant paradigms in the dependency parsing community . +style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context .,style transfer is the task of automatically transforming a piece of text in one particular style into another . +the goal of semantic parsing is to map text to a complete and detailed meaning representation .,the goal of semantic parsing is to map text into a complete and detailed meaning representation . +johnson and showed that word segmentation accuracy improves when phonotactic constraints on word shapes are incorporated into the model .,johnson showed that word segmentation accuracy improves if the model can learn different consonant sequences for word-inital onsets and wordfinal codas . +this paper presents an approach to dependency parsing using an extended finite state model .,this paper presents a dependency parsing scheme using an extended finite state approach . +finkel and manning proposed a former version of it based on the use of a hierarchical bayesian prior .,finkel and manning further propose a hierarchical bayesian extension of this idea . +we use a dnn model mainly suited for sequence tagging and is a variant of the bi-lstm-crf architecture .,"as mentioned above , the baseline model is a char-lstm-lstm-crf model ." 
+"language is a dynamic system , constantly evolving and adapting to the needs of its users and their environment ( cite-p-15-1-0 ) .","language is the primary tool that people use for establishing , maintaining and expressing social relations ." +"on the character level , we find that learning character-level representations with an rnn architecture significantly improves results over standard distance metrics used in previous bli research .",results also indicate that learning character-level representations from the data is beneficial as the char -lstm joint model significantly outperforms the baselines used in prior work . +the vocabulary size of the participants was measured by using a japanese language vocabulary evaluation test .,the vocabulary size of the participants was measured using a japanese language vocabulary evaluation test . +we propose a new algorithm for graph-based ssl and use the task of text classification .,we have proposed a new algorithm for semi-supervised text categorization . +the standard way to handle this problem is to hand-craft a finite set of features which provides a sufficient summary of the history .,the standard way to handle this problem is to handcraft a finite set of features which provides a sufficient summary of the unbounded history . +feature function scaling factors λ m are optimized based on a maximum likely approach or on a direct error minimization approach .,feature function scaling factors λ m are optimized based on a maximum likelihood approach or on a direct error minimization approach . +"for example , both finkel and manning and mcdonald et al methods suffer from a high time complexity which is cubic in the number of tokens in the sentence .","however , one disadvantage of their models , as in , is that their time complexity is cubic in the number of tokens in the sentence ." +"for english , we use the fasttext word embedding of dimension 300 .","additionally , we use averaged word embeddings ." 
+entrainment is the phenomenon of interlocutors becoming more similar to each other in their speech in the course of a conversation .,entrainment is the phenomenon of the speech of conversational partners becoming more similar to each other . +"significance-based n-gram selection not only reduces language model size , but it also improves perplexity when applied to a number of widely-used smoothing methods , including katz backoff and several variants of absolute discounting .","we introduce significance-based n-gram selection , which not only reduces model size , but also improves perplexity for several smoothing methods , including katz back-off and absolute discounting ." +"in this paper , we demonstrate that a substantially simpler approach that starts from a tree drawn from the uniform distribution .","in this paper , we introduce a randomized greedy algorithm that can be easily used with any rich scoring function ." +"we present plato , a probabilistic model for entity resolution that includes a novel approach for handling noisy or uninformative features .","we have presented plato , a simple and scalable entity resolution system that leverages unlabeled data to produce state-of-the-art results ." +"as well as new attention to syntactic phenomena such as scrambling have given rise to increased interested in multicomponent tag formalisms ( mctag ) , which extend the flexibility , and in some cases generative capacity of the formalism .",recent applications of tree-adjoining grammar ( tag ) to the domain of semantics as well as new attention to syntactic phenomena have given rise to increased interested in more expressive and complex multicomponent tag formalisms ( mctag ) . +we model the generative architecture with a recurrent language model based on a recurrent neural network .,we extend a recurrent neural network language model so that its output can be conditioned on a featurized melody . 
+and our automatically generated questions help to improve a strong extractive qa system .,"furthermore , the questions generated by our model help to improve a strong extractive qa system ." +"in order to acquire class attributes , a common strategy is to first acquire attributes of instances , then aggregate or propagate attributes , from instances to the classes to which the instances belong .","in order to acquire class attributes in particular , a common strategy is to first acquire attributes of instances , then aggregate or propagate attributes , from instances to the classes to which the instances belong ." +"and the results show that our proposed approach outperforms twical , the state-of-the-art open event extraction system , by 7 . 7 % in f-measure .",experimental results show our proposed framework outperforms the state-of-the-art baseline by over 7 % in f-measure . +in the form of selective labeled instances from different domains is congregated to form an auxiliary training set which is used for learning .,the transferable knowledge was assimilated in terms of selective labeled instances from different source domain to form a k-class auxiliary training set . +"through comparison comprehension , we have crowdsourced a dataset of more than 14k comparison paragraphs .",so far we have crowdsourced a dataset of more than 14k comparison paragraphs comparing entities from a variety of categories such as fruits and animals . +"to overcome this problem , we use wordnet to find semantically equivalent replacements for unknown words .","we look for further alternatives in wordnet , which has previously been widely used to find semantically related words ." +this paper presents a new method for building such a resource .,this paper presents a new method for systematically organizing a large set of such phrases . 
+our hypothesis is a generalization of the original hypothesis since it allows a reducible sequence to form several adjacent subtrees .,"this hypothesis is the foundation for distributional semantics , in which words are represented by context vectors ." +fasttext pre-trained vectors are used for word embedding with embed size is 300 .,the fasttext pre-trained vectors are used for word embedding with embed size is 300 . +"xue et al , 2011 ) adopted the noisy-channel framework and incorporated orthographic , phonetic , contextual , and acronym expansion factors in calculating the likelihood probabilities .","xue et al , 2011 ) adopted the noisy-channel framework for normalisation of microtext and proved that it is an effective method for performing normalisation ." +we apply a supervised wsd system to derive the english word senses .,then we automatically induce the english word sense correspondences to l2 . +"in transition-based parsing , and multi-layer attention is introduced to capture multiple word dependencies in partial trees .",the multi-layer attention aims to capture multiple word dependencies in partial trees for action prediction . +"following the previous work , we employ the linear chain crfs as our learning model .","as shown in similar to the first step , we use a sequence labelling approach with a crf model ." +"semantic transformation method is the most sophisticated approach for linguistic steganography , and perhaps impractical .",hence finding linguistic transformations which can be applied reliably and often is a challenging problem for linguistic steganography . +"our corpus is europarl , specifically , portions collected over the years 1996-1999 and 2001-2009 .","our main corpus is europarl , specifically portions collected over years 1996 to 365 1999 and 2001 to 2009 ." 
+then we train word2vec to represent each entity with a 100-dimensional embedding vector .,we first encode each word in the input sentence to an m-dimensional vector using word2vec . +propbank encodes propositional information by adding a layer of argument structure annotation to the syntactic structures of the penn treebank .,propbank encodes propositional information by adding a layer of argument structure annotation to the syntactic structures of verbs in the penn treebank . +"in a previous study , we presented a fully automated spoken dialogue system that can perform the map task with a user .","in a previous study , we used this paradigm for collecting data on how humans elicit feedback in humancomputer dialogue ." +"as for experiments , state-of-the-art svm and knn algorithm are adopted for topic classification .","as for experiments , state-of-the-art svm and knn algorithms are employed for topic classification ." +"in section 2 , we talk more about the intricacies of basilica and agents .","in this section , we will discuss these four conversational agents briefly ." +"to the english wikipedia , the typing information from wikipedia categories and freebase are useful language-independent features .","since words in any language are grounded to the english wikipedia , the corresponding wikipedia categories and freebase types can be used as language-independent features ." +approach is that redundancy within the set of related sentences provides a reliable way of generating informative and grammatical sentences .,the key assumption is that redundancy provides a reliable way of generating grammatical sentences . +"thanks to entity-level information , the cluster-ranking model correctly declines to merge these two large clusters .","using the cluster-pair representations , our network learns when combining two coreference clusters is desirable ." 
+experimental results show that our strategy of combining vision and language produces readable and descriptive sentences .,experimental results show that our approach produces sentences that are both relevant and readable . +"many attempts have been made along these lines , as for example brill and goto et al , with some claiming performance equivalent to lexicon-driven methods , while kwok reports good results with only a small lexicon and simple segmentor .","many attempts have been made along these lines , as for example brill and goto et al , with some claiming performance equivalent to lexicon-based methods , while kwok reports good results with only a small lexicon and simple segmentor ." +a and b should give participants enough tools to create a cqa system to solve the main task .,subtasks a and b should give participants enough tools to create a cqa system to solve the main task . +"during this phase , a human expert supervises samples , that is , provides the correct interpretation .","in the training phase , a sample is then selected from the system outputs and provided with the correct interpretation by a human expert ." +we use the mallet implementation of a maximum entropy classifier to construct our models .,we utilize a maximum entropy model to design the basic classifier for wsd and tc tasks . +"in this section , we compare our method with the previous work from the feature engineering viewpoint .",section 5 compares our method with the previous work from the viewpoint of feature exploration . 
+"in a knowledge base ( kb ) by jointly embedding the union of all available schema types — not only types from multiple structured databases ( such as freebase or wikipedia infoboxes ) , but also types expressed as textual patterns from raw text .","ties and relations in a knowledge base ( kb ) by jointly embedding the union of all available schema types—not only types from multiple structured databases ( such as freebase or wikipedia infoboxes ) , but also types expressed as textual patterns from raw text ." +in this paper we examine different linguistic features for sentimental polarity classification .,in this paper we investigate binary polarity classification ( positive vs. negative ) . +annotation projects employ guidelines to maximize inter-annotator agreement .,"typically , nlp annotation projects employ guidelines to maximize inter-annotator agreement ." +"f-structures are abstract , high-level syntactic representations .","f-structures and udrss are underspecified syntactic and semantic representations , respectively ." +some of the compound functional expressions in japanese are ambiguous .,"however , some phrases have two or more idiomatic meanings without context ." +"first , we apply sentence tokenisation over the mrec using the stanford corenlp toolkit .","first , we use the stanford corenlp package for tokenization and sentence splitting ." +and will therefore provide a productive research tool in addition to the immediate practical benefit of improving the fluency and readability of generated texts .,this framework facilitates detailed research into evaluation metrics and will therefore provide a productive research tool in addition to the immediate practical benefit of improving the fluency and readability of generated texts . 
+word sense disambiguation was performed using the babelfy tool which relies on the multilingual resource babelnet .,they used babelnet synsets to identify semantic concepts and disambiguate words using the word sense disambiguation system babelfy . +"in this study , rc datasets with different task formulations were annotated with prerequisite skills .","in this study , two classes of metrics were adopted for evaluating rc datasets : prerequisite skills and readability ." +"in this paper , we focus on enhancing the expressive power of the modeling , which is independent of the research of enhancing translation .","in this paper , we propose a nonlinear modeling for the quality of translation hypotheses based on neural networks , which allows more complex interaction between features ." +opinionfinder ( cite-p-7-1-12 ) is a system for mining opinions from text .,opinionfinder ( cite-p-8-1-11 ) is a system for mining opinions from text . +word categories are created based on graph clique sets .,categories are generated using a novel graph clique-set algorithm . +due to the flexible word ordering of a sentence and the existence of a large number of synonyms for words .,this is mainly caused by the flexible word ordering and the existence of the large number of synonyms for words . +several oov tokens are ambiguous and without contextual information .,the oov tokens that should be considered for normalization are referred to as ill-formed words . +hearst extracted information from lexico-syntactic expressions that explicitly indicate hyponymic relationships .,hearst handcrafted a set of lexico-syntactic paths that connect the joint occurrences of x and y which indicate hypernymy in a large corpus . 
+"in this paper , we present a logic form representation of knowledge which captures syntactic dependencies as well as semantic relations between concepts .",this paper proposes a knowledge representation model and a logic proving setting with axioms on demand successfully used for recognizing textual entailments . +"named-entity recognition ( ner ) is the task of identifying mentions of rigid designators from text belonging to named-entity types such as persons , organizations and locations ( cite-p-15-2-2 ) .","named-entity recognition ( ner ) is the task of identifying mentions of rigid designators from text belonging to named-entity types such as persons , organizations and locations ( cite-p-17-1-27 ) ." +"possible , we vet a collection of comparison paragraphs to obtain a test set on which human performs with an accuracy 94 . 2 % .","for benchmarking the progress , we filter a collection of these paragraphs to create a test set , on which humans perform with an accuracy of 94.2 % ." +although most current research in summarization focuses on newspaper articles .,"current research in summarization focuses on processing short articles , primarily in the news domain ." +"recently , the growth of online social networks provides the opportunity to perform user classification in a broader context .",the growth of online social networks provides the opportunity to analyse user text in a broader context . +we initialize the model parameters randomly using a gaussian distribution with xavier scheme .,we first initialize the model parameters by sampling from glorot-uniform distribution . +"a widely accepted way to use knowledge graph is tying queries with it by annotating entities in them , also known as entity linking .",entity linking is the task of mapping an entity mention in a text document to an entity in a knowledge base . 
+"smt systems , adopting either max-derivation decoding or max-translation decoding , have only used single models in decoding phase .",current smt systems usually decode with single translation models and can not benefit from the strengths of other models in decoding phase . +"this limitation is already discussed in and in , in which bilingual extensions of the word2vec architecture are also proposed .","this limitation is already discussed in and in , in which bilingual extensions of the word2vec architecture are proposed ." +zhou et al proposed attention-based bi-directional lstm networks for relation classification task .,"zhou et al proposed attention-based , bidirectional lstm networks for a relation classification task ." +because an individual event can be expressed by several sentences .,this is partly because the description of an individual event can spread across several sentences . +this is opposite to the conclusion in indomain tasks that using only adjectives as features results in much worse performance than using the same number of most frequent unigrams .,"indeed , it was resulted in that using only adjectives as features actually results in much worse performance than using the same number of most frequent unigrams ." +data show that the proposed algorithm achieves comparable results with other data-driven dependency parsing algorithms .,experimental results show that the proposed top-down parser achieves competitive results with other data-driven parsing algorithms . +"recently there has been a growing understanding both of the need for the appropriate handling of multiword expressions , and of the complexities involved in the task .","recently , there has been increasing awareness of the need for appropriate handling of multiword expressions in nlp tasks ." +a similar effort was also made in the eurowordnet project .,a similar effort was also made in the eurowordnet project . 
+"to accurately represent knowledge , they must represent noun phrases , concepts , and the many-to-many mapping from noun phrases to concepts .","to minimize such confusion , a system must separately represent noun phrases , the underlying concepts to which they can refer , and the many-to-many “ can refer to ” relation between them ." +we also report the results using bleu and ter metrics .,we use bleu and ter we expose the statistical decisions in eqn . +to show that it is possible to measure diachronic semantic drifts within social media and within the span of a few years .,we illustrate that it is possible to measure diachronic semantic drifts within social media and within the span of a few years . +su et al presented a system to detect and rephrase profane words written in chinese .,"su et al propose a system which can not only detect , but also rephrase abusive language in chinese ." +"using a set of basic unit features from each feature subspace , we can achieve reasonably good performance .","within each feature subspace , using only the basic unit features can already give reasonably good performance ." +"in the future , we plan to work towards our long-term goal , i . e . , including more linguistic information in the skl framework .","in the future , we plan to work towards our long-term goal , i.e. , including more linguistic information in the skl framework and showing that this can help ." +"we train and evaluate our linking model on the data set produced for the conll-08 shared task on joint parsing of syntactic and semantic dependencies , which is based on the propbank corpus .","we tested our model on a semantic role labeling benchmark , using propbank annotations and automatic charniak parse trees as provided for the conll 2005 evaluation campaign ." 
+bert is a bidirectional contextual language model based on the transformer architecture .,bert is a model trained for masked language modeling word prediction and sentence prediction using the transformer network . +"as new instructions are given , the instruction history expands , and as the agent acts .","as new instructions are given , the instruction history expands , and as the agent acts the world state changes ." +"small , andersen , and kempler showed that paraphrased repetition is just as effective as verbatim repetition .",small et al showed that paraphrased repetition is just as effective as verbatim repetition . +the experiment was set up and run using the scikit-learn machine learning library for python .,experiments were run with a variety of machine learning algorithms using the scikit-learn toolkit . +"al-onaizan and knight present a hybrid model for arabic-to-english transliteration , which is a linear combination of phoneme-based and grapheme-based models .","al-onaizan and knight compare a grapheme-based approach , a phoneme-based approach and a linear combination of both for transliteration ." +"katiyar and cardie proposed a recurrent neural network to extract features to learn an hypergraph structure of nested mentions , using a bilou encoding scheme .",katiyar and cardie presented a standard lstm-based sequence labeling model to learn the nested entity hypergraph structure for an input sentence . +zhao and ng first present a machine learning approach to identify and resolve zps .,zhao and ng use the learning-based model to locate and resolve zero anaphoras . +morphological analysis is to reduce the sparse data problem in under-resourced languages .,morphological analysis is to reduce the sparse data problem in under-resourced languages . +"however , training such models requires a large corpus of annotated dialogues .","however , these models require a large corpus of dialogues to learn effectively ." 
+"after the contest , we tuned the parameter used in the simple bayes method .","after the contest , we tuned the parameter used in the simple bayes method , and it obtained higher precision ." +"for instance , ‘ seq-kd + seq-inter + word-kd ’ in table 1 means that the model was trained on seq-kd data and fine-tuned towards seq-inter data .","9 for instance , ‘seq-kd + seq-inter + word-kd’ in table 1 means that the model was trained on seq-kd data and fine-tuned towards seq-inter data with the mixture cross-entropy loss at the word-level ." +"in this deployment paper , we target the development of content-based methods for job recommendations .","in this paper , we have introduced a new method for automatically creating datasets for the offline evaluation of job posting similarities ." +into the grammar-based generation makes the generated responses more relevant to the document content .,we also show that conditioning the generation on topic models makes generated responses more relevant to the document content . +we measure the quality of the automatically created summaries using the rouge measure .,we then compare the system summaries against the manual summaries using the rouge-1 metric . +alignment types are shown with the ∗ symbol .,the incorrectly predicted tags are shown with the ∗ symbol . +"in this paper , we compare the merits of these different language modeling approaches .","in this paper , we study the relative merits of these approaches ." +"grosz and sidner , in their tripartite model of discourse structure , classify cue phrases based on the changes they signal to the attentional and intentional states .",grosz and sidner classify cue phrases based on changes to the attentional stack and intentional structure found in their theory of discourse . 
+good performance in many natural language processing tasks has been shown to depend heavily on integrating many sources of information .,"the performance of many natural language processing tasks , such as shallow parsing and named entity recognition , has been shown to depend on integrating many sources of information ." +blei et al proposed lda as a general bayesian framework and gave a variational model for learning topics from data .,sadamitsu et al proposed a bootstrapping method that uses unsupervised topic information estimated by latent dirichlet allocation to alleviate semantic drift . +in this paper we presented a novel approach to unsupervised role induction .,in this paper we present a simple approach to unsupervised semantic role labeling . +that enable the first computational study of word generalization that is integrated within a word learning model .,we present the first computational study of word generalization integrated within a word-learning model . +"other than bengali , the works on hindi can be found in li and mccallum with crf and saha et al with a hybrid feature set based me approach .","other than bengali , the works on hindi can be found in li and mccallum with crf and cucerzan and yarowsky with a language independent method ." +we achieve competitive accuracy to the state-of-the-art and on wmt ’ 15 english-german .,on wmt ’ 16 english-romanian translation we achieve accuracy that is very competitive to the current state-of-the-art result . +"the framenet database is a lexical resource of english describing some prototypical situations , the frames , and the frame-evoking words or expressions associated with them , the lexical units .","the framenet database , fillmore et al , is an english lexical resource based on the description of some prototypical situations , the frames , and the frame-evoking words or expressions associated to them , the lexical units ." 
+efforts have focussed on automatically solving school level math word problems .,there has also been efforts to automatically solve school level math word problems . +this paper has reported the effect of corpus size on case frame acquisition .,this paper reports the effect of corpus size on case frame acquisition for discourse analysis in japanese . +we use lstm-based neural language models for the lexical features .,"we use an lstm-based neural language model , with class-based input rather than words ." +wilson et alpresent a two stage classification approach to determine the contextual polarity of subjective clues in a corpus .,wilson et al present an approach to classify contextual polarity building on a two-step process . +goyal et al generate a lexicon of patient polarity verbs that impart positive or negative states on their patients .,goyal et al generate a lexicon of patient polarity verbs that imparts positive or negative states on their patients . +"for the simulation , dps will be autonomous conversational agents with a cognitive state consisting of goals , a notion of their expected behaviour .","for the simulation , dps will be autonomous conversational agents with a cognitive state consisting of goals , a notion of their expected behaviour in a political interview , priorities , and some knowledge of the world ." +we show a self-training protocol that achieves better results than all of these methods .,our results show that self-training is of substantial benefit for the problem . +we present a new approach to cross-language text classification that builds on structural correspondence learning .,"section 4 describes our main contribution , a new approach to cross-language text classification based on structural correspondence learning ." +using word appearance in context is an effective element .,we also introduce word appearance in context . 
+a first version of dependency tree kernels was proposed by culotta and sorensen .,culotta and sorensen proposed a tree kernel for dependency trees . +in this paper we examine different linguistic features for sentimental polarity classification .,"in this paper , we have evaluated various features and the domain effect on sentimental polarity classification ." +"following the approach in , we use the morfessor categories-map algorithm .",we use both the morfessor baseline and the morfessor categories-map algorithms . +the parsing model we use is based on the stochastic tree-insertion grammar model described by chiang .,"the parsing model used is essentially that of chiang , which is based on a highly restricted version of tree-adjoining grammar ." +"these results were corroborated by lembersky et al , 2012a lembersky et al , 2013 , who further demonstrated that translation models can be adapted to translationese , thereby improving the quality of smt even further .","these results were corroborated by lembersky et al , 2012a lembersky et al , 2013 , who showed that translation models can be adapted to translationese , thereby improving the quality of smt even further ." +we have investigated style accommodation in online discussions by means of a new model that takes into account the presence of a marker .,"we investigate style accommodation in online discussions , in particular its interplay with content agreement and disagreement ." +we present a generative distributional model for the unsupervised induction of natural language syntax .,we have presented a simple generative model for the unsupervised distributional induction of hierarchical linguistic structure . +the traditional attention mechanism was proposed by bahdanau et al in the nmt literature .,"in nmt , bahdanau et al first proposed to use an attention mechanism in the decoder ." 
+"to evaluate our wsd program , named lexas ( lexical ambiguity-resolving _ system ) , we tested it on a common data set .","we tested our wsd program , named lexas , on both a common data set used in previous work , as well as on a large sense-tagged corpus that we separately constructed ." +this evaluation is made possible by our extension to all target composition models of the corpus-extracted phrase approximation method originally proposed in ad-hoc settings by baroni and zamparelli and guevara .,"we evaluated the performance of the composition models on the test split of the dataset , using the rank evaluation proposed by baroni and zamparelli ." +"cucerzan and yarowsky tagger from existing linguistic resources , namely a dictionary and a reference grammar , but these resources are not available , much less digitized , for most under-studied languages .","cucerzan and yarowsky learn a pos-tagger from existing linguistic resources , namely a dictionary and a reference grammar , but these resources are not available , much less digitized , for most under-studied languages ." +"they are a combination of features introduced by gildea and jurafsky , ones proposed in , surdeanu et al and the syntactic-frame feature proposed in .","the feature set used in assert is a combination of features described in gildea and jurafsky as well as those introduced in pradhan et al , surdeanu et al , and the syntactic-frame feature proposed in ." +"most related work focused on detecting profanity , using list-based methods to identify offensive words .",most related work in the field of abusive language detection has focused on detecting profanity using list-based methods to identify offensive words . 
+"latent dirichlet allocation is a popular probabilistic model that learns latent topics from documents and words , by using dirichlet priors to regularize the topic distributions .","latent dirichlet allocation is a widely used type of topic model in which documents can be viewed as probability distributions over topics , θ ." +"in particular , we implemented the ghkm algorithm as proposed by galley et al from word-aligned treestring pairs .",we applied the algorithm of galley et al to extract tree-to-string translation rules . +"at competition time , we achieved the sixth best result on the task .","at competition time , we achieved the sixth best result on the task from a set of twelve systems ." +"for each node n , state is assigned a state of ag .","for each node n , state is assigned a state of ag as specified above ." +"in terms of speed and memory consumption , graph unification remains the most expensive component in unification-based grammar parsing .","in terms of both speed and memory consumption , graph unification remains the most expensive component of unification-based grammar parsing ." +sangati et al proposed to use a third-order generative model for reranking k-best lists of dependency parses .,sangati et al proposed a k-best generative reranking algorithm for dependency parsing . +the maximum entropy model estimates a probability distribution from training data .,the maximum entropy model estimates the probability of a time-bin given the observed medical event . +"we construct our joint model as an extension to the discriminatively trained , feature-rich , conditional random field-based , crf-cfg parser of .","our parsing model is the discriminatively trained , conditional random field-based context-free grammar parser of ." +"based on labeledlda , which obtains a 25 % increase in f 1 score over the co-training approach to named entity classification .","labeledlda outperforms co-training , increasing f 1 by 25 % over ten common entity types ." 
+"in this paper , we propose several novel active learning ( al ) strategies for statistical machine translation .",we provided a novel active learning framework for smt which utilizes both labeled and unlabeled data . +stochastic optimality theory ( cite-p-17-1-2 ) is a variant of optimality theory that tries to quantitatively predict linguistic variation .,stochastic optimality theory ( cite-p-17-1-2 ) is a widely-used model in linguistics that did not have a theoretically sound learning method previously . +"and thus we take a joint annotation approach , which combines several independent annotations to improve the overall annotation accuracy .",this method allows us to exploit the dependency between different unsupervised annotations to further improve the accuracy of the entire set of annotations . +"whitehill et al proposed a probabilistic model to filter labels from non-experts , in the context of an image labeling task .",whitehill et al explored a probabilistic model to combine labels from both human labelers and automatic classifiers in image classification . +"experiment results show that both the identified topics and topical structure are intuitive and meaningful , and they are helpful for improving the performance of tasks such as sentence annotation and sentence ordering .","experiment results show that the proposed structural topic model can effectively discover topical structures in text , and the identified structures significantly improve the performance of tasks such as sentence annotation and sentence ordering ." +"it was first used in chinese word segmentation by , where maximum entropy methods were used .",it was first implemented in chinese word segmentation by using the maximum entropy methods . +"in this paper , we propose a working definition of thwarting amenable to machine learning .","in section 4 , we propose a na ? ve rule based approach to detect thwarting ." 
+"in this work , we deal with the problem of detecting a textual review .","in this work , we deal with the problem of detecting a textual review as spam or not , i.e. , non-spam ." +"on the macro-averaged f 1-measure , our lexical classifier outperformed the majority-class baseline by 0 . 33 ( on b eetle ) and 0 . 18 ( on s ci e nts b ank ) .","on the macro-averaged f 1 -measure , our lexical classifier outperformed the majority-class baseline by 0.33 ( on b eetle ) and 0.18 ( on s ci e nts b ank ) and by 13 % and 3 % on accuracy ." +performance can be achieved when the test data is from the same domain as the training data .,the performance is always evaluated on a test set from the same domain as the training set . +"for arabic we use the penn arabic treebank , parts 1-3 in their latest versions .",labeled data for msa we use the penn arabic treebank . +morfessor 2.0 is a new implementation of the morfessor baseline algorithm .,morfessor is a family of methods for unsupervised morphological segmentation . +in this paper we present results on the problem of pos tagging english-spanish code-switched discourse .,this paper presents results on part-of-speech tagging spanish-english code-switched discourse . +we have proposed a neural network based insertion position selection model to reduce the computational cost of the decoding .,"in this paper , we propose a method to select the appropriate insertion position before decoding ." +"yu et al , 2002 ) has used pattern recognition techniques to summarize interesting features of automatically generated graphs of time-series data from a gas turbine engine .","yu et al , 2002 ) used pattern recognition techniques to summarize interesting features of automatically generated graphs of time-series data from a gas turbine engine ." +subject and object are usually easily determined in english .,"in german , subject-object ambiguities are frequent ." 
+"employing the usage notes of dictionaries and thesaurus as a methodology , the fine differences of the verbs were demonstrated in a two-part representation for lexical differentiation .","after the frequently used verbs were identified , the usage notes of dictionaries and thesaurus demonstrating the fine differences of the verbs were employed in a two-part representation for lexical differentiation ." +word senses occurring in a coherent portion of text tend to maximize domain similarity .,"as a consequence , word senses occurring in a coherent portion of text tend to maximize domain similarity ." +"the third one is a collection of tweets , collected by .","the third one is a tweet collection , which are gathered by ." +this hypothesis is the basis for our algorithm for distinguishing literal and metaphorical senses .,"this hypothesis is the foundation for distributional semantics , in which words are represented by context vectors ." +"in this paper , we present spot , a sentence planner , and a new methodology for automatically training spot .","in this paper , we present spot , a sentence planner , and a new methodology for automatically training spot on the basis of feedback provided by human judges ." +we show that our model significantly outperforms a feature based mention hypergraph model and a recent multigraph model on the ace dataset .,our model significantly outperforms the previously mentioned hypergraph model of lu and roth and muis and lu on entity mention recognition for the ace2004 and ace2005 corpora . +we present an algorithm for incremental parsing using parallel multiple contextfree grammars ( pmcfg ) .,we present an algorithm for incremental statistical parsing with parallel multiple context-free grammars ( pmcfg ) . +"firstly , we propose a novel way to predict readers ’ rating of text .",we presented a novel approach to predict reader’s rating of texts . 
+to do so we use the frame guidelines developed by boydstun et al .,to do so we used the frame guidelines developed by boydstun et al . +"lai et al and visin et al proposed recurrent cnns , while johnson and zhang proposed semi-supervised cnns for solving a text classification task .",lai et al proposed recurrent cnn while johnson and zhang proposed semi-supervised cnn for solving text classification task . +we use a dnn model mainly suited for sequence tagging and is a variant of the bi-lstm-crf architecture .,we used the bidirectional lstm architecture introduced by for named entity recognition . +that knowing multiple scores for each example instead of a single score results in a more reliable estimation of the quality of a nlp system .,we show that knowing multiple scores for each example instead of a single score results in a more reliable estimation of a system quality . +the experiments were conducted with the scikit-learn tool kit .,all the experiments are developed using the scikit-learn machine learning library . +the automatic annotation adaptation strategy for sequence labeling aims to strengthen a tagger trained on a corpus annotated in one annotation standard with a larger assistant corpus annotated in another standard .,automatic annotation adaptation for sequence labeling aims to enhance a tagger with one annotation standard by transferring knowledge from a source corpus annotated in another standard . +we propose a minimalistic model architecture based on gated recurrent unit combined with an attention mechanism .,"without any loss of generality , we propose a simple classification model using gated recurrent unit coupled with attention ." +"in which we have constructed around 100 , 000 cloze queries from clinical case reports .","our dataset uses clinical case reports with around 100,000 gap-filling queries about these cases ." 
+"rules , which can be hand crafted or learned by a system , are commonly created by looking at the context around already known entities , such as surface word patterns and dependency patterns .","rules are typically defined by creating patterns around the entities , such as lexico-syntactic surface word patterns and dependency tree patterns ." +gorithm is its integration with state-of-the-art statistical machine translation techniques .,the main innovation of w asp is its use of state-of-the-art statistical machine translation techniques . +word alignment is the process of identifying wordto-word links between parallel sentences .,word alignment is the task of identifying word correspondences between parallel sentence pairs . +in this paper the word prediction system soothsayer .,in this paper we presented the word prediction system soothsayer . +we focus on specific textual structures which share the same discourse properties and that are expected to bear hypernymy relations .,we focused on textual structures which correspond to a well defined discourse structure and which often bear hypernymy relations . +the data we use comes from the penn arabic treebank .,labeled data for msa we use the penn arabic treebank . +in huck et al a lexicalized reordering model for hierarchical phrase-based machine translation was introduced .,huck et al proposed a very effective phrase orientation model for hpb translation . +"lastly , sarawgi et al present a study that carefully and systematically controls for topic and genre bias .",sarawgi et al attempted to remove topic bias for identifying gender-specific stylistic markers . +"cite-p-25-3-8 also employed the typical attention modeling based seq2seq framework , but utilized a trick to control the vocabulary size .",cite-p-25-3-10 and cite-p-25-3-8 employed attention-based sequenceto-sequence ( seq2seq ) framework only for sentence summarization . 
+this paper presents a novel approach to determine textual similarity .,this paper proposes a novel approach to determine textual similarity . +we investigate an endto-end attention-based neural network .,"to address this factor , we proposed an endto-end attention-based neural network ." +"in the following example , the first occurrence of aluminum is only considered to be markable because it corefers with the occurrence of this noun .","in the following example , the first occurrence of aluminum is only considered to be markable because it corefers with the occurrence of this noun as a bare np in the second clause ." +which extends the distributional hypothesis to multilingual data and joint-space embeddings .,"in this work , we extend this hypothesis to multilingual data and joint-space embeddings ." +"in this approach can be interpreted as a conditional language model , it is suitable for nlg tasks .","since a scheme in this approach can be interpreted as a conditional language model , it is suitable for nlg tasks ." +we have developed an efficient and flexible kernel-based framework for comparing sets of contexts .,we present a new flexible and efficient kernel-based framework for classification with relational similarity . +"this parsing approach is very similar to the one used successfully by nivre et al , but we use a maximum entropy classifier to determine parser actions , which makes parsing considerably faster .","this parsing approach is very similar to the one used successfully by nivre et al , but we use a maximum entropy classifier to determine parser actions , which makes parsing extremely fast ." +models are presented to capture the term dependence .,"specifically , the language model is utilized to capture the term dependence ." 
+a system combination implementation developed at rwth aachen university is used to combine the outputs of the different engines .,a system combination implementation which has been developed at rwth aachen university is used to combine the outputs of different engines . +vikner and jensen apply the qualia structure of the possessee noun and type-shift the possessee noun into a relational noun .,"instead of selective binding , vikner and jensen type-shift the possessor noun using one of the qualia roles to explain the meaning of the genitive phrases following partee ." +"with respect to unlabeled data , we propose a novel semi-supervised learning objective that can be optimized using the expectation-maximization ( em ) algorithm .","more importantly , we propose a novel semi-supervised machine learning objective for estimating a crf model integrated with ve ." +"in contrast to previous methods , we analyze the cohesive strength within a chain .","in contrast to previous methods , we are able to select chains based on their cohesive strength ." +"to the best of our knowledge , this is the first work of using dnn technology for automatic math word problem solving .","to our knowledge , we are the first to apply seq2seq model to the task of math word problem solving ." +the negated event is the event or the entity that the negation indicates its absence or denies its occurrence .,negated event is the shortest group of words that is actually affected by the negation cue . +"there are several excellent textbook presentations of hidden markov models and the forward-backward algorithm for expectation-maximization , so we do not cover them in detail here .","there are a number of excellent textbook presentations of hidden markov models , so we do not present them in detail here ." 
+in this section we provide experiments comparing the performance of algorithm 2 with algorithm 1 as well as a baseline algorithm .,in this section we provide experiments comparing the performance of algorithm 2 with algorithm 1 as well as a baseline algorithm based on the approach of ( cite-p-13-1-15 ) . +"in ( partee , 1984 ) , within the framework of discourse representation theory ( drt ) ( cite-p-7-5-4 ) gives the wrong truth-conditions , when the temporal connective in the sentence is before or after .","the analysis in ( partee , 1984 ) of quantified sentences , introduced by a temporal connective , gives the wrong truth-conditions when the temporal connective in the subordinate clause is before or after ." +"for instance , the frequency distributions of most commonly-used words in a native and seven eastern european learner corpora are compared on different parts-of-speech categories .","for instance , the frequency distributions of most commonly-used words in a native and seven eastern european learner corpora are compared on various parts-of-speech categories ." +parsing strategies differ in terms of the order in which they recognize productions .,the parsing strategies differ in terms of the order in which they recognize productions in the derivation tree . +figure 2 shows the performance of baseline hmm and hmm + type + gen model for two word alignment .,figure 2 : a comparison between the performance of baseline hmm and hmm+type+gen model for two test alignment . +the first application of machine translation system combination used a consensus decoding strategy relying on a confusion network .,the first confusion network decoding method was based on multiple string alignment borrowed from biological sequence analysis . 
+we propose using reservoir sampling in the rejuvenation step to reduce the storage complexity of the particle filter .,"we use reservoir sampling to reduce the storage complexity of a previously-studied online algorithm , namely the particle filter , to constant ." +"blanco and moldovan annotate focus of negation in the 3,993 negations marked with argm-neg semantic role in propbank .",blanco and moldovan annotate focus on the negations marked with argm-neg role in propbank . +"in this paper , we discuss methods for automatically creating models of dialog structure using dialog act and task .","in this paper , we address one aspect of this problem – inferring predictive models to structure task-oriented dialogs ." +our solution for determining the sentiment score extends an earlier convolutional neural network for sentiment analysis .,"our solution , “ hcs , ” is a convolutional neural network to classify sentiment scores ." +recent studies have shown that subjectivity is a language property which is directly related to word senses .,"this is consistent with observations made in previous work that subjectivity is a property associated not with words , but with word meanings ." +we use the switchboard corpus and the british national corpus in this study .,the switchboard corpus and the british national corpus are used in this study . +baroni et al argues that predict models such as word2vec outperform count based models on a wide range of lexical semantic tasks .,recent work by baroni et al shows that word embeddings trained by predict models outperforms the count based models in various lexical semantic tasks . +"reasoning is a very challenging , but basic part of natural language inference ( nli ) ( cite-p-12-3-1 ) , and many relevant tasks have been proposed such as recognizing textual entailment ( rte ) and so on .",reasoning is a very important topic and has many important applications in the field of natural language processing . 
+"from medline , medie , and a gui-based medline search tool , info-pubmed .","medline , medie , and a gui-based efficient medline search tool , info-pubmed ." +"in this paper , we described the semi-automatic adaptation of a timeml annotated corpus from english to portuguese .","in this paper , we report on an experiment that consisted in adapting the english data of tempeval-1 to portuguese ." +labeled data exists for a fixed inventory of individual relation types .,the set of relation types is not pre-specified but induced from observed unlabeled data . +task has not been well investigated in microblogs yet .,"currently , this task has not been well studied in microblogs yet ." +"in this work , we present a general framework to perform such comparisons .",we have presented a bipartite graph model for drawing comparisons among large groups of documents . +"that requires systems to establish the meaning , reference and identity of events .","the complexity of this task challenges systems to establish the meaning , reference and identity across documents ." +that can be implemented in either version of roget ’ s or in wordnet .,it is worth noting that this method only relies on the hierarchies in roget ’ s and wordnet . +and most of it has focused on either discriminating between sincere and insincere arguments .,the focus has mostly been on detecting insincere reviews or arguments . +"the problem of correct identification of named entities is specifically addressed and benchmarked by the developers of information extraction system , such as the gate system .","the problem of correct identification of nes is specifically addressed and benchmarked by the developers of information extraction system , such as the gate system ." +argumentation features such as premise and support relation appear to be better predictors of a speaker ’ s influence rank .,particularly with the premise and supportrel types appear to be better predictors of a speaker’s influence rank . 
+"we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word .","our monolingual objective follows the glove model , which learns from global word co-occurrence statistics ." +this opens the possibility of computing the occurrences of discontinuous treelets in much the same way as is done in for discontinuous substrings .,"this allows in turn to compute by intersection the occurrences of discontinuous treelets , much like what is done in for discontinuous strings ." +"in this article , we use the posterior regularization framework ( cite-p-23-13-3 ) to incorporate complex constraints into probabilistic models during learning .","in this article we explored a novel learning framework , posterior regularization , for incorporating rich constraints over the posterior distributions of word alignments ." +"despite the joint approach , our system is still efficient .","finally , we show that despite the joint approach , our system is still efficient ." +"in language modeling , perplexity is frequently used as a quality measure for language models built with n-grams extracted from text corpora .",perplexity is frequently used as a quality measure for language models built with n-grams extracted from text corpora . +and solve the optimization problem by using linear programming .,the optimization problem is addressed by using a linear programming model . +chambers and jurafsky introduced the concept of narrative event chains as a representation of structured event relation knowledge .,"chambers and jurafsky proposed unsupervised induction of narrative event chains from raw newswire texts , with narrative cloze as the evaluation metric ." +the evaluation method is the case insensitive ib-m bleu-4 .,the evaluation metric is case-sensitive bleu-4 . +"in this paper , we describe the system submitted to the semeval-2010 task .",this paper describes the pku_hit system on event detection in the semeval-2010 task . 
+gedigian et al and li and sporleder distinguished the literal and nonliteral use of a target expression in text .,the method by gedigian et al discriminates between literal and metaphorical use . +"for work on l-pcfgs estimated with em , see petrov et al , matsuzaki et al , and pereira and schabes .","for work on l-pcfgs using the em algorithm , see petrov et al , matsuzaki et al , pereira and schabes ." +aso is a recently proposed linear multi-task learning algorithm based on empirical risk minimization .,"aso is a recently proposed linear multi-task learning algorithm , which extracts the common structures of multiple tasks to improve accuracy , via the use of auxiliary problems ." +"in this paper , we introduce a novel automatic query expansion approach for image captioning to retrieve semantically more relevant captions .","in this paper , we present a novel query expansion approach for image captioning , in which we utilize a distributional model of meaning for sentences ." +we established a state-of-the-art baseline that utilizes a variety of features .,we first establish a state-of-the-art baseline with a rich feature set . +"in this paper , we proposed a novel environment for japanese text input based on aerial hand gestures .","in this paper , we propose koosho : an integrated environment for japanese text input based on aerial hand gestures ." +"we show , first , that both cnn features and word embeddings are good predictors of human judgments , and second , that these vectors can be further specialized in spatial knowledge if we update them by backpropagation when learning the model in the task of predicting spatial arrangements of objects .","we show that the model succeeds in this task and , furthermore , that it is capable of predicting correct spatial arrangements for unseen objects if either cnn features or word embeddings of the objects are provided ." 
+"in this paper , we propose a method to jointly model and exploit the context compatibility , the topic .","in this paper , we propose a generative model – called entity-topic model , to effectively join the above two complementary directions together ." +dropout is a regularization technique in which units and their connections are randomly dropped from the neural network during training .,dropout is a technique that involves randomly dropping units during training to prevent overfitting and co-adaptation of neurons . +for task c-f we operated on features automatically computed from raw text rather than using the tagged events and temporal expressions in the corpus .,"our system is notable in that for tasks c – f , they operated on raw text while all other systems used tagged events and temporal expressions in the corpus as input ." +mikolov et al have proposed to obtain cross-lingual word representations by learning a linear mapping between two monolingual word embedding spaces .,mikolov et al have also shown that distributed vector representations over large corpora in a continuous space model capture many linguistic regularities and key aspects of words . +"we applied our system to the xtag english grammar 3 , which is a large-scale fb-ltag grammar for english .","we constructed a type signature for the xtag english grammar , an existing broad-coverage grammar of english ." +"adversarial examples in neural image captioning crafted by show-and-fool highlight the inconsistency in visual language grounding between humans and machines , suggesting a possible weakness of current machine vision and perception machinery .","moreover , the adversarial examples in neural image captioning highlight the inconsistency in visual language grounding between humans and machines , suggesting a possible weakness of current machine vision and perception machinery ." 
+under the constraints of independently generated monolingual parse trees might be the main reason why “ syntactic ” constraints have not yet increased the accuracy of smt systems .,"as discussed in section 4 , these findings shed new light on why “ syntactic ” constraints have not yet helped to improve the accuracy of statistical machine translation ." +"in this paper , we describe a method of using document similarity measures to describe differences in behavior between native and non-native speakers of english .","in this paper , we describe a method of using document similarity measures to describe differences in behavior between native and non-native speakers of english in a writing task ." +"in this paper we focus on categories acquired from natural language stimuli , that is words .",in this paper we have presented a bayesian model of category acquisition . +"in this paper , we present five models for sentence realisation .","in this paper , we present different models for sentence realisation ." +"on the other hand , mem2seq is able to produce the correct responses .","on the other hand , mem2seq is able to produce the correct responses in this two examples ." +we train a regression model that predicts aggregated labels for unseen instances and compare the predictions to expert annotations .,we present an aggregation approach that learns a regression model from crowdsourced annotations to predict aggregated labels for instances that have no expert adjudications . +cui et al measured sentence similarity based on similarity measures between dependency paths among aligned words .,cui et al learned transformations of dependency paths from questions to answers to improve passage ranking . +"although this one-shot learning paradigm is very useful , it will never make an nlp system understand the natural language because it does not accumulate .",this kind of one-shot approach is useful but it does not usually perform well to various datasets or tasks . 
+"finally , zhu et al approach the scope learning problem via simplified shallow semantic parsing .",zhu et al also used the bioscope corpus and employed techniques developed for shallow semantic parsing for detecting scope . +"an nlp tool where the mwes can be employed is the urdu pargram grammar , which is based on the lexical-functional grammar formalism .","the extracted mwes are integrated into the urdu pargram grammar , a computational grammar for urdu running with xle and based on the syntax formalism of lfg ." +most of the following work focused on feature engineering and machine learning models .,most of the following works focused on feature engineering and machine learning models . +the system can not tell whether the user utterance corresponds to a dialogue act .,this process enables the system to understand user utterances based on the context of a dialogue . +"here , for textual representation of captions , we use fisher-encoded word2vec features .",for the representation of textual data we use both tfidf and the word embedding representation of the data . +"in section 3 . 5 , the majority of sentences require zero or few corrections .","as pointed out in section 3.5 , the majority of sentences require zero or few corrections ." +summarization systems that directly optimize the number of topic signature words during content selection have fared very well in evaluations .,summarization systems that directly optimize for more topic signatures during content selection have fared very well in evaluations . +we use the europarl english-french parallel corpus plus around 1m segments of symantec translation memory .,our nmt systems are trained on 1m parallel sentences of the europarl corpus for en-fr and en-de . +but existing automatic emotion detectors are restricted to identify only a small set of emotions .,many complex emotions are ignored by current automatic emotion detectors because they are not programmed to seek out these “ undefined ” emotions . 
+"in this paper , we present a supervised learning-based pronoun resolution system which incorporates coreferential information of candidates .",in this paper we have proposed a model which incorporates coreferential information of candidates to improve pronoun resolution . +we use the rules for reordering german constituent parses of collins et al together with the additional rules described by fraser .,we used the rules for reordering german constituent parses of collins et al together with the additional rules described by fraser . +these energy functions are encoded from design guidelines or learned from scene data .,these energy functions are encoded from interior design guidelines or learned from input scene data . +"te is a generic paradigm for semantic inference , where the objective is to recognize whether a target meaning can be inferred from a given text .",te is a task of determining if the truth of a text entails the truth of another text ( hypothesis ) . +"one is to find unknown words from corpora and put them into a dictionary , and the other is to estimate a model that can identify unknown words correctly , .","one is to acquire unknown words from corpora and put them into a dictionary , and the other is to estimate a model that can identify unknown words correctly , ." +we train a linear classifier using the averaged perceptron algorithm .,we use the voted perceptron algorithm as the kernel machine . +"we present the mineral ( medical information extraction and linking ) system for recognizing and normalizing mentions of clinical conditions , with which we participated in task 14 of semeval 2015 .","we described mineral , a system for extraction and normalization of disorder mentions in clinical text , with which we participated in task 14 of semeval 2015 ." 
+our experiment results demonstrate that our proposed system gives a significant performance improvement on nsw detection .,our experiment results demonstrate the effectiveness of our nsw detection method and the benefit of nsw detection for ner . +while automatic induction of plot representations has attracted considerable attention ( see cite-p-15-1-16 ) .,"automatically learning representations of book plots , as structured summaries of their content , has attracted much attention ( cf , cite-p-15-1-16 for a review ) ." +"to obtain their corresponding weights , we adapted the minimum-error-rate training algorithm to train the outside-layer model .","to train the feature weights , we made use of a novel two-phase training algorithm that incorporates a probabilistic training objective and standard minimum error training ." +in the unsupervised setting are comparable to the best reported values .,the results in the unsupervised setting is comparable to the best reported values . +"a synset is a set of mutual synonyms , which can be represented as a clique graph where nodes are words and edges are synonymy relations .",a synset is a set of synonyms that are interchangeable in some context . +"in this paper , we propose a general term-weighting learning framework , t weak , that learns the term-weighting function .","in this paper , we propose a novel framework that learns the term-weighting function ." +"in subtask b , participants must determine which type of irony .","in subtask b , participants must determine which type of irony a particular tweet contains ." +we describe the ie framework and the experimental setup used for comparing the various tagging strategies .,we describe the tagging strategies that can be found in the literature and evaluate their relative performances . 
+"on three popular benchmark datasets ( pku , msra and ctb6 ) , and the experimental results show that our model achieves the state-of-the-art performance .","experiments on pku , msra and ctb6 benchmark datasets show that our model outperforms the previous neural network models and state-of-the-art methods ." +we also measure overall performance with uncased bleu .,"generation quality is primarily evaluated with bleu , using sacrebleu ." +cite-p-18-1-4 combined a rule-based approach and machine learning .,cite-p-18-1-4 combine pattern matching and machine learning . +"in this paper , we adopt the method to weight features on an upper sequence labeling stage .","in this paper , we describe a novel approach to cascaded learning and inference on sequences ." +explicit semantic analysis is a variation on the standard vectorial model in which the dimensions of the vector are directly equivalent to abstract concepts .,explicit semantic analysis is a variation on the standard vector-space model in which the dimensions of the vector are directly equivalent to abstract concepts . +"similar approaches were applied in multiple other languages , including italian , german and basque .","these systems have been created for english , portuguese , italian and german ." +"on the other hand , glorot et al , proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion .","glorot et al , proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion ." +"in addition to improving the original k & m noisy-channel model , we create unsupervised and semi-supervised models of the task .","in addition to simplifying the task , k & m’s noisy-channel formulation is also appealing ." 
+"the encoder is implemented with a bi-directional lstm , and the decoder a uni-directional one .",the encoder units are bidirectional lstms while the decoder unit incorporates an lstm with dot product attention . +"in contrast , kernel methods allow for automatically generating all possible dependencies .","in this perspective , kernel methods are a viable approach to implicitly and easily explore feature spaces encoding dependencies ." +model parameters that maximize the loglikelihood of the training data are computed using a numerical optimization method .,"the model parameters , λ u , are estimated using numerical optimization methods to maximize the log-likelihood of the training data ." +"hdp-based wsi is superior to other topic model-based approaches to wsi , and indeed , better than the best-published results for both semeval datasets .","our hdp-based method outperforms all methods over the semeval-2010 wsi dataset , and is also superior to other topic modelling-based approaches to wsi based on the semeval-2007 dataset ." +we pre-processed the data to add part-ofspeech tags and dependencies between words using the stanford parser .,we used the stanford parser to create parse trees that modeled the language structure of patient and therapist utterances . +we evaluate our semi-supervised approach on the conll 2009 distribution of the penn treebank wsj corpus .,we also train and test unsupervised pos induction on the conll 2007 splits of the penn treebank using the hyper-parameter settings from ontonotes . +"szarvas et al , 2012 , hence we also relied on a method based on conditional random fields in our experiments .","szarvas et al , hence we also applied a method based on conditional random fields in our experiments ." +that will provide further insights into the characterization of preposition behavior .,"in addition , the task has generated considerable information for further examination of preposition behavior ." 
+"it is often beneficial to have access to more than one source form since different source forms can provide complementary information , e . g . , different stems ; and ( ii ) .","the motivation is that it is beneficial to have access to more than one source form since different source forms can provide complementary information , e.g. , different stems ." +"in this paper , we adapt a neural network joint model ( nnjm ) using l1-specific learner text .","in this paper , we perform nnjm adaptation using l1-specific learner text with a kl divergence regularized objective function ." +as well as an implementation of the approach will be made freely available .,it will be made freely available to other researchers . +"in this paper , we propose an approach based on linguistic knowledge for identification of aliases .","in this paper , we propose a linguistically grounded algorithm for alias detection ." +"we define sense annotation as a synonymy judgment task , following al-sabbagh et al .","we define sense annotation as a synonymy judgment task , following al-sabbagh et al , 2013 , 2014b ." +"we applied a supervised machine-learning approach , based on conditional random fields .",we used conditional random fields to conduct the automatic annotation experiments using our annotated corpus . +"cpra couples the classification tasks of multiple relations , and enables implicit data sharing and regularization .","by coupling different relations , cpra takes into account relation associations and enables implicit data sharing among them ." +"in this paper , we have described how mert can be employed to estimate the weights for the linear loss function .","in this paper , we extend mert and mbr decoding to work on hypergraphs produced by scfg-based mt systems ." 
+"in spelling error correction , cite-p-17-1-3 proposed employing a generative model for candidate generation .",cite-p-17-1-18 proposed a multi-level feature-based framework for spelling error correction including a modification of brill and moore’s model ( 2000 ) . +"from the external corpora , our new models produce significant improvements on topic coherence , document clustering and document classification tasks , especially on datasets with few or short documents .","meanwhile , document clustering and document classification results show that our models improve the document-topic assignments compared to the baseline models , especially on datasets with few or short documents ." +it is a more efficient re-implementation of the chart parser and generator of the lkb .,in these respects it is quite similar to the lkb parser-generator system . +"in this article , we have presented a work focusing on the extraction of temporal relations between medical events , temporal expressions and document creation time .","in this paper , we focus on the extraction of temporal relations between medical events ( event ) , temporal expressions ( timex3 ) and document creation time ( dct ) ." +cahill et al presents a set of penn-ii treebank-based lfg parsing resources .,cahill et al developed a method for automatic annotation of lfg f-structure on the penn-ii treebank . +"despite being a natural comparison and addition , previous work on attentive neural architectures have not considered hand-crafted features .","despite being a natural comparison and addition , previous work on attentive neural architectures do not consider hand-crafted features ." +we present a host of neural approaches and a novel semantic-driven model for tackling the guesstwo task .,we presented a host of neural models and a novel semantic-driven approach for tackling the task of guesstwo . 
+the tagger uses a bigram hmm augmented with a statistical unknown word guesser .,the tagger uses a bigram hidden markov model augmented with a statistical unknown word guesser . +"the french treebank is a syntactically annotated corpus 7 of 569,039 tokens .","the french treebank is a treebank of 21,564 sentences annotated with constituency annotation ." +the second decoding method is to use conditional random field .,as a sequence labeler we use conditional random fields . +we used the svm light package with a linear kernel .,we employ the ranking mode of the popular learning package svm light . +stroppa et al add source-side contextual features into a phrase based smt system by integrating context dependent phrasal translation probabilities learned using a decision-tree classifier .,stroppa et al added source-side contextual features to a state-of-the-art log-linear pb-smt system by incorporating contextdependent phrasal translation probabilities learned using decision trees . +"in such models , the target character can only influence the prediction .","in such models , the target character can only influence the prediction as features ." +"for each node p / , compute thickness hij of each subr ~ f & ant sij .","for each node p/ , compute thickness hij of each subr~f & ant sij in the following way : math-p-20-7-0" +"in the training data , we found that 50 . 98 % sentences labeled as “ should be extracted ” belongs to the first 5 sentences , which may cause .","in the training data , we found that 50.98 % sentences labeled as “ should be extracted ” belongs to the first 5 sentences , which may cause the trained model tends to select more leading sentences ." +"the parser performs a weighted deductive parsing , based on this deduction system .","our parser performs a weighted deductive parsing , based on this deduction system ." 
+chiang et al used features indicating problematic use of syntax to improve performance within hierarchical and syntax-based translation .,"chiang et al added thousands of linguistically-motivated features to hierarchical and syntax systems , however , the source syntax features are derived from the research above ." +"in this paper , we compare regularized winnow and winnow algorithms on text chunking .","in this paper , we apply this new method to text chunking ." +"lin et al analyzed the impacts of features extracted from contextual information , constituent parse trees , dependency parse trees , and word pairs .","in this subtask , lin et al and rutherford and xue explored rich features such as word-pairs , dependency rules , production rules and brown cluster pairs ." +"the topic of large scale distributed language models is relatively new , and existing works are restricted to n-grams only .","the topic of large-scale distributed language models is relatively new , and existing work is restricted to n-grams only ." +the stanford parser is used to extract the pos information .,the stanford parser was used to produce constituent structure trees . +"in this work , we present wikikreator , a system that is capable of generating content automatically .","to address the above-mentioned issues , we present wikikreator – a system that can automatically generate content for wikipedia stubs ." +all smt models were developed using the moses phrase-based mt toolkit and the experiment management system .,the smt systems were trained using the moses toolkit and the experiment management system . +"in this paper , we propose n-gram-based tense .",this paper explores document-level smt from the tense perspective . +ibm translation models have been hugely influential in statistical machine translation .,the ibm translation models ( cite-p-14-3-1 ) have been tremendously important in statistical machine translation ( smt ) . 
+we use the kernel version of the large-margin ranking approach from which solves the optimization problem in figure 3 below .,we used the kernel version of the large-margin ranking approach from which solves the optimization problem in figure 2 . +neural network methods have achieved promising results for sentiment classification .,neural networks have achieved promising results for sentiment classification . +evodag 3 is a genetic programming system specifically tailored to tackle classification and regression problems on very high dimensional vector spaces and large datasets .,evodag 4 is a genetic programming system specifically tailored to tackle classification and regression problems on very high dimensional vector spaces and large datasets . +"as a representative , chapman et al developed a simple regular expression-based algorithm to detect negation signals and identify medical terms which fall within the negation scope .",chapman et al created a simple regular expression algorithm called negex that can detect phrases indicating negation and identify medical terms falling within the negative scope . +research in cognitive science suggests that human meaning representations are grounded in our perceptual system and sensori-motor experience .,"ample re-search in cognitive science suggests that human meaning representations are not merely a product of our linguistic exposure , but are also grounded in our perceptual system and sensori-motor experience ." +since they employ features that are present in the majority of spoken dialogue systems and can be obtained with little or no computational overhead .,"these features are present in many spoken dialogue systems and do not require additional computation , which makes this a very cheap method to detect problems ." 
+we regard candidate extraction as a latent variable and train the two-stage process jointly with reinforcement learning .,"therefore , we treat candidate extraction as a latent variable and train these two stages jointly with reinforcement learning ( rl ) ." +"property norms are a valuable source of semantic information , and can potentially be applied to a variety of nlp tasks , but are expensive to obtain .","property norms have the potential to aid a wide range of semantic tasks , provided that they can be obtained for large numbers of concepts ." +we directly optimize f β using the support vector method for optimizing performance measures proposed by joachims .,we use the performance measure optimization framework proposed by joachims for optimizing these metrics . +feature weights are tuned using pairwise ranking optimization on the mt04 benchmark .,the feature weights for all approaches were tuned by using pairwise ranking optimization on the wmt12 benchmark . +"in this work , we present an approach for producing readable and cohesive .","in this work , we focus on the coherence and readability aspects of the problem ." +several studies showed that using a dictionary brings improvement for chinese word segmentation .,recent studies show that character sequence labeling is an effective method of chinese word segmentation for machine learning . +"in this paper , we have presented our deep learning-based approach to twitter sentiment analysis .",this paper describes our deep learning-based approach to sentiment analysis in twitter as part of semeval-2016 task 4 . +"since asia has great linguistic and cultural diversity , asian language resources have received much less attention than their western counterparts .","as an area of great linguistic and cultural diversity , asian language resources have received much less attention than their western counterparts ." 
+we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .,"in general , we could get the optimized parameters though minimum error rate training on the development set ." +"compared with this paradigm , we can stack our c-lstms to model multiple-granularity interactions .",our architecture can also capture the multiple granular interactions by several stacked coupled-lstms layers . +"as an slu model , semantic information must be added manually , since only syntactic structures can be induced automatically .","in particular , in order to be applicable as an slu model , semantic information must be added manually , since only syntactic structures can be induced automatically in this case ." +and our present work is the first to perform both identification and resolution of chinese anaphoric zero pronouns using a machine learning approach .,"to our knowledge , our work is the first to perform both identification and resolution of chinese anaphoric zero pronouns using a machine learning approach ." +bollen et al have focused on modeling public mood on a variety of axes to correlate with socio-economic factors and to predict the dow jones industrial average .,bollen et al explored the notion that public mood can be correlated to and even predictive of economic indicators . +"word2vec has been proposed for building word representations in vector space , which consists of two models , including continuous bag of word and skipgram .","word2vec has become a standard method that builds dense vector representations , which are the weights of a neural network layer predicting neighboring words ." +the uniform information density hypothesis suggests that speakers try to distribute information uniformly across their utterances .,the uniform information density hypothesis holds that speakers tend to maintain a relatively constant rate of information transfer during speech production . 
+we use a transformer model for all of our experiments .,we use the sockeye implementation of a transformer for all of our experiments . +centering theory argues that these syntactic positions have low salience in comparison with subject and object position .,centering and other discourse theories argue that topical entities are likely to appear in prominent syntactic positions such as subject or object . +we propose three variants of a selectional preference feature for string-to-tree statistical machine translation based on the selectional association measure of resnik .,we propose a selectional preference feature for string-to-tree statistical machine translation based on the information theoretic measure of resnik . +"second , we restrict the search to each of these constraints and compare the resulting translation .","then , we will compare the translation results when restricting the search to either of these constraints ." +we use the f 1 measure according to the parseval metric .,we used the labeled bracketing metric parseval . +"in this paper , we introduce multi-column convolutional neural networks ( mccnns ) to understand questions from three different aspects ( namely , answer path , answer context , and answer type .","in this paper , we introduce the multi-column convolutional neural networks ( mccnns ) to automatically analyze questions from multiple aspects ." +aspects of control structure a major aspect of the bild project is that specific parametrization of the deduction process is represented in the lexicon as well as in the grammar to obtain efficient structures of control .,a major aspect of the bild project is that a specific parametrization of the deduction process is represented in the lexicon as well as in the grammar to obtain efficient structures of control . 
+"framenet is a three-year nsf-supported project in corpus-based computational lexicography , now in its second year ( nsf iri-9618838 , tools for lexicon building ) .",framenet is a manually constructed database based on frame semantics . +"in this work , we present a novel beam-search decoder for grammatical error correction .",we present a novel beam-search decoder for grammatical error correction . +"in this paper , we present a new method to collect large-scale sentential paraphrases from twitter .","in this paper , we address a major challenge in paraphrase research — the lack of parallel corpora ." +multiwords expressions leads to an increase of between 7 . 5 % and 9 . 5 % in accuracy of shallow parsing of sentences that contain these multiword expressions .,experiments show that knowledge about multiword expressions leads to an increase of between 7.5 % and 9.5 % in accuracy of shallow parsing in sentences containing these multiword expressions . +"shared task , cite-p-25-1-5 achieved the best reported results on linearizing deep input representation .",our system gave the highest scores reported for the nlg 2011 shared task on deep input linearization ( cite-p-25-1-3 ) . +"furthermore , l2 regularization and dropout are adopted to avoid overfitting .","in order to avoid over-fitting , dropout regularization was also used ." +chart parsing is a commonly used algorithm for parsing natural language texts .,"chart parsing is a method of building a parse tree that systematically explores combinations based on a set of grammatical rules , while using a chart to store partial results ." +"using our proposed model , we observe improvements on two tasks , neural machine translation on the europarl english to french parallel corpora and text summarization .",we observe noticeable improvements over the baselines on machine translation and summarization tasks by using pointer softmax . 
+the task of semantic textual similarity measures the degree of semantic equivalence between two sentences .,the task of semantic textual similarity is aimed at measuring the degree of semantic equivalence between a pair of texts . +"however , little is known on their ability to reveal the underlying morphological structure of a word , which is a crucial skill for high-level semantic .","hence , little is known on their true ability to reveal the underlying morphological structure of a word and their semantic capabilities ." +"that for the relatively complex problem of why-qa , a significant improvement can be gained by the addition of structural information .","for why-questions , we also expect to gain improvement from the addition of structural information ." +we compare different approaches including a logistic regression classifier using similarity features .,we compare the feature-based logistic regression classifier to different convolutional neural network architectures . +tagging without supervision is a quintessential problem in unsupervised learning .,unsupervised pos tagging is a classic problem in unsupervised learning that has been tackled with various approaches . +which participated in the coarse-grained english all-words task and fine-grained english all-words task of semeval-2007 .,we participated in the semeval-2007 coarse-grained english all-words task and fine-grained english all-words task . +"in this paper , we propose a method to reduce the number of wrong labels generated by ds .","in this paper , we propose a method to reduce the number of wrong labels ." +such an architecture has been further extended to jointly model intent detection and slot filling in multiple domains .,such architectures have later been extended to jointly model intent detection and slot filling in multiple domains . 
+we propose a log-linear model to compute the paraphrase likelihood of two patterns .,"secondly , we propose a log-linear model for computing the paraphrase likelihood ." +"in this paper , we propose a novel semi-supervised approach to addressing the problem by transforming the base features into high-level features ( i . e . meta features ) .","in this paper , we have presented a simple but effective semi-supervised approach to learning the meta features from the auto-parsed data for dependency parsing ." +coreference resolution systems typically operate by making sequences of local decisions .,coreference resolution systems are typically trained with heuristic loss functions that require careful tuning . +type theory with records is an extension of standard type theory shown useful in semantics and dialogue modelling .,type theory with records is an extension of standard type theory shown to be useful in semantics and dialogue modelling . +the lstm model is based on an encoderdecoder framework .,its underlying seq2seq model consists of an lstm encoder and an lstm decoder . +"in this paper , we propose to combine the output from a classification-based system and an smt-based system to improve the correction .","in this work , we aim to take advantage of both the classification and the smt approaches ." +"from a theoretical perspective , it is accepted that negation has scope and focus , and that the focus-not just the scope-yields positive interpretations .","from a theoretical perspective , it is accepted that negation has scope and focus , and that the focus yields positive interpretations ." +bastings et al used neural monkey to develop a new convolutional architecture for encoding the input sentences using dependency trees .,bastings et al relied on graph-convolutional networks primarily developed for modelling graph-structured data .
+johnson and charniak proposed a tagbased noisy channel model which showed great improvement over boosting based classifier .,"johnson and charniak proposed a tag-based noisy channel model , which showed great improvement over a boosting-based classifier ." +"quality estimation of machine translation is an area that focuses on predicting the quality of new , unseen machine translation data without relying on human references .",quality estimation is the process of predicting the quality of a machine translation system without human intervention or reference translations . +cite-p-18-1-7 further suggest that the performance advantage of neural network based models is largely due to hyperparameter optimization .,cite-p-18-1-7 argue that the key to success lies in hyperparameter tuning rather than in the model’s architecture . +mann encoded specific inference rules to improve extraction of ceo in the muc management succession task .,mann encoded specific inference rules to improve extraction of information about ceos . +"incorporation , based on the technique developed at cmu .","incorporation ( cgi ) , based on the technique developed" +"in this paper , we present the spoken narratives and gaze dataset ( snag ) , which contains gaze information and spoken narratives co-captured from observers .","in this paper , we describe a new multimodal dataset that consists of gaze measurements and spoken descriptions collected in parallel during an image inspection task ." +"in this paper , we have proposed several methods to make the sequenceto-sequence model work competitively against conventional amr parsing .","in this paper , we describe a sequenceto-sequence model for amr parsing and present different ways to tackle the data sparsity problem ."
+"in this paper , we present that , word sememe information can improve word representation learning ( wrl ) , which maps words into a low-dimensional semantic space .","in this paper , we aim to incorporate word sememes into word representation learning ( wrl ) and learn improved word embeddings in a low-dimensional semantic space ." +"among them , the machine learning-based techniques showed excellent performance in many research studies .","among them , the machine learning-based techniques showed excellent performance in many recent research studies ." +empirical analysis on a human-labeled data set demonstrates the promising results of our proposed approach .,the experimental results demonstrate promising and reasonable performance of our approach . +brown clustering is an agglomerative algorithm that induces a hierarchical clustering of words .,brown clustering is a commonly used unsupervised method for grouping words into a hierarchy of clusters . +"in this paper , we have proposed a new hybrid kernel for re .","in this paper , we propose a new hybrid kernel for re ." +"chu et al presented the mapreduce framework for a wide range of machine learning algorithms , including the em algorithm .","chu et al have demonstrated that many standard machine learning algorithms can be phrased as mapreduce tasks , thus illuminating the versatility of this framework ." +"based on such representations , the classifier could identify the opinion spam .","finally , such representations are fed into a classifier to detect the review spam ." +"in this paper , we compare regularized winnow and winnow algorithms on text chunking .","in this paper , we described a text chunking system using regularized winnow ." +we parse all german and english articles with bitpar to extract verb-argument relations .,"for string-to-tree translation , we parse the german target side with bitpar ." +"we will consider parsing grammars in chomsky normal form ( cnf ) , i . e . 
, grammars with rules of the form .","we consider the problem of parsing non-recursive context-free grammars , i.e. , context-free grammars that generate finite languages ." +"if a reader has understood the text completely , their gaze behaviour is more reliable .",this indicated that gaze behaviour is more reliable when the reader has understood the text . +that requires relative frequency estimation .,this is precisely the relative frequency estimate we seek . +"our performance comparison shows that our voting techniques outperform traditional soft voting , as well as other systems submitted to the shared task .","our performance comparison shows how effective our voting strategies can be : they top the rankings in the semeval task , outperforming even elaborate ensemble strategies ." +we use bleu scores as the performance measure in our evaluation .,"while we do report bleu , the primary goal in our work is to achieve highest possible f1 score ." +"in this paper , we propose a novel task which is the joint prediction of word alignment and alignment types .","in this paper , we provide a new probabilistic model for word alignment where word alignments are associated with linguistically motivated alignment types ." +"in this paper , we have presented a novel method for aligning instructional text to videos .",we present a novel method for aligning a sequence of instructions to a video of someone carrying out a task . +one method that has been successfully applied to a number of linguistic problems is the winnow algorithm .,one method that has been quite successful in many applications is the snow architecture . +weighting and part-of-speech tagging are used to support the identification of words that are highly descriptive in each sentence .,idf weighting and part-of-speech tagging are applied on the examined sentences to support the identification of words that are highly descriptive in each sentence . 
+we built a 5-gram language model from it with the sri language modeling toolkit .,"for all data sets , we trained a 5-gram language model using the sri language modeling toolkit ." +corston-oliver et al use a classification method to measure machine translation system quality at the sentence level as being human-like translation or machine translated .,corston-oliver et al treated the evaluation of mt outputs as classification problem between human translation and machine translation . +we use an implementation based on blocks and theano .,we use an implementation based on blocks and theano for evaluation . +"we present marian , an efficient and self-contained neural machine translation framework with an integrated automatic differentiation engine based on dynamic computation .","we introduced marian , a self-contained neural machine translation toolkit written in c++ with focus on efficiency and research ." +we investigate the use of character-level translation models to support the translation from and to under-resourced languages .,"we explore the fact that many poorly resourced languages are closely related to well equipped languages , which enables low-level techniques such as character-based translation ." +"in this work , we present an approach to feed generic cues into the training process of such networks , leading to better generalization abilities .","in this paper , we study how to incorporate extrinsic cues into the network , beyond just generic word embeddings ." +worst scaling ( bws ) is an alternative method of annotation that is claimed to produce high-quality annotations .,"best–worst scaling ( bws ) is a less-known , and more recently introduced , variant of comparative annotation ." +to integrate multiple tk models into a single model we apply a classifier stacking approach .,to integrate multiple tk representations into a single model we apply a classifier stacking approach . 
+"semantic role labelling is the task of identifying the predicates in a sentence , their semantic arguments and the roles these arguments take .",semantic role labeling is the task of locating and labeling the semantic arguments to predicates . +"kalchbrenner and blunsom use top-level , composed distributed representations of sentences to guide generation in a machine translation setting .",kalchbrenner and blunsom utilize a simple convolution model to generate phrase embeddings from word embeddings . +in section 6 considers the implications of our experimental results .,discussion in section 6 considers the implications of our experimental results . +"to use this new notion of s-relevance , we have published the annotated s-relevance corpus used in this paper .","to enable other researchers to use this new notion of s-relevance , we have published the annotated s-relevance corpus used in this paper ." +this strategy is taken by some of the previously built dialogue systems that integrate task-oriented dialogues and chat-oriented dialogues .,it has been tried to combine chat-oriented dialogue systems with taskoriented dialogue systems . +we will describe and evaluate two compilation approaches to approximating a typed unification grammar .,we will describe and evaluate two approaches to this compilation problem . +"wsd has been recognized as one of the most important subjects in natural language processing , especially in machine translation , information retrieval , and so on .","word sense disambiguation is one of the large open problems in the field of natural language processing , and in recent years has attracted considerable research interest ." +"word representations derived from unlabeled text have proven useful for many nlp tasks , e . g . , part-of-speech ( pos ) .","word representations have proven useful for many nlp tasks , e.g. , brown clusters as features in dependency parsing ( cite-p-15-3-5 ) ." 
+learning with confidence can improve standard methods .,our confidence based approach can be used to improve these tasks . +"zheng et al proposed a gated attention neural network model to generate comments for news article , which addressed the contextual relevance and the diversity of comments .",zheng et al first paid attention to generating comments for news articles by proposing a gated attention neural network model to address the contextual relevance and the diversity of comments . +"stroppa et al added source-side context features to a phrase-based translation system , including conditional probabilities of the same form that we use .",stroppa et al added source-side contextual features to a state-of-the-art log-linear pb-smt system by incorporating contextdependent phrasal translation probabilities learned using decision trees . +"for any pcfg math-w-7-1-0-40 , there are equivalent ppdts .",a pcfg math-w-3-1-3-146 is reduced if math-w-3-1-3-154 is reduced . +the recurrent continuous translation models proposed by kalchbrenner and blunsom also adopt the recurrent language model .,kalchbrenner and blunsom introduced recurrent continuous translation models that comprise a class for purely continuous sentence-level translation models . +"so far , they have been quite successfully applied to part-of-speech tagging , syntactic parsing , semantic role labeling , opinion mining , etc .","so far , they have been quite successfully applied to 56 part-of-speech tagging , syntactic parsing , semantic role labeling , opinion mining , etc ." +we employ the stacked denoising auto-encoder to build the corpus-based classifier .,we use stacked denoising autoencoder to build the corpus-based model . +"we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .",our approach to relation embedding is based on a variant of the glove word embedding model .
+we used 300-dimensional pre-trained glove word embeddings .,we used crfsuite and the glove word vector . +"in this report , we show that this increased power can be used to define the above-mentioned linguistic phenomena .","in this paper , we have shown that this extra formal power can be used in nl processing ." +deep and non-deep learning approaches for solving vqa have also been proposed .,deep learning models in various forms have been the standard for solving vqa . +blei and mcauliffe and ramage et al used document labels in supervised setting .,blei and mcauliffe proposed supervised lda that can handle sentiments as observed labels . +we used the hindencorp monolingual corpus as the monolingual hindi reference corpus .,we used the hindmono corpus which contains roughly 45 million sentences to build our language model in hindi . +on the shelf above the fridge is in this context preferable to the white powder .,the salt behind the corn flakes on the shelf above the fridge is in this context preferable to the white powder . +bleu is calculated as the geometric mean of n-grams comparing a machine translation and a reference text .,"bleu and nist are calculated as the geometric mean of n-grams multiplied by a brevity penalty , comparing a machine translation and a reference text ." +we experimented with two available relation extraction ( re ) tools .,we tried two available re tools for extracting semantic relations from scientific publications . +"in this paper , we discuss inter-dialect mt in general and cantonese-mandarin mt .","in this paper , we discuss inter-dialect mt in general and cantonese-mandarin mt in particular ." 
+"hence , we introduce an attention mechanism to extract the words that are important to the meaning of the post , and aggregate the representation of those informative words to form a vector .","in order to amplify the contribution of important words in the final representation , we use a context-aware attention mechanism , that aggregates all the intermediate hidden states using their relative importance ." +one of the important open questions in natural language generation is how the common rule-based approaches to generation can be combined with recent insights from statistical natural language processing .,"one of the important open questions in natural language generation is how the common , rulebased approaches to generation can be combined with recent insights from statistical nlp ." +"in this paper , we propose an endto-end deep architecture to capture the strong interaction .","in this paper , we propose a deep architecture to model the strong interaction of sentence pair with two coupled-lstms ." +the experiments not only show that our system achieves higher f1-measure than other state-of-the-art systems .,the experiments show that our framework is effective ; it achieves higher f1-measure than three state-of-the-art systems . +a recent study shows that the technique behind word2vec is very similar to implicit matrix factorization .,it has been shown that the skip-gram with negative sampling algorithm in word2vec corresponds to implicit factorization of the pmi matrix . +"we compare our system with the rule-based approach aris , the purely statistical approach kazb , and the mixed approach uiuc system .",we compare our method with the template-based method and the verb-categorization method . +"in this work , we instead derive sense vectors by embedding the graph structure of a semantic network .",the algorithm takes word vectors and uses them and the network structure to induce the sense vectors . 
+word embeddings have shown to capture synonyms and analogies .,extensive experiments have leveraged word embeddings to find general semantic relations . +unsupervised word embeddings trained from large amounts of unlabeled data have been shown to improve many nlp tasks .,previous work has shown that unlabeled text can be used to induce unsupervised word clusters that can improve performance of many supervised nlp tasks . +key component is the so-called alignment model .,the key component is the alignment model . +bykh and meurers presented an ensemble classifier based on lexicalized and non-lexicalized local syntactic features .,bykh and meurers systematically explored nonlexicalized and lexicalized context-free grammar production rules . +we use weight tying to limit the search space for parameters .,we use weight tying between target and output embeddings . +"the final scoring function also incorporates a 5-gram language model trained on a subset of common crawl , estimated with kneser-ney smoothing using kenlm .","for all systems we compared , the language model is a 5-gram kenlm , which uses modified kneser-ney smoothing and tuning is performed to maximize the bleu score using minimum error rate training ." +serban et al . ( 2017 ) further exploit an utterance latent variable in the hierarchical rnns by incorporating the variational autoencoder ( vae ) framework .,"the vhred model ( serban et al. , 2017 ) integrates the vae with the hred to model twitter and ubuntu irc conversations by introducing an utterance latent variable ." +"target identification show that rnns , without using any hand-crafted features , outperform feature-rich crf-based models .","a comparison between models demonstrates that rnns outperform crfs , even when they use word embeddings as the only features ." 
+and bastings et al extend the use of graph convolutional network to nlp tasks .,bastings et al relied on graph-convolutional networks primarily developed for modelling graph-structured data . +foster et al used phrase pairs instead of sentences and learned weights for them using in-domain features based on word frequencies and perplexities .,"yasuda et al and foster et al ranked the sentence pairs in the general-domain corpus according to the perplexity scores of sentences , which are computed with respect to in-domain language models ." +"for the image labels , we use the representation of the last layer of the vgg neural network .","for image features , we use the precomputed features provided by faghri et al , which are extracted from the fc7 layer of vgg-19 ." +it has been shown in previous work on relation extraction that the shortest path between any two entities captures the the information required to assert a relationship between them .,it has been shown in previous work on relation extraction that the shortest dependency path between any two entities captures the information required to assert a relationship between them . +"here derive an argument graph from the complete argument web ( cite-p-17-1-4 ) , a large ground-truth database consisting of about 50 , 000 argument units .","to maximize size and heterogeneity , we here refer to the argument web ( cite-p-17-1-4 ) , which is to our knowledge the largest ground-truth argument database available so far ." +"using this setting , we show that linguistic cues and conversational patterns extracted from the first 20 seconds of a team .",we show that such cues have predictive power even when extracted from the first 20 seconds of the conversations . +"each utterance u is accompanied by syntax , a syntactic analysis in penn treebank format .",each sentence is linguistically analyzed by a pcfg-la parser trained on the penn treebank . 
+we propose a metric based on symmetric kl divergence to filter out the highly divergent training instances .,we propose to use the symmetric kl-divergence metric to measure the tag distribution divergence . +"many of these systems exploit linguistically-derived syntactic information either on the target side , the source side , or both .","syntax-based models either use linguistic annotation on the source language side , target language side or are syntactic in a structural sense only ." +blacoe and lapata compare different arithmetic functions across multiple representations on a range of compositionality benchmarks .,blacoe and lapata compare several types of vector representations for semantic composition tasks . +"while each of these three sources of world knowledge has been shown to improve coreference resolution , the improvements were typically obtained by incorporating world knowledge ( as features ) into a baseline resolver composed of a rather weak coreference model .","while world knowledge has been shown to improve learning-based coreference resolvers , the improvements were typically obtained by incorporating world knowledge into a fairly weak baseline resolver ." +barzilay and mckeown used a corpus-based method to identify paraphrases from a corpus of multiple english translations of the same source text .,barzilay and mckeown presented an unsupervised learning approach to extract paraphrases of words and phrases from different english translations of the identical source language sentences . +this paper presents the hitsz-icrc system designed for the qa tempeval challenge .,this paper described the method hitsz-icrc system used for qa tempeval challenge . 
+"the desired output is a document argumentation graph structure , such as the one in figure 1 , where propositions are denoted by letter subscripts , and the associated argumentation graph shows their types and support relations between them .","the desired output is a mapping from terms to their corresponding hypernyms , which can naturally be represented as a weighted bipartite graph ( term-label graph ) ." +dong et al perform targeted sentiment classification by using a recursive neural network to model the transmission of sentiment signal from opinion baring expressions to a target .,dong et al used dependency parsing for twitter sentiment classification to find the words syntactically connected to the target of interest . +"supervised approaches to dependency parsing have been very successful for many resource-rich languages , where relatively large treebanks are available .",supervised approaches to dependency parsing have been successful for languages where relatively large treebanks are available . +"in this paper , we describe the tagging strategies that can be found in the literature .",we describe the tagging strategies that can be found in the literature and evaluate their relative performances . +"the representative ml approaches used in ner are hidden markov model , me , crfs and svm .","some of the very effective ml approaches used in ner are me , crf and svm ." +"for domain-specific translation tasks , we exploited a normalized correlation method to spot the translation equivalents .","to remedy the above mentioned effects , we extended the normalized frequency of cite-p-12-1-0 to a normalized correlation criterion to spot translation equivalents ." +inui et al proposed a rule-based system for text simplification aimed at deaf people .,inui et al proposes a rule-based system for text simplification aimed at deaf people . 
+"in addition , we used word category information of a chinese thesaurus .","in addition , we used word category information of a chinese thesaurus for verb disambiguation ." +experimental results show that the proposed approach can outperform the baseline .,the experimental results show the effectiveness of the proposed approach . +that is based on the idea that plausible values for a given field could be inferred from the context provided by the other fields in the record .,the model is based on the idea that missing or corrupted values for one field can be inferred from values in other fields of the record . +standard english lm benchmarks are the penn treebank and the 1 billion word benchmark .,"standard lm benchmarks in english include the penn treebank , the 1 billion word benchmark , and the hutter prize data ." +work suggest that grammatical features can play a role in predicting reading difficulty levels .,this phenomenon suggests that grammatical features may play a more important role in predicting and measuring l2 readability . +correctly resolving these references is critical yet challenging for artificial agents .,reliably resolving these references is critical for dialogue success . +this is the first reported application of nli to non-english data .,in this work we have presented the first application of nli to non-english data . +experiments also demonstrate that nmt is more effective for incorporating the source-side monolingual data .,researchers have proven that the target-side monolingual data can greatly enhance the decoder model of nmt . +"in this paper , we have analyzed the state of the art in order to clarify why novel text .",in this paper we first present an in-depth analysis of the state of the art in order to clarify this issue . +borschinger et al . ( 2011 ) ’ s approach to reducing the problem of grounded learning of semantic parsers to pcfg induction .,börschinger et al .
( 2011 ) introduced an approach to grounded language learning based on unsupervised pcfg induction . +"the training data for the dependency model was created by first supertagging the sentences in sections 2-21 , using the supertagger described in clark and curran .",the derivations licenced by a lexical category sequence were created using the ccg parser described in clark and curran . +school of thought analysis has been identified an important fine-grained scientific knowledge discovery task .,school of thought analysis is an important yet not-well-elaborated scientific knowledge discovery task . +"we describe b aye s um , an algorithm for performing query-focused summarization .","we present b aye s um ( for “bayesian summarization” ) , a model for sentence extraction in query-focused summarization ." +we adopt here is to compile the model of a classifier into a weighted finite-state transducer ( wfst ) .,we do this by compiling the rules resulting from an adaboost classifier into a finite-state transducer . +"by leveraging these representations and lexical-semantic knowledge , we put forward a semantic similarity measure with state-of-the-art performance on multiple datasets .","we evaluate our approach on word similarity and relational similarity frameworks , reporting state-of-the-art performance on multiple datasets ." +"shimbo and hara considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , using a discriminative learning model .","shimbo and hara and hara et al considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , by using a discriminative learning model ."
+"decision making process during natural language generation can be vastly reduced , because the ebl method supports adaption of a nlg system to a particular language use .","the main advantage for the proposed new method for nlg is that the complexity of the grammatical decision making process during nlg can be vastly reduced , because the ebl method supports the adaption of a nlg system to a particular use of a language ." +"lacoste-julien et al , 2006 ) created a discriminative model able to model 1-to-1 , 1-to-2 and 2-to-1 alignments for which the best results were obtained using features based on symmetric hmms trained to agree , and intersected model 4 .","moore et al , 2006 , introduced a discriminative model of 1-to-n and m-to-1 alignments , and similarly to the best results were obtained using hmms trained to agree and intersected model 4 ." +"one of its main goals , the system suggests a set of domain-free rules to help the human annotator in scoring semantic equivalence of two sentences .","meanwhile , the system domain-free rules aim to help the human annotator in scoring semantic equivalence of sentence pair ." +we use the whole penn treebank corpus as our data set .,we use penndiscourse treebank and penn treebank data through this entire work . +"as each edge in the confusion network only has a single word , it is possible to produce inappropriate translations such as ¡° .","as each edge in the confusion network only has a single word , it is possible to produce inappropriate translations such as ¡°he is like of apples¡± ." +"in this approach , source , target tree structures are used for model training .","in this approach , tree structures for the source , target , or both are used for model training ." 
+"for task-specific ranking , we propose to extract clickthrough data and incorporate it with dedicated training data .","a critical problem for the task-specific ranking is training data insufficiency , which may be solved by using the data extracted from click log ." +the systems were tuned using a small extracted parallel dataset with minimum error rate training and then tested with different test sets .,systems were optimized on the wmt08 french-english development data using minimum error rate training and tested on the wmt08 test data . +our previous work models the sentences in the weighted matrix factorization framework .,we used weighted textual matrix factorization to model the semantics of the sentences . +"the most widely used are word error rate , position independent word error rate , the bleu score and the nist score .","there exists a variety of different metrics , eg , word error rate , position-independent word error rate , bleu score , nist score , meteor , gtm ." +lin and hovy proposed the idea of extracting topic-based signature terms for summarization .,lin and hovy first introduced topic signatures which are topic relevant terms for summarization . +"in a more recent work , sennrich et al carry out translation from english to german while controlling the degree of politeness .",sennrich et al proposed a method to control the level of politeness in target sentence in english-to-german translation . +"costa and branco showed that aspectual indicators improve temporal relation classification in tempeval challenges , which emerged in conjunction with timeml and timebanks .",costa and branco explore the usefulness of a wider range of explicitly aspectual features for temporal relation classification . +"construction , this paper addresses semantic modeling of relational patterns .","in this paper , we addressed the semantic modeling of relational patterns ." 
+our baseline is a state-of-the-art smt system which adapts bracketing transduction grammars to phrasal translation and augment itself with a maximum entropy based reordering model .,our baseline system is a state-of-the-art smt system which adapts bracketing transduction grammars to phrasal translation and equips itself with a maximum entropy based reordering model . +bhargava and kondrak propose a reranking approach that uses supplemental transliterations to improve grapheme-to-phoneme conversion of names .,bhargava and kondrak present a method for applying transliterations to grapheme-to-phoneme conversion . +"many models have been proposed for sequence labeling tasks , such as hidden markov models , conditional random fields , max-margin markov networks and others .","various sequence labeling models have been proposed , like hidden markov models , structured perceptron , conditional random fields and svm-hmm ." +"their technique shares similar limitations with the work presented in nepveu et al , since it requires pre-existent models estimated in batch mode .","the work by nepveu et al constitutes a domain adaptation technique and not an online learning technique , since the proposed cache components require pre-existent models estimated in batch mode ." +"in this paper , we propose an automatic quantitative expansion method for a sentence set that contains sentences of the same meaning .","in this paper , we apply syntactic substitution for generating sentences , which corresponds to transfer-based machine translation ." +"recently , deep learning has also been introduced to propose an end-to-end convolutional neural network for relation classification .","in particular , many neural network approaches have been proposed and shown better performance in relation classification and relation extraction ." +the task is organized based on some research works .,the tasks are organized based on some research works . 
+the current state-of-the-art in machine translation is phrase-based smt .,the current state-of-the-art in smt are phrase-based systems . +"called lexical sets ( cite-p-12-1-5 ) , the model ’ s performance improved in a preliminary experiment for the three most difficult verbs .","in addition , using lexical sets improves the model ’ s performance on three of the most challenging verbs ." +socher et al proposed the recursive neural network that has been proven to be efficient in terms of constructing sentences representations .,socher et al proposed a feature learning algorithm to discover explanatory factors in sentiment classification . +our model has better capability to perform long-distance reordering and is more suitable for translating long sentences .,we also find that the rules used in our model are more suitable for long-distance reordering and translating long sentences . +"with gold standard utterance segmentation , much work exists on detecting disfluencies .","as a separate task , there has been extensive work on utterance segmentation ." +central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context .,"central to our approach is the construction of high-accuracy , high-coverage multilingual wikipedia entity type mappings ." +"by contrast , the construction-specific transformations targeting coordination and verb groups appear to have a more language-independent effect ( for languages to which they are applicable .","by contrast , the construction specific transformations appear to be more sensitive to parsing strategy but have a constant positive effect over several languages ." 
+"this year , again we were unable to follow the methodology outlined in graham et al for evaluation of segment-level metrics because the sampling of sentences did not provide sufficient number of assessments of the same segment .","this year , we were unable to follow the methodology outlined in graham et al for evaluation of segment-level metrics because the sampling of sentences did not provide sufficient number of assessments of the same segment ." +"we use the penn discourse treebank , a corpus annotated at the discourse level upon the penn treebank , giving access to a gold syntactic annotation , and composed of articles from the wall street journal .","the penn discourse treebank we use the penn discourse treebank , the largest available manually annotated corpora of discourse on top of one million word tokens from the wall street journal ." +"for englishto-arabic translation , we achieve a + 1 . 04 bleu average improvement by tiling our model .","for englishto-arabic translation , our model yields a +1.04 bleu average improvement over a state-of-the-art baseline ." +"in this paper , we tackle the problem of decoding in neural machine translation .","in this paper , we solely focus on the problem of decoding given a trained neural machine translation model ." +"our smt systems are built with the moses toolkit , while word alignment is produced by the berkeley aligner .","all our smt systems are built with the moses toolkit , and word alignments are generated by the berkeley aligner ." +we follow and formalize semantic inference as an integer linear program .,we follow the inference approach in and formalize this process as an integer linear program . +"in this paper , we study and design models for extracting atfs from a sentence with respect to another one .","in this paper , we study and design models for learning to detect ancillary information in the context of pi ." 
+the system output is evaluated using the meteor and bleu scores computed against a single reference sentence .,"based on bleu , it computes n-gram precision of the system output against reference sentences ." +one such feature is the knowledge of the semantic clusters in a domain .,one such feature is the constraint that two case elements with the same case do not modify a verb . +"in the argument reasoning comprehension task , the organizer extracts the instances from room for debate .",the argument comprehension reasoning task aims to reconstruct and analyze the argument reasoning . +and then we extract subtrees from dependency parsing trees .,"then , from dependency trees in the data , we extract different types of subtrees ." +finkel and manning proposed a crf-based constituency parser for nested named entities such that each named entity is a constituent in the parse tree .,finkel and manning propose a crf-based constituency parser which takes each named entity as a constituent in the parsing tree . +we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings .,we use glove embeddings for the english tasks and fasttext embeddings for all newswire tasks . +most spell checking systems are based on a noisy channel formulation .,a typical query spelling correction system employs a noisy channel model . +we present the parsing algorithm as a deductive system .,we also present a lwfg parser as a deductive system . +"for the evaluation of translation quality , we used the bleu metric , which measures the n-gram overlap between the translated output and one or more reference translations .",we used the machine translation quality metric bleu to measure the similarity between machine generated tweets and the held out tests sets . 
+implicit task-based feedback that has been gathered in a cross-lingual search task can be used successfully to improve task-specific metrics .,"in contrast , we successfully utilize implicit task-based feedback collected in a cross-lingual search task to improve task-specific and machine translation quality metrics ." +we suggest a shift in focus from constraining locality and complexity through restrictions that all trees in a tree set must adjoin within a single tree or tree set to constraining locality and complexity through restrictions on the derivational distance between trees in the same tree set in a valid derivation .,in this paper we suggest a shift in focus from constraining locality and complexity through tree-and set-locality to constraining locality and complexity through restrictions on the derivational distance between trees in the same tree set in a valid derivation . +"we consider morphological similarity , paths in wordnet , and cosine similarity of word2vec embeddings .","we then consider approaches drawing off of word2vec , paragraph vectors , and skip-thoughts ." +"for instance , klein and manning introduced an approach where the objective function is the product of the probabilities of a generative phrase structure and a dependency parser .","for instance , klein and manning introduced an approach where the objective function is the product of the probabilities of a generative phrase-structure and a dependency parsers ." +"the third baseline , a bigram language model , was constructed by training a 2-gram language model from the large english ukwac web corpus using the srilm toolkit with default good-turing smoothing .","the additional baseline , bigram baseline , is a bigram-based language model trained on the bnc with srilm , using the standard language model settings for computing log-probabilities of bigrams ." 
+"in the learning process , the uncertainty of instance labels can be taken into account .","in multi-instance learning , the uncertainty of instance labels can be taken into account ." +this paper describes our participation in the semeval 2016 sts shared task .,this paper has described our system submission to the semeval 2016 sts shared task . +"at l imsi , broadcast news transcription systems have been developed for english , french , german , mandarin and portuguese .","versions of the l imsi broadcast news transcription system have been developed in american english , french , german , mandarin and portuguese ." +and we report the performance of a phrase-based statistical model ( cite-p-17-1-19 ) estimated using these monolingual features .,here we review the parameters of the standard phrase-based translation model ( cite-p-17-1-20 ) . +"in this paper , we present an approach for the unsupervised knowledge extraction for taxonomies of concepts .",this paper presents an approach to the problem of taxonomy construction from texts focusing on the hyponym-hypernym relation between two terms . +this is a high accuracy tagging task often performed using a sequence classifier .,this is implemented with a discriminative model using a crf sequence tagger . +"for efficiency , we follow the hierarchical softmax optimization used in word2vec .","here , for textual representation of captions , we use fisher-encoded word2vec features ." +we use the glove algorithm to obtain 300-dimensional word embeddings from a union of these corpora .,our approach to relation embedding is based on a variant of the glove word embedding model . 
+"it is widely acknowledged in the nlp community that multiword expressions are a challenge for many nlp applications , due to their idiosyncratic behaviour at different levels of linguistic description .","it is widely acknowledged that good mwe processing strategies are necessary for nlp systems to work effectively , since these kinds of word combinations are very frequent in both text and speech ." +"with feature-based methods , we proposed an event-based time label propagation model called confidence boosting in which timestamps are propagated on a document-event bipartite graph according to relative temporal relations between documents and events for dating documents .","based on this intuition , we proposed an event-based time label propagation model called confidence boosting in which time label information can be propagated between documents and events on a bipartite graph ." +"in this work , we aim to relieve the data acquisition bottleneck associated with automatic image annotation .","in this paper , we describe a new approach for the collection of image annotation datasets ." +goldstein-stewart et al performed a study on cross-topic authorship attribution by concatenating the texts of an author from different genres on the same topics .,goldstein-stewart et al also carried out some cross-topic experiments by concatenating the texts of an author from different genres . +"within the latent space , our model exploits the latent space to determine the features that are important for a particular context , and adapt the original ( out-of-context ) dependency-based feature vector of the target word .","the factorization model allows us to determine which dimensions are important for a particular context , and adapt the dependency-based feature vector of the word accordingly ." 
+"lemmatization is the process of reducing a word to its base form , normally the dictionary lookup form ( lemma ) of the word .",lemmatization is the process to determine the root/dictionary form of a surface word . +"to address the second problem , we adopt convolutional architecture to automatically learn relevant features without complicated nlp preprocessing .","to address the latter problem , we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features ." +szpektor et al describe the tease method for extracting entailing relation templates from the web .,szpektor et al defined a fully unsupervised learning algorithm for web-based extraction of entailment relations . +the most widely used approach derives phrase pairs from word alignment matrix .,a popular approach are phrase-based models which translate short sequences of words together . +short-term memory is known to have severe capacity limitations of perhaps no more than three to four distinct elements .,but this kind of memory is known to have a severely constrained storage capacity ¡ª possibly constrained to as few as three or four distinct elements . +rasooli and collins proposed a method to induce dependency parser in tl using a dependency parser in sl and a parallel corpus .,rasooli and collins proposed a method to induce dependency parser in the target language 100 using a dependency parser in the source language and a parallel corpus . +"through experiments on real-world datasets , we demonstrate that kgeval best estimates kg accuracy .","through extensive experiments on real datasets , we demonstrated effectiveness of kgeval ." +we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .,we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing . 
+tan et al employ local feature selection and explicit discrimination of positive and negative features to ensure the performance of trigger type determination .,tan et al used a local feature selection method to ensure the performance of trigger classification and applied multiple levels of patterns to improve their coverage in argument classification . +our experiments on the instructional corpus consider the same 26 primary relations used in and also treat the reversals of non-commutative relations as separate relations .,"following on the instructional corpus , we use 26 relations , and treat the reversals of non-commutative relations as separate relations ." +in cite-p-3-15-8 and used in all of the previous repeated evaluations based upon the testing .,to avoid those given in cite-p-3-15-8 and used in all of the previous repeated evaluations based upon the testing cor-work . +iyer acknowledges support from the microsoft research ph . d fellowship .,rishabh iyer acknowledges support from the microsoft research ph.d fellowship . +"in this paper , we present a predicate-argument structure analysis that simultaneously resolves the anaphora of zero pronouns .",this paper presents a predicate-argument structure analysis that simultaneously conducts zero-anaphora resolution . +"we used the uiuic dataset which contains 5,952 factoid questions 4 to train a multi-class question classifier .",in this task we used the trec question dataset 10 which contains 5952 questions . +"for syntactic and semantic dependency parsing , people usually define a very high-dimensional feature .",discriminative models in syntactic and semantic parsers usually use millions of features . +combining averaged scores with features based on confusion frequencies improves prediction quality .,"however , adding confusion matrix features improves the predictive model ( section 4 ) ." 
+"the methods employed for gathering the data , preparation and compilation of dataset , used in offenseval shared task is described in zampieri et al .",the data collection methods used to compile the dataset provided in offenseval is described in zampieri et al . +in this paper we presented a method to discover asymmetric entailment relations between verbs .,in this paper we investigate a novel method to detect asymmetric entailment relations between verbs . +pitler and nenkova showed how syntactic features could be used in disambiguating both usage ambiguity and sense ambiguity .,pitler and nenkova showed that syntactic features extracted from constituent parse trees are very useful in disambiguating discourse connectives . +"trained and tested on data derived from the chinese treebank , our model achieves a classification accuracy of close to 90 % .",we trained a statistical model using data derived from the chinese treebank and reported promising preliminary results . +"escudero et al tested the supervised adaptation setting on the dso corpus , which had examples from the brown corpus and wall street journal corpus .","escudero et al tested the supervised adaptation scenario on the dso corpus , which had examples from the brown corpus and wall street journal corpus ." +"as mentioned above , moldovan et al showed that the sense collocation of ncs is a key feature when interpreting ncs .",the sense collocation method of moldovan et al is based on the pair of word senses of nc constituents . +kalchbrenner et al developed a cnnbased model that can be used for sentence modelling problems .,kalchbrenner et al introduced a dynamic k-max pooling to handle variable length sequences . 
+the svm is based on discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes .,the svm is based on discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes . +we propose to formalize a scene ( consisting of a set of objects with various properties and relations ) as a labeled directed graph and describe content selection ( which properties to include in a referring expression ) .,"we propose to formalize a scene ( i.e. , a domain of objects and their properties and relations ) as a labeled directed graph and describe the content selection problem ( which properties and relations to include in a description for an object ? )" +it forms a hierarchy of subgraphs that are recursively included in one another and whose cohesiveness and size respectively increases and decreases with k .,it forms a hierarchy of nested subgraphs whose cohesiveness and size respectively increase and decrease with k . +we use julius as the lvcsr and julian as the dssr .,we used julius as the lvcsr and julian as the dssr . +"zelenko et al , 2003 ) devised a kernel on shallow parse trees to detect relations between named entities , such as the person-affiliation relation between a person name and an organization name .","zelenko et al described a recursive kernel based on shallow parse trees to detect personaffiliation and organization-location relations , in which a relation example is the least common subtree containing two entity nodes ." +the relevance of these features is supported by mohammad et al that produced the top ranked system at semeval-2013 and semeval-2014 for sentiment classification of tweets using emotional lexicons .,"mohammad et al leverage a large sentiment lexicon in a svm model , achieving the best results in the semeval 2013 benchmark on sentence-level sentiment analysis ."
+pantel and ravichandran extended his work by including all syntactic dependency relations for each considered noun .,"recently , pantel and ravichandran extended this approach by making use of all syntactic dependency features for each noun ." +issue framing is related to both analyzing biased language and subjectivity .,issue framing is related to the broader challenges of biased language analysis and subjectivity . +"in a first step , we describe how to construct a single large neural network which imitates the output of an ensemble of multiple networks .","first , we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system ." +"phoneme sequences , we propose to apply an hmm method after a local phoneme prediction .",we also apply an hmm method in conjunction with a local classification model to predict a global phoneme sequence given a word . +lexical features are a major source of information in current coreference resolvers .,lexical features are a major source of information in state-of-the-art coreference resolvers . +le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents .,"le and mikolov introduced paragraph-level vectors , a fixed-length feature representations for variable-length texts ." +and a seed-expansion approach is utilized for argument extraction .,a seed-expansion approach is proposed to extract the arguments correctly . +"which enables us to efficiently guide the annotators , to store all their intermediate results , and to record user – system interaction data .","as a key property of our tool , we store all intermediate annotation results and record the user–system interaction data ."
+"our data structure is a trie in which individual nodes are represented by b-trees , which are searched in parallel ( section 3 ) and arranged compactly in memory ( section 4 ) .",the data structure is a list which is mainly accessed wittl a typical lifo stack policy . +"multilinguality is the premise of the lump approach : we use representations which lie towards language-independence as we aim to be able to approach similar tasks on other languages , paying the least possible effort .","since multilinguality is a key need in today ’ s information society , and because wcls have been tested overwhelmingly only with the english language , we provide experiments for three different languages , namely english , french and italian ." +"for the embeddings trained on stack overflow corpus , we use the word2vec implementation of gensim 8 toolkit .","here , for textual representation of captions , we use fisher-encoded word2vec features ." +marcu and echihabi presented the unsupervised approach to recognize the discourse relations by using word pair probabilities between two adjacent sentences .,marcu and echihabi presented an unsupervised method to recognize discourse relations held between arbitrary spans of text . +the n-gram translation model is a markov model over phrasal bilingual tuples and can improve the phrase-based translation system by providing contextual dependencies between phrase pairs .,this model extends the phrase-based model by using the formal synchronous grammar to well capture the recursiveness of language during translation . +gildea and palmer developed a system to predict semantic roles from sentences and their parse trees as determined by the statistical parser of collins .,"in previous work using the propbank corpus , gildea and palmer developed a system to predict semantic roles from sentences and their parse trees as determined by the statistical parser of collins ." 
+"for muc-6 , this way of using unlabeled text can bring a relative reduction in errors of 38 . 68 % between the upper case and mixed case ners .","by using the teaching process , we can reduce the performance gap between mixed and upper case ner by as much as 39 % for muc-6 and 22 % for muc-7 ." +"in this paper , we investigate methods for converting arbitrary bit strings into english word sequences .","in this paper , we investigate other methods for converting a system-generated bit string into a memorable sequence of english words ." +"as there is no standard chinese corpus , no chinese experimental results are reported in and .","as there is no standard chinese corpus , no chinese experimental results are reported in ." +"experimental results show that the proposed model achieves 83 % in f-measure , and outperforms the state-of-the-art baseline .",experimental results show our proposed framework outperforms the state-of-the-art baseline by over 7 % in f-measure . +"the most relevant to our work are kazama and torisawa , toral and mu帽oz , and cucerzan .","the most relevant work to this paper are kazama and torisawa , toral and munoz , cucerzan , richman and schone ." +"the web is a very rich source of linguistic data , and in the last few years it has been used intensively by linguists and language technologists for many tasks .","with its growing size and coverage , the internet has become an attractive source of material for linguistic resources , used both for linguistics and natural language processing applications ." +"having obtained the system output of morante and daelemans , however , we also computed cue-level scores for their system .","looking at the cue-level scores , we find that the precision of our system and that of morante and daelemans drops by an equal amount for the bsp cross-text testing ." 
+we introduce a multi-sense embedding model based on chinese restaurant processes that achieves state of the art performance on matching human word similarity judgments .,"in this paper , we expand ongoing research into multi-sense embeddings by first proposing a new version based on chinese restaurant processes that achieves state of the art performance on simple word similarity matching tasks ." +"in this paper , we have proposed a method for multi-target translation .","in this paper , we propose a framework of multi-target smt ." +"cite-p-16-3-9 also suggested that syntactic features ( syntactic errors ) might be useful features , but only investigated this idea at a shallow level .",cite-p-16-3-9 did suggest using syntactic errors in their work but did not investigate them in any detail . +semeval-2016 task 8 is the task of recovering this type of semantic formalism for plain text .,"semeval-2016 task 4 comprises five subtasks , three of which represent a significant departure from previous editions ." +semantic textual similarity assess the degree to which two snippets of text mean the same thing .,semantic textual similarity measures the degree of equivalence between the meanings of two text sequences . +"the bleu score , introduced by papineni et al , is a metric for evaluating the quality of a candidate translation by comparing it to one or more reference translations .",the bleu score or bilingual evaluation under study is a method to measure the difference between machine and human translations . +logical derivations were used to combine clauses and to remove easily inferrable clauses in .,logical derivations were used to combine clauses and remove easily inferable clauses in . 
+the second one is a btg decoder with lexicalized reordering model based on maximum entropy principle as proposed by xiong et al .,the second one is a reimplementation of a phrase-based decoder with lexicalized reordering model based on maximum entropy principle proposed by xiong et al . +"a hybrid method , tribayes , was then introduced to exploit this complementarity by applying trigrams .",a hybrid method called tribayes is then introduced that combines the best of the previous two methods . +hacioglu et al showed that tagging phrase-byphrase is better than word-by-word .,hacioglu et al showed that tagging phrase by phrase is better than word by word . +entropy is a measure of the uncertainty of a probability distribution .,"thus , the entropy is the logarithm of the local perplexity at a given point in the word string ." +"in this paper , we elucidate how korean temporal markers , oe and dongan contribute to specifying the event .","in this paper , we showed how oe adverbials and dongan adverbials contribute to constructing the temporal interpretation of korean sentences ." +"by incorporating this sentence compression model , our summarization system can yield significant performance gain in linguistic quality .","under this model , we incorporate various constraints to improve the linguistic quality of the compressed sentences ." +statistical parsers and pos taggers perform very well when trained with large amounts of data .,state-of-the-art statistical parsers and pos taggers perform very well when trained with large amounts of in-domain data . 
+"of the task , that is , evaluating the gap of full-fledged recognizing textual entailment systems with respect to compositional distributional semantic models ( cdsms ) applied to this task .","we tried to follow the underlying idea of the task , that is , evaluating the gap of full-fledged recognizing textual entailment systems with respect to compositional distributional semantic models ( cdsms ) applied to this task ." +"thus , optimizing this objective remains straightforward with the expectation-maximization algorithm .",a solution to this problem relies on the use of expectation maximization . +"wang and jiang combine match-lstm , originally introduced in and pointer networks to produce the boundary of the answer .","wang and jiang build question-aware passage representation with match-lstm , and predict answer boundaries in the passage with pointer networks ." +"on the other hand , the majority of corpus statistics approaches to noun compound interpretation collect statistics on the occurrence frequency of the noun constituents and use them in a probabilistic model .","on the other hand , the majority of corpus statistics approaches to noun-noun compound interpretation collect statistics on the occurrence frequency of the noun constituents and use them in a probabilistic model ." +our classifier consistently matches modified kneser-ney smoothing and can outperform it if sufficiently rich features are incorporated .,section 3 shows that a generative 1 lm built with our classifier is competitive to modified kneser-ney smoothing and can outperform it if sufficiently rich features are incorporated . +"on the multimodal emotion recognition task , our model achieves better results compared to the state-of-the-art models across all emotions .",we evaluated our approach with experiments on three multimodal tasks using public datasets and compare its performance with state-of-the-art models . 
+"to consider word order , and introduce word appearance in context .",we also introduce word appearance in context . +wei and gao derived external features based on a collection of relevant tweets to assist the ranking of the original sentences for highlight generation .,wei and gao derived external features based on the relevant tweet collection to assist the ranking of the original sentences for extractive summarization in a fashion of supervised machine learning . +"in this paper , first , we introduce a new amortized inference algorithm called the margin-based amortized inference , which uses the notion of structured margin to identify inference problems .","in this paper , we focus on an inference technique called amortized inference ( cite-p-13-3-0 ) , where previous solutions to inference problems are used to speed up new instances ." +"considering the knowledge-poor approach , experiments with a more complex textual domain show that the system is unsuitable for wide-coverage tasks such as question answering and summarisation .","although the system performs well within a limited textual domain , further research is needed to make it effective for open-domain question answering and text summarisation ." +features derived from annotated data performed better than a baseline trained on unigram features .,models based on the current scheme performed appreciably better than the baseline . +"consequently the number of possible feature structures is no longer finite and therefore , in contrast to standard ftag , the formalism is no longer equivalent to tag .","consequently , the number of feature structures is no longer finite as defined in , and therefore the generative capacity of the formalism is extended ." +"that minimizes the sum of distances , for a given distance function math-w-5-1-1-112 , to a list of strings math-w-5-1-1-124 .","generally , we may think of math-w-2-6-1-123 as arbitrary strings over arbitrary alphabets math-w-2-6-1-142 ." 
+"recently , a few caption datasets in languages other than english have been constructed .","since most available caption datasets have been constructed for english language , there are few datasets for japanese ." +"we use maege to perform a detailed analysis of metric behavior , showing that correcting some types of errors .",we use maege to mimic a setting of ranking against precision-oriented outputs . +the grammar is grounded in the theoretical framework of hpsg and uses minimal recursion semantics for the semantic representation .,the grammar matrix is couched within the head-driven phrase structure grammar framework . +we evaluate our results with case-sensitive bleu-4 metric .,we report results in terms of case-insensitive 4-gram bleu scores . +"our baseline system is the parser of berant et al , called sempre .","to perform qa , we used the framework of berant et al , as implemented in sempre ." +wikification is a particularly useful task for short messages such as tweets because it allows a reader to easily grasp the related topics and enriched information from the kb .,wikification is the task of identifying and linking expressions in text to their referent wikipedia pages . +"because word frequencies are zipf distributed , this often means that there is little relevant training data for a substantial fraction of parameters , especially in new domains .","also because word frequencies are zipf-distributed , this often means that there is little relevant training data for a substantial fraction of parameters , especially in new domains ." +word vectors are distributed representations which are designed to carry contextual information of words if their training meets certain criteria .,"word embeddings are dense , low dimensional , and real-valued vectors that can capture syntactic and semantic properties of the words ." 
+several massive knowledge bases such as dbpedia and freebase have been released .,large scale knowledge bases like dbpedia and freebase provide structured information in diverse domains . +"we use a random forest classifier , as implemented in scikit-learn .",the regressor used is a random forest regressor in the implementation provided by scikit-learn . +model regards associative anaphora as a kind of zero anaphora and resolves it in the same manner as zero anaphora resolution using automatically acquired lexical knowledge .,our model regards associative anaphora as a kind of zero anaphora and resolves it in the same manner as zero anaphora resolution that uses automatically acquired case frames . +"with topic modeling , the top associated words of topics can be used as good descriptors for topics in a hierarchy .","after topics are discovered by topic modeling techniques , these topics are conventionally represented by their top n words or terms ." +"for one label , the predictions-as-features methods can model dependencies between former labels and the current label , but they can ’ t model dependencies between the current label and the latter labels .",the predictions-as-features methods suffer from the drawback that they methods can ’ t model dependencies between current label and the latter labels . +"in this paper , we adopt the ilp based summarization framework , and propose methods to improve bigram concept .",in this paper our focus is on choosing useful bigrams and estimating accurate weights to use in the concept-based ilp methods . +marcu and echihabi 2002 ) proposed a method to identify discourse relations between text segments using naïve bayes classifiers trained on a huge corpus .,marcu and echihabi presented an unsupervised method to recognize discourse relations held between arbitrary spans of text . 
+and they utilized hmm method for pos tagging and morpheme-analysis-based method to predict poss for new words .,and we utilized hmm method for pos tagging and morpheme-analysis-based method to predict poss for new words . +summarization is a classic text processing problem .,summarization is the task of condensing a piece of text to a shorter version that contains the main information from the original . +"second , we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for wikipedia .","second , we develop a novel integer linear programming ( ilp ) based abstractive summarization technique to generate text from the classified content ." +"since similarity is only one type of relatedness , comparison to similarity norms fails to provide a complete view of a measure ’ s ability to capture more general types of relatedness .","because similarity is only one particular type of relatedness , comparison to similarity norms fails to give a complete view of a relatedness measure ’ s efficacy ." +"in section 2 we discuss related work , section 3 details the algorithm , section 4 describes the evaluation protocol .","in section 2 we discuss related work , section 3 details the algorithm , section 4 describes the evaluation protocol and section 5 presents our results ." +kim and hovy and bethard et al examine the usefulness of semantic roles provided by framenet 1 for both oh and opinion target extraction .,kim and hovy and bethard et al explore the usefulness of semantic roles provided by framenet for both opinion holder and opinion target extraction . 
+"nevertheless , it is well-known that k-means has the major drawback of not being able to separate data points that are not linearly separable in the given feature space and cai et al , .","nevertheless , it is well-known that k-means has the major drawback of not being able to separate data points that are not linearly separable in the given feature space , ." +the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion .,systems were optimized on the wmt08 french-english development data using minimum error rate training and tested on the wmt08 test data . +semantic roles were obtained by using the parser by zhang et al .,semantic roles are obtained by using the parser by zhang et al . +"in this paper , we experiment with three complementary methods for automatically detecting errors in the pos annotation .","in this paper , we experiment with three different methods of pos error detection using the ifd corpus ." +"the english experiments were performed on the penn treebank , using a standard set of head-selection rules to convert the phrase structure syntax of the treebank to a dependency tree representation .","the experiments were done on the english penn treebank , using standard head-percolation rules to convert the phrase structure into dependency trees ." +"we use two standard lexical substitution datasets , one english and one german .","we evaluate and compare both approaches on two lexical substitution datasets , one english and one german ." +we extract hierarchical rules from the aligned parallel texts using the constraints developed by chiang .,we introduce non-lexical rules using the same approach as for the hierarchical rules of chiang . +"among them , probability is very similar to that proposed by seymore and rosenfeld .","this method is an entropy-based cutoff method , and can be considered an extension of the work of seymore and rosenfeld ." 
+"shows help the audience absorb the essence of previous episodes , and grab their attention with upcoming plots .","recaps not only help the audience absorb the essence of previous episodes , but also grab people’s attention with upcoming plots ." +we implemented all models in python using the pytorch deep learning library .,we implemented our models in pytorch with a codebase derived from opennmt-py . +we introduce a new task of argument facet similarity ( afs ) aimed at identifying facets across opinionated dialogs and show that we can identify afs with a correlation of . 54 .,"we define a new task , argument facet similarity ( afs ) , and show that we can predict afs with a .54 correlation score , versus an ngram system baseline of .39 and a semantic textual similarity system baseline of .45 ." +"in this paper , we proposed to use discourse markers as indicators for paradigmatic relations between words .","in this paper , we propose to strike a balance between availability and restrictedness by making use of discourse markers ." +and it may be the case that those words surrounding the target give extra information as to its complexity .,each target word occurs in a sentence and it may be the case that those words surrounding the target give extra information as to its complexity . +"in this paper , we propose a bigram based supervised method for extractive document summarization .","in this paper , we propose to incorporate the supervised method into the concept-based ilp framework ." +aspect extraction is a key task of opinion mining ( cite-p-15-1-14 ) .,"however , aspect extraction is a complex task that also requires fine-grained domain embeddings ." +garfield is probably the first to discuss an automatic computation of a citation classification .,"garfield was the first to define a classification scheme , while finney was the first to suggest that a citation classifier could be automated ." 
+"in this paper , we are concerned with the interpretation of temporal expressions in text : that is , given an occurrence in a text of an expression like that marked in italics .","in this paper , we describe our approach to intermediate semantic representations in the interpretation of temporal expressions ." +we have compiled a suitable corpus for this task from the europarl corpus .,we evaluate our method by means of the europarl corpus . +textual entailment can be successfully adapted to this setting .,the recently suggested idea of partial textual entailment may remedy this problem . +"our approach can therefore be adapted to other languages with dependency treebanks , since ccg lexical categories can be easily extracted from dependency treebanks .","our approach can therefore be adapted to languages with dependency treebanks , since ccg lexical categories can be easily extracted from dependency treebanks ." +knowledge graphs often lack a succinct textual description .,"unfortunately , many knowledge graph entities lack such textual descriptions ." +this paper has presented a q & / a system that employs several feedback mechanisms that provide lexical and semantic alternations to the question .,this paper presents an open-domain textual question-answering system that uses several feedback loops to enhance its performance . +recent studies on review helpfulness require plenty of labeled samples .,recent studies on review helpfulness prediction have been shown effective by using handcrafted features . +minimum error rate training is the algorithm for log-linear model parameter training most used in state-of-the-art statistical machine translation systems .,"minimum error rate training is a crucial component to many state-of-the-art nlp applications , such as machine translation and speech recognition ." 
+"uccaapp supports a variety of formal properties that have proven useful for syntactic and semantic representation , such as discontiguous phrases , multiple parents and empty elements .","uccaapp supports annotation with a variety of formal properties , including discontiguous units , inter-sentence annotation , reentrancy and multi-layered annotation , making it suitable for other syntactic and semantic annotation schemes that use these properties ." +the treetagger is employed to compile a part-of-speech tagged word frequency list .,part-of-speech tags are obtained using the treetag-ger . +"lapata and brew and li and brew focused on this issue , and described methods for inducing probabilities of verb-class associations .",lapata and brew and li and brew proposed probabilistic models for calculating prior probabilities of verb classes for a verb . +"in which there is an interesting minority class , the brf method might be applied to those problems .","as there are many other nlp problems in which there is an interesting minority class , the brf method might be applied to those problems also ." +"besides , wang et al proposed the topical n-gram model that allows the generation of ngrams based on the context .",wang et al proposed the topical n-gram model that adds a layer of complexity to allow the formation of bigrams to be determined by the context . +"we apply our methods to a compound interpretation task and demonstrate that combining models of lexical and relational similarity can give state-of-the-art results on a compound noun interpretation task , surpassing the performance attained by either model taken alone .","applying our methods to the task of compound noun interpretation , we have shown that combining lexical and relational similarity is a very effective approach that surpasses either similarity model taken individually ." 
+"for chinese , the concatenated trigram model introduced in shao et al is employed .","for chinese , the concatenated trigram model in shao et al is applied ." +we used the brown word clustering algorithm to obtain the word clusters .,"to determine the word classes , one can use the algorithm of brown et al for finding the classes ." +the translation model was smoothed in both directions with kneser-ney smoothing .,the translation model was smoothed in both directions with kn smoothing . +we have presented an overview of methods used in qa in restricted domains .,in this article we survey past and current work on question answering in restricted domains . +averaging the model parameters is effective in stabilizing the learning and improves generalization capacity .,these curves clearly demonstrate that parameter averaging helps to stabilize the learning and improve generalization capacity . +that uses a discriminative large-margin learning framework coupled with a novel feature set defined on compressed bigrams .,this system uses discriminative large-margin learning techniques coupled with a decoding algorithm that searches the space of all compressions . +summarization is a well-studied problem in the literature .,summarization is a classic text processing problem . +"our challenge set consists of short sentences that each focus on one particular phenomenon , which makes it easy to collect reliable manual assessments of mt output by asking direct yes-no questions .","a challenge set consists of a small set of sentences , each hand-designed to probe a system ’ s capacity to bridge a particular structural divergence between languages ." +"there is no attempt in the literature to automatically associate words with human senses , in this section .","however , to the best of our knowledge , there is no attempt in the literature to build a resource that associates words with senses ." 
+"the model is trained using a variant of the structured perceptron , similar to the algorithm of .",this weight vector is learned using a simple perceptron like algorithm similar to the one used in . +"using intrinsic methods of evaluation , we show that the resulting geo-word embeddings themselves encode information about semantic relatedness .",intrinsic evaluation of the resulting vectors shows that geographic context alone does provide useful information about semantic relatedness . +kernel has shown promising results in semantic role classification .,this kernel has shown very promising results in srl . +"and considering the personalized needs of users , we propose an approach for generating personalized views starting from a normalized dictionary with respect to lexical markup framework .","in this paper , we propose an approach that favors the use of normalized dictionaries by generating virtual/materialized personalized views ." +"in the context of ir , decompounding has an analogous effect to stemming , and it significantly improves retrieval .","in the context of ir , decompounding has an analogous effect to stemming , and it significantly improves retrieval results ( cite-p-26-3-1 ) ." +"the proposed neural models have a large number of variations , such as feed-forward networks , hierarchical models , recurrent neural networks , and recursive neural networks .","examples of such neural networks are linear networks , deeper feed-forward neural networks , or recurrent neural networks ." +numerous subtypes of elliptical constructions are distinguished in linguistics .,in linguistics various subtypes of elliptical constructions are studied . +"experimental results on spoken language translation show that this hybrid method significantly improves the translation quality , which outperforms the method .",experimental results indicate that this method can consistently and significantly improve translation quality over individual translation outputs . 
+"in this paper , we analyze the reasons that cause errors .","in this paper , we are concerned with the reasons that cause the errors ." +the alignment-based nns are trained using an extension of the rwthlm toolkit .,the neural networks are trained using an extension of the rwthlm toolkit . +"in particular , we consider conditional random fields and a variation of autoslog .",as a sequence labeler we use conditional random fields . +"as mentioned in previous sections , we apply our measure word generation module into smt output .","as mentioned in previous sections , we apply our measure word generation module into smt output as a post-processing step ." +pv is an unsupervised framework that learns distributed representations for sentences and documents .,one of the earliest work is an extension of word2vec to learn a distributed representation of text . +we also apply topic modeling in order to get topic distribution over each sentence set .,we also applied topic modeling in order to get topic distributions over set of sentences . +pang et al conducted early polarity classification of reviews using supervised approaches .,pang et al examined the effectiveness of using supervised learning methods to identify document level sentiments . +"all models were implemented in python , using scikit-learn machine learning library .",all linear svm models were implemented with scikit-learn and trained and tested using liblinear backend . +"for more details on the original definition of tags , we refer the reader to joshi , kroch and joshi or vijay-shanker .","for more details on the original definition of tags , we refer the reader to , kroch and joshi , or vijayshanker ." +"xiong et al also used a maximum entropy classifier , in this case to train the reordering component of their mt model .","to pursue a better method to predict the order between two neighboring blocks 1 , xiong et al present an enhanced btg with a maximum entropy based reordering model ." 
+bunescu and mooney proposed a shortest path dependency kernel .,bunescu and mooney give a shortest path dependency kernel for relation extraction . +examples include the widely known discourse parsing work by marcu .,examples include the widely known discourse parsing work of . +"to integrate monolingual training data , our approach can be easily applied to other nmt systems .",this makes our approach applicable to different nmt architectures . +click patterns are then utilized for constructing a large and heterogeneous training corpus for answer .,"these articles are then used to learn a vector of word frequencies , wherewith answer candidates are rated afterwards ." +"for the word-embedding based classifier , we use the glove pre-trained word embeddings .",we use glove embeddings for the english tasks and fasttext embeddings for all newswire tasks . +our approach to relation embedding is based on a variant of the glove word embedding model .,"our monolingual objective follows the glove model , which learns from global word co-occurrence statistics ." +we also propose a single model for learning representations of images and multiple languages .,we proposed a new model that jointly learns multilingual multimodal representations using the image as a pivot between languages . +"in their filtering consequences , we propose to train them jointly , so that each classifier can focus on the gaps of the others .","in our framework , each classifier learns to focus on the cases where the other classifiers are less confident ." +liao and grishman propose document level cross-event inference to improve event extraction .,"liao and grishman , 2010 ) use cross-event inference to help with the extraction of role fillers shared across events ." 
+"in this paper , we present an error analysis of a new cross-lingual task : the 5w task , a sentence-level understanding task which seeks to return the english .","in this paper , we present an evaluation and error analysis of a cross-lingual application that we developed for a government-sponsored evaluation , the 5w task ." +we achieve competitive accuracy to the state-of-the-art and on wmt ’ 15 english-german .,on wmt’16 english-romanian translation we achieve accuracy that is very competitive to the current state-of-the-art result . +"finally , we explain how other more efficient variants of the basic parser can be obtained by determinizing portions of the basic non-deterministic pushdown machine while still using the same pseudo-parallel driver .","finally , we explain how other more efficient variants of the basic parser can be obtained by determinizing portions of the basic non-deterministic pushdown machine while still using the same pseudo-parallel driver ." +"however , the extensions of these models require carefully tailored graphical models .","nevertheless , these methods yield the same poor performance in short texts as traditional topic models ." +"neural machine translation is an emerging technique which utilizes deep neural networks , to generate end-to-end translation .",neural machine translation is a new paradigm in machine translation powered by recent advances in sequence to sequence learning frameworks . +for both unsupervised and lightly supervised mapping we used muse with default parameters .,for the unsupervised mapping we ran using the source and target language monolingual spaces . +in this paper we present a recurrent neural networks approach for estimating the quality of machine translation output .,this paper presents a novel approach using recurrent neural networks for estimating the quality of machine translation output . +we used the mstparser to generate k-best lists .,we used a k-best version of the mira algorithm . 
+"the first two are from the semeval 2014 task 4 1 , which contains the reviews in laptop and restaurants , respectively .","the first two are from semeval 2014 , containing reviews of restaurant and laptop domains , which are widely used in previous works ." +"word embeddings have shown promising results in nlp tasks , such as named entity recognition , sentiment analysis or parsing .","word embedding has shown promising results in variety of the nlp applications , such as named entity recognition , sentiment analysis and parsing ." +but the technique can be applied to other nlg systems that perform hierarchical text structuring based on a theory of coherence relations ( with additional assumptions .,"the technique is generally applicable to natural language generation systems , which perform hierarchical text structuring based on a theory of coherence relations with certain additional assumptions ." +"in his seminal work , kleinberg proposed a state machine to model the arrival times of documents in a stream in order to identify bursts .","to find bursty patterns from data streams , kleinberg proposed a state machine to model the arrival times of documents in a stream ." +"in this example , the score of translating “ dos ” to “ make ” was higher than the score of translating “ dos ” to “ both ” .","we note that in this example , the score of translating “ dos ” to “ make ” was higher than the score of translating “ dos ” to “ both ” ." +attention yields large gains of up to 5 . 0 bleu over non-attentional models that already incorporate known techniques such as dropout .,our attentional models yield a boost of up to 5.0 bleu over non-attentional systems which already incorporate known techniques such as dropout . +"lexical categories can be easily extracted from dependency treebanks ( cite-p-19-1-2 , cite-p-19-1-0 ) .",providing information about lexical category probabilities ( cite-p-19-1-1 ) assigned by the supertagger can be useful during parsing . 
+co-occurrences are used to build a semantic lexicon based on collocative meaning .,"a semantic bias is used to associate collocations with the appropriate meaning relation , if one exists ." +"major challenge of semantic parsing is the structural mismatch between a natural language sentence and its target logical form , which are mainly raised by the vocabulary .",a major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology . +xu et al represent heterogeneous features as embeddings and propose a multichannel lstm based recurrent neural network for picking up information along the sdp .,"xu et al picked up heterogeneous information along the left and right sub-path of the sdp respectively , leveraging recurrent neural networks with long short term memory units ." +"handle missing words , we decide to model sentences using a weighted matrix .",we explicitly model missing words to alleviate the sparsity problem in modeling short texts . +"roark implements an incremental top-down and left-corner parsing model , which is used as a syntactic language model for a speech recognition task .",the roark parser is an incremental syntactic parser based language model that uses rich lexical and syntactic contexts as features to predict its next moves . +glaysher and moldovan demonstrated an efficiency gain by explicitly disallowing entries in chart cells that would result in constituents that cross chunk boundaries .,glaysher and moldovan demonstrated an efficiency gain by explicitly disallowing constituents that cross chunk boundaries . +"the penn discourse treebank is the largest available corpus of annotations for discourse relations , covering one million words of the wall street journal .","the pdtb is the largest corpus annotated for discourse relations , formed by newspaper articles from the wall street journal ." 
+xu et al applied long short term memory based recurrent neural networks along the shortest dependency path .,xu et al represent heterogeneous features as embeddings and propose a multichannel lstm based recurrent neural network for picking up information along the sdp . +"since this model relies on solving a tsp efficiently , it can not capture features other than pairwise features that examine the words and neighborhood .","however , for efficiently solving the tsp , the model is restricted to pairwise features which examine only a pair of words and their neighborhood ." +in general : verbs that are similar in meaning also tend to have similar subcategorization frames and selectional preferences .,this also corresponds to a syntactic similarity : all the verbs of this group share the same preferred syntactic subcategorization frames . +"we developed a word-sense induction ( wsi ) system based on topic modelling , specifically a hierarchical dirichlet process .","we adopt a previously-proposed wsi methodology for the task , which is based on a hierarchical dirichlet process ( hdp ) , a non-parametric topic model ." +clarke and lapata presented an unsupervised method that finds the best compression using integer linear programming .,the method of clarke and lapata uses a trigram language model to score compressions . +it is found that the difference between the baseline and the model is significant producing statistically significant improvements as measured by the bootstrap resampling method .,the marked systems produce statistically significant improvements as measured by bootstrap resampling method on bleu over the baseline system . +performance indicates that the semantic signatures provide a powerful resource for joint disambiguation .,these improvements demonstrate the importance of combining complementary objectives in a joint model for robust disambiguation . 
+it is been shown that domain information is fundamental for wsd .,it has been shown that domain information is fundamental for wsd . +"to this end , we design novel features for keyphrase extraction based on citation context .","in this paper , we propose a supervised model for keyphrase extraction from research papers , which are embedded in citation networks ." +classification approaches need to be extended to be applicable on weighted packed representations of ambiguous input .,"in order for these techniques to be more broadly applicable , they need to be extended to apply on weighted packed representations of ambiguous input ." +we propose an unsupervised algorithm based on lesk which performs visual sense disambiguation .,we propose an algorithm based on the lesk wsd algorithm in order to perform unsupervised visual sense disambiguation on our dataset . +all the code is written using tensorflow over the wiktionary dataset .,the implementation is done via the tensorflow library . +"dzikovska et al used a statistical classifier based on lexical overlap , taken from , and evaluated 3 different rule-based policies for combining its output with that of the semantic interpreter .",dzikovska et al showed that a statistical classifier trained on this data set can be used in combination with a semantic interpreter to significantly improve the overall quality of natural language interpretation in a dialogue-based its . +"in this paper , we propose a simple , fast , and effective method for recalling previously seen translation .","in this paper , we propose a simple and efficient model for using retrieved sentence pairs to guide an existing nmt model at test time ." 
+"we investigate the incorporation of extra knowledge , specifically speech-gaze temporal information and domain knowledge , with eye gaze to facilitate word acquisition .","then , we describe the enhanced models that incorporate temporal and semantic information about speech and eye gaze for word acquisition ." +"in this work , we address the new problem of topically annotating a trending hashtag .","in this work , we tackle the problem of mapping trending twitter topics to entities from wikipedia ." +"we extract continuous vector representations for concepts using the continuous log-linear skipgram model of mikolov et al , trained on the 100m word british national corpus .","we have used the continuous bag-of-words model of mikolov et al with a window size of eight by training the model with wikipedia text corpus , we obtained word embeddings for most of the lemmas and words contained in the vuamc ." +"in this section , we provide a brief background on data annotation with rationales in the context of active learning .","in section 4 , we present an active learning method using the learning with rationales framework and present relevant results ." +results of our proposed models compared against the baseline models described in pado et al are shown in table 2 .,we adopt the sentence-level evaluation metric used in pado et al results are shown in table 4 . +we evaluate the performance of different translation models using both bleu and ter metrics .,"the performance of the different systems is evaluated in terms of translation error rate , bleu , and precision ." +"based on handcrafted grammars , multimodal grammars can be brittle with respect to extragrammatical , erroneous and disfluent input .","however , handcrafted multimodal grammars can be brittle with respect to unexpected , erroneous , or disfluent inputs ." 
+"first , a statistical parser is used to construct a semantically-augmented parse tree that captures the semantic interpretation of individual words .","first , a statistical parser is used to generate a semantically-augmented parse tree ( sapt ) , where each internal node includes both a syntactic and semantic label ." +"in this work , we will go further to investigate factorization of rule structures .",we will also try to further exploit the factorized representation with discriminative learning . +"morinaga et al , yu and hatzivassiloglou , kim and hovy , hu and liu , and grefenstette et al 11 all begin by first creating prior-polarity lexicons .","yu and hatzivassiloglou , kim and hovy , hu and liu , and grefenstette et al 4 all begin by first creating prior-polarity lexicons ." +"for instance , shi and mihalcea improve semantic parsing using the knowledge of an aligned resource of framenet , wordnet , and verbnet .","shi and mihalcea propose the integration of verbnet , wordnet and framenet into a knowledge base and use it in the building of a semantic parser ." +su et al use the topic distribution of in-domain monolingual corpus to adapt the translation model .,su et al also apply htmm to monolingual data and apply the results to machine translation . +cite-p-24-3-9 considered direct optimization of a deep-learning-based asr recognizer without an explicit separation module .,cite-p-24-3-9 trained a multi-speaker speech recognizer using permutation-free training without explicit objective function for separation . +mcintyre and lapata create a story generation system that draws on earlier work on narrative schemas .,chambers and jurafsky presented an unsupervised learning system for narrative schemas based on coreferent arguments in chains of verbs . 
+the paraphrase database contains millions of automatically acquired paraphrases in 21 languages associated with features that serve to their ranking .,the paraphrase database contains millions of english paraphrases automatically extracted from bilingual parallel corpora . +"named entity recognizer ( ner ) generally has worse performance on machine translated text , because of the poor syntax of the mt output .",a named entity recognizer ( ner ) trained on an english corpus does not have the same performance when applied to machine-translated text . +the baseline is the bidirectional sequence-tosequence model using long short-term memory which is a kind of rnn .,the nmt architecture is an attentional encoder-decoder model similar to and uses a long short-term memory as the recurrent cell . +we propose a replicability analysis framework for a statistically sound analysis of multiple comparisons between algorithms .,we start by formulating a general hypothesis testing framework for a comparison between two algorithms . +we encode as first order logic rules and automatically combine with a topic model developed specifically for the relation extraction task .,we encode domain knowledge as first order logic ( fol ) rules and automatically integrate them with a topic model to produce clusters shaped by the data and the constraints at hand . +discourse connectives gather additional training instances that lead to significant performance gain .,explicit discourse connectives can potentially be exploited to collect more training data to collect more data and boost the performance . +the current technologies are expected to be further advanced to be effective for relatively complicated relation extraction tasks such as the one defined in ace .,yet further research work is still expected to make it effective with complicated relation extraction tasks such as the one defined in ace . 
+"as third dataset we use a noun compound dataset of 54,571 nouns from germanet 7 , which has been constructed by henrich and hinrichs .","as second data set we use a noun compound data set of 54,571 nouns from germanet , 21 which has been constructed by henrich and hinrichs ." +we have improved precision scores of the methods relying on per-topic word distributions from a cross-language topic model .,"in this paper , we extend the work on using latent cross-language topic models for identifying word translations across comparable corpora ." +"learned word representations are widely used in nlp tasks such as tagging , named entity recognition , and parsing .","word representations , especially brown clusters , have been extensively used for named entity recognition , parsing and pos tagging ." +and that the words in time expressions demonstrate similar syntactic behaviour .,"finally , words in time expressions demonstrate similar syntactic behaviour ." +sundermeyer et al also used bidirectional lstm rnn model to improve strong baselines when modeling translation .,"sundermeyer et al proposed word-and phrase-based rnn translation models and applied them to rescore n-best lists , reporting major improvements ." +mesgar and strube extend the entity graph as coherence model to measure the readability of texts .,"recently , mesgar and strube modeled these coherence patterns by subgraphs of the graph representation of documents ." +all parameters are initialized using glorot initialization .,all the parameters are initialized with xavier method . +schwartz and hearst implemented a simple algorithm that finds the shortest expression containing all alphanumerical letters of an abbreviation .,schwartz and hearst implemented an algorithm for identifying acronyms by using parenthetical expressions as a marker of a short form . 
+similar concepts of modeling documents hierarchically have shown benefits in some supervised tasks such as text classification .,hierarchical neural models have already been successfully used in document-level language modeling and document classification . +"contrasts with a high degree of accuracy , although top-down cues proved to be effective only on an interesting subset of the data .","top-down cues , on the other hand , were found to be effective only on a subset of the data , which corresponds to the interesting contrasts that cause lexical variation ." +"to achieve this performance , using labeledlda to exploit freebase dictionaries as a source of distant supervision .","labeledlda is applied , utilizing constraints based on an open-domain database ( freebase ) as a source of supervision ." +"niessen and ney , 2004 , describe a method that combines morphologically-split verbs in german , and also reorders questions in english and german .","niessen and ney describe an approach for translation from german to english that combines verbs with associated particles , and also reorders questions ." +"we present three supervised models of sentence similarity based on the winning system at semeval-2015 , sultan et al , 2015 .","we seek to bridge this gap by adopting the feature set of the best performing sts system at semeval-2015 , sultan et al , 2015 ." +"as for recurrent models , even if our model outperforms rnns , it is well below state-of-the-art .","as for recurrent models , our model outperforms rnns but is below state of the art lstm models ." +"in this work , we develop neural models in a sequential way , and encode sentence semantics and their relations automatically .","in this paper , we introduce a neural network approach to learn continuous document representation for sentiment classification ." 
+"on document level , previous work have shown that traditional text classification approaches can be quite effective when applied to sentiment analysis .",previous research on document sentiment classification has shown that machine learning based classifiers perform much better compared to rule-based systems . +data for all our experiments is taken from the english portion of the conll-2012 coreference resolution tasks .,we evaluate our approach on the english portion of the conll-2012 dataset . +"in the next section , we will describe these constraints .","first , we will consider the itg constraints ." +recognizing humor in text is challenging due to the complex characteristics of humor .,thus it is impossible to construct rules to identify humor . +automated communicative systems that are more sensitive to the emotive and the mental states of their users are often more successful than more neutral conversational agents .,previous research indicates that automated communication systems are more effective if they take into account the affective and mental states of the user . +"grammar induction is a central problem in computational linguistics , the aim of which is to induce linguistic structures from an unannotated text corpus .",grammar induction is a task within the field of natural language processing that attempts to construct a grammar of a given language solely on the basis of positive examples of this language . +"pennell and liu , 2011 ) firstly introduced an mt method at the character-level for normalization .",pennell and liu proposed to use a character-level mt model for text normalization . +wan incorporated unlabeled data in the target language into classifier with co-training to improve classification performance .,"particularly , wan used machine translation to translate the source language to the target language to bridge the gap and applied the co-training approach ." 
+storyline detection from news articles aims at summarizing events described under a certain news topic .,storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time . +"for dealing with any types of errors , grammatical error correction methods using phrase-based statistical machine translation are proposed .",there are several studies about grammatical error correction using phrase-based statistical machine translation . +the second learning algorithm we consider is the large-margin approach for structured prediction .,an alternative approach to training structured linear classifiers is based on maximum-margin markov networks . +scqa learns the similarity metric for question-question pairs by leveraging the question-answer pairs available in cqa forum .,scqa learns the shared model parameters and the similarity metric by minimizing the energy function connecting the twin networks . +our results show that this semi-supervised learning approach outperforms several baseline methods in identifying the prototypical goal activities .,we show that this approach outperforms several baseline methods when judged against goal-acts identified by human annotators . +"similarly , the gunning fog index is based on the average number of words per sentence and the percentage of words with three or more syllables .",the gunning fog index uses average sentence length and the percentage of words with at least three syllables . +"to determine which one should be corrected , the best model can rank the ground-truth error position within the top two .",experimental results show that our models can rank the ground-truth error position toward the top of the candidate list . +by enhancing the training set using unsupervised distributed representations of words .,sifier ( step 3 ) by expanding its training data using distributed vector representations of words . 
+we first conduct word segmentation with jieba and part of speech annotation using stanford corenlp toolkit .,"we perform chinese word segmentation , pos tagging , and dependency parsing for the chinese sentences with stanford corenlp ." +a ranking svm model is trained to automatically extract problem answers from the answer text provided by cqa .,we train a ranking svm model to identify ( structured ) problem answers from unstructured answer text . +"animacy is the semantic property of nouns denoting whether , or to what extent , the referent of that noun is alive , human-like or even cognitively sophisticated .",animacy is a an inherent property of the referents of nouns which has been claimed to figure as an influencing factor in a range of different grammatical phenomena in various languages and it is correlated with central linguistic concepts such as agentivity and discourse salience . +"in general , we can still do efficient joint inference using approximate belief propagation .",approximate inference can be done by loopy belief propagation . +the english text was tokenized using the word tokenize routine from nltk .,brown was used through the interface provided by nltk . +"another potential problem is the fact that web counts are far more noisy than counts obtained from a well-edited , carefully balanced corpus .",the other potential problem is the so-called “ bag-of-sentences ” assumption implicitly made by most of these summarizers . +in this paper we describe an unsupervised method for semantic role induction .,in this paper we present a simple approach to unsupervised semantic role labeling . +we use the subjectivity lexicon from the mpqa project .,we make use of the mpqa subjectivity lexicon . 
+"we address this issue , and investigate whether alignment models for qa can be trained from artificial question-answer pairs generated from discourse structures imposed on free text .","we propose two inexpensive methods for training alignment models solely using free text , by generating artificial question-answer pairs from discourse structures ." +"in a knowledge base ( kb ) by jointly embedding the union of all available schema types — not only types from multiple structured databases ( such as freebase or wikipedia infoboxes ) , but also types expressed as textual patterns from raw text .","ties and relations in a knowledge base ( kb ) by jointly embedding the union of all available schema types—not only types from multiple structured databases ( such as freebase or wikipedia infoboxes ) , but also types expressed as textual patterns from raw text ." +"we investigate the incorporation of extra knowledge , specifically speech-gaze temporal information and domain knowledge , with eye gaze to facilitate word acquisition .","particularly , we investigate the use of speech-gaze temporal information and word-entity semantic relatedness to facilitate word acquisition ." +"stance detection is the task of automatically determining from text whether the author is in favor of the given target , against the given target , or whether neither inference is likely .","stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is ‘ in favour ’ , ‘ against ’ , or ‘ neutral ’ ." +"for evaluation , we used the case-insensitive bleu metric with a single reference .",we use case-insensitive bleu-4 and rouge-l as evaluation metrics for question decomposition . +the semeval 2012 competition includes a task targeted at semantic textual similarity between sentence pairs .,the semeval 2012 competition initiated a task targeted at semantic textual similarity between sentence pairs . 
+"given that bursty incomplete n-grams always accompany overlapping bursty phrases , we can avoid extracting bursty incomplete n-grams using the set cover problem .",our set cover-based method guarantees that all bursty n-grams including irregularly-formed ones must be covered by extracted bursty phrases . +"or writing specialized grammars , our approach disambiguates capitalized words and abbreviations by considering suggestive local contexts and repetitions of individual words within a document .","as opposed to the two dominant techniques of computing statistics or writing specialized grammars , our document-centered approach works by considering suggestive local contexts and repetitions of individual words within a document ." +"in the second part of the paper , i describe an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible .",i have also described an implemented system based on the theoretical treatment which determines whether a specified sequence of trajectory-of-motionevents is or is not possible under varying situationally specified constraints . +wu et al adopted crfs as the dependency learner and accepted the results of the neighboring parsing as features to increase the original feature set .,cheng et al and wu et al used neighboring dependency attachment taggers to improve the performance of the deterministic parser . +"in this paper , we extend the standard hidden markov models ( hmms ) to learn distributed state representations to improve cross-domain prediction .","in this paper , we extended the standard hmms to learn distributed state representations and facilitate cross-domain sequence predictions ." 
+empirical experiments on chinese-to-english and japanese-to-english datasets show that the proposed attention model delivers significant improvements in terms of both alignment .,empirical experiments on chinese-to-english and japaneseto-english tasks demonstrate that the proposed attention based nmt delivers substantial gains in terms of both bleu and aer scores . +generative model was recently presented to tackle only pronoun anaphora resolution .,this paper has presented a pronoun anaphora resolution system based on fhmms . +"a more effective alternative , which however only delivers quasinormalized scores , is to train the network using the noise contrastive estimation or nce .","a different alternative , which however only delivers quasi-normalized scores , is to train the network using the noise contrastive estimation or nce for short ." +we used relative position representation in selfattention mechanism of both the encoder side and decoder side .,we use relative position representation in selfattention mechanism of both the encoder and decoder side for all systems . +"in this work , we first investigate label embeddings for text representations , and propose the label-embedding .","in this work , we first investigate label embeddings for text representations , and propose the label-embedding attentive models ." +"in this paper , we present the excitement open platform ( eop ) , a generic architecture and a comprehensive implementation for multilingual textual inference .","this paper presents the excitement open platform ( eop ) , a generic architecture and a comprehensive implementation for textual inference in multiple languages ." +we divide each article into topic segments using the unsupervised topic segmentation method developed by eisenstein and barzilay .,we use an automatic topic segmentation method to segment the source articles in our test corpus . 
+"we trained a subword model using bpe with 29,500 merge operations .","we further split the words into sub-words using joint bpe with 32 , 000 merge operations ." +model is used to evaluate the likelihood of possible substitutes for the target word in a given context .,the model is used to evaluate the likelihood of various substitutes for a word in a given context . +"unfortunately , global inference and learning for graph-based dependency parsing is typically np-hard .","in general , inference and learning for graph-based dependency parsing is np-hard when the score is factored over anything larger than arcs ." +"although the itg constraint allows more flexible reordering during decoding , zens and ney showed that the ibm constraint results in higher bleu scores .","zens and ney exhaustively compare the ibm and itg constraints , concluding that although the itg constraints permit more flexible re-orderings , the ibm constraints result in higher bleu scores ." +"in this paper , we describe our system for a semeval2015 .","in this work , we describe a system we developed and submitted to semeval2015 ." +collobert et al adjust the feature embeddings according to the specific task in a deep neural network architecture .,collobert et al used convolution for embeddings with a crf layer to attain alongside benchmarking several nlp tasks including ner . +"in this framework , lexical , syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations , and fed into a multi-layer .","lexical , syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations and fed into the input layer , together with a set of individual real-valued features coming from simple pre-existing mt evaluation metrics ." 
+the alignment template approach uses word classes rather than lexical items to model phrase translation .,the alignment template approach for pb-smt allows many-tomany relations between words . +in this paper we address the problem of learning transitive graphs that describe entailment .,in this paper we present a novel method for learning the edges of entailment graphs . +"krishnakumaran and zhu , 2007 ) uses lexical resources like wordnet and bigram counts generated from a large scale corpus to classify sentences into metaphorical or normal usages .","the system of krishnakumaran and zhu uses wordnet and word bigram counts to predict verbal , nominal and adjectival metaphors at the sentence level ." +"on the multimodal emotion recognition task , our model achieves better results compared to the state-of-the-art models across all emotions .","on the multimodal emotion recognition task , our model achieves better results compared to the state-of-the-art models across all emotions on the f1 score ." +xu et al described a bayesian semi-supervised cws model by considering the segmentation as the hidden variable in machine translation .,xu et al described a bayesian semisupervised model by considering the segmentation as the hidden variable in machine translation . +"recently , graph-based methods for knowledge-based wsd have gained much attention in the nlp community .","in recent years , we have seen an increasing use of graph-based methods in nlp ." +framing is further related to works which analyze biased language and subjectivity .,issue framing is related to the broader challenges of biased language analysis and subjectivity . +pooling over a linear sequence of values returns the subsequence of math-w-2-5-1-108 .,a tdnn convolves a sequence of inputs math-w-6-1-0-7 with a set of weights m . 
+"in this paper , we proposed a new neural network architecture , called an rnn encoder - decoder that is able to learn .","in this paper , we propose a novel neural network model called rnn encoder-decoder that consists of two recurrent neural networks ( rnn ) ." +measurements based on a given sample will need to be extrapolated to obtain their estimates over larger unseen samples .,hence empirical measurements from large enough samples tend to be reliable for even larger sample sizes . +parsing scores or discourse model scores .,parsing scores or discourse based scores . +"as in previous work , we represent wordforms by their orthographic strings , and word-meanings by their semantic vector representations .","as in previous work , we represent wordforms by their orthographic strings , and word-meanings by their semantic vector representations as produced by a distributional semantic vector space model ." +we were able to improve the current state of the art for the full lexical substitution task .,our system improves over state of the art in the full lexical substitution task in all three languages . +"for all models , we use the 300-dimensional glove word embeddings .",we use glove embeddings for the english tasks and fasttext embeddings for all newswire tasks . +tomanek et al utilised eye-tracking data to evaluate difficulties of named entities for selecting training instances for active learning techniques .,tomanek et al utilised eye-tracking data to evaluate a degree of difficulty in annotating named entities . +"for each language , and second , using a given seed dictionary , we train a mapping function to connect the two monolingual vector spaces .","like these approaches , we first learn word embeddings for each language , then use a seed dictionary to train a mapping function between the two vector spaces ." 
+"in this paper , we have attempted to construct an algorithm for fully automatic distributional tagging .","in this paper , we describe an experiment on fully automatic derivation of the knowledge necessary for part-of-speech tagging ." +we include pos tags and the top 500 n-gram features .,we include pos tags and the top n-gram features as described in prior work . +"first , the joint representation is learned by taking both textual and non-textual features into a deep learning network .","in the former stage , a specially designed deep network is given to learn the unified representation using both textual and non-textual information ." +"in section 4 , we apply rd to recognize protein-protein interaction ( ppi ) sentences , using proteins as seeds .","in section 4 , we apply rd to recognize protein-protein interaction ( ppi ) sentences , using proteins as seeds for the entity discovery phase ." +"in another , cite-p-17-1-19 applies an svm to rank elements , by devising the input vector .","in another , cite-p-17-1-19 applies an svm to rank elements , by devising the input vector by subtraction of feature values ." +"cnns with similar structures have also been applied to other classification tasks , such as semantic matching , relation extraction , and information retrieval .","cnns have been effectively employed in nlp tasks such as text classification , sentiment analysis , relation classification , and so on ." +"recently , there has been a growing interest in neural language models , where the words are projected into a lower dimensional dense vector space via a hidden layer .",there is a growing interest in learning vectorspace representations of words and phrases using large training corpora in the field of natural language processing . 
+"time complexity , we also present a new method for speeding up svm classifying which does independent to the polynomial degree .","on the other hand , we also propose a new method for speeding up classification which is independent to the polynomial kernel degree ." +"in this paper , we propose a neural knowledge diffusion ( nkd ) dialogue system to benefit the neural dialogue generation .","in this paper , we propose a neural knowledge diffusion ( nkd ) model to introduce knowledge into dialogue generation ." +a chatbot for e-commerce sites known as superagent has been developed .,recently a chatbot for e-commerce sites known as superagent has been developed . +"in this work , we improve the robustness of encoder representations against noisy perturbations with adversarial learning .","in this paper , we address this challenge with adversarial stability training for neural machine translation ." +"in this study , we assume commonly used stanford typed dependency for english and the chunk-based dependency with ipadic for japanese .","as a supervised upper-bound baseline , we use stanford collapsed dependencies for the english data and dependencies coming from the mate tools for the german corpus ." +dialogue act classification is an essential task for dialogue systems .,dialogue act classification is a central challenge for dialogue systems . +results reported drops from 91 . 2 ( cite-p-26-1-9 ) to 80 . 56 ( cite-p-26-1-12 ) .,"comparatively , the best english srl results reported drops from 91.2 ( cite-p-26-1-9 ) to 80.56 ( cite-p-26-1-12 ) ." +the task of suggestion mining can be defined as the extraction of sentences that contain suggestions from unstructured text .,suggestion mining is defined as the extraction of suggestions from unstructured text . 
+"most importantly , reddy et al used a standard distributional model to predict the compositionality of compound-constituent pairs for 90 english compounds .","for instance , reddy et al collected numerical scores for 90 english nominal compounds regarding their compositionality ." +schmidt and wiegand note in their survey that supervised learning approaches are predominantly used for hate speech detection .,"the survey by schmidt and wiegand points out that bag-ofword models are good features for hate speech detection , although they ignore word order and sentence syntax ." +word alignment is the task of identifying word correspondences between parallel sentence pairs .,"word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 ) ." +"as a case study , we developed a japaneseto-korean translation .","as a case study , we experimented on the language pair of japanese and korean ." +that has been shown to work well for english .,this approach has previously been successfully used on english . +"in this paper , we present scidtb , a domain-specific discourse .","in this paper , we construct the discourse dependency corpus scidtb 1 ." +"tensor-matrix factorizations have been used for the problem of predicting links in the universal schema setting ( cite-p-14-2-2 , cite-p-14-2-5 ) .","recently , tensor factorization-based methods have been proposed for binary relation schema induction ( cite-p-13-3-13 ) , with gains in both speed and accuracy over previously proposed generative models ." +"first , tree-based position features are proposed to encode the relative positions of words in dependency trees .",we propose tree-based position features to encode the relative positions of words in a dependency tree . 
+our experiments directly utilize the embeddings trained by the cbow model on 100 billion words of google news .,we are using word embeddings trained on google news corpus for our experiments . +"by varying the number of dimensions of the covariates and the size of the training data , we show that the improvements over the baselines are robust across different parameter settings .","by varying the size of the training data and the dimensionality of the covariates , we have demonstrated that our proposed model is relatively robust across different parameter settings ." +and hence we need effective summarization .,"in this study , we focus on extractive summarization ." +acme yields significant relative error reduction over the input alignments and their heuristic-based combinations on three different language pairs .,we show that acme yields a significant relative error reduction over the input alignment systems and heuristic-based combinations on three different language pairs . +we initialize the embedding layer weights with glove vectors .,we initialized the embedding layer weights with glove vectors . +"semantic dictionaries are useful for many nlp tasks , as evidenced by the widespread use of wordnet .","hand-built lexicons , such as cyc and wordnet , are the most useful to provide resources for nlp applications ." +"we further consider the classic data set of rubenstein and goodenough , consisting of 65 noun pairs .",the classic data set of rubenstein and goodenough consists of 65 noun pairs . +"recurrent neural networks have successfully been used in sequence learning problems , for example machine translation , and language modeling .",long short-term memory networks have been applied to machine translation and semantic processing . +"framenet data show that our model significantly outperforms existing neural and non-neural approaches , achieving a 5 . 
7 f1 gain over the current state of the art , for full frame .",empirical results demonstrate that our approach significantly outperforms existing neural and non-neural approaches on framenet data . +"however , only a few of the existing biomedical systems generate personalized content for the patients .","only few nlg systems generate personalized information from medical data for the patient , as opposed to health care personnel ." +in a restricted form of indexed grammars was discussed in which the stack associated with the nonterminal on the left of each production can only be associated with one of the occurrences of nonterminals on the right of the production .,gazdar discussed a restricted form of indexed grammars in which the stack associated with the nonterminal on the left of each production can only be associated with one of the occurrences of nonterminals on the right of the production . +"dependency analysis provides a useful approximation to the underlying meaning representations , and has been shown very helpful for nlp applications e . g . question answering ( cite-p-26-1-29 ) .","our algorithm is also applicable to other graph-structured representations , e.g . hpsg predicate-argument analysis ( cite-p-26-1-25 ) ." +"in their setting , lda merely serves the purpose of dimensionality reduction , whereas our particular motivation is to use topics as probabilistic indicators for the prediction of attributes .","in their setting , lda merely serves the purpose of dimensionality reduction , whereas our particular motivation is to use topics as probabilistic indicators for the prediction of attributes as semantic target categories in adjective-noun composition ." 
+"in the first approach , we use two sources of implicit linguistic information , eventuality type and modality , automatically derived , as features .","first , we add two sources of implicit linguistic information as features – eventuality type and modality of an event , which are also inferred automatically ." +named-entity disambiguation is the task of linking names mentioned in text with an established catalog of entities .,named entity linking is the task of mapping mentions of named entities to their canonical reference in a knowledge base . +the task of conll-2014 is grammatical error correction which consists of detecting and correcting the grammatical errors in english essays written by non-native speakers .,the goal of the conll-2014 shared task was to evaluate algorithms and systems for automatically correcting grammatical errors in english essays written by second language learners of english . +"category acquisition approach is based on decomposition of a matrix defined by context feature vectors , and it has not been shown to scale well .",such matrix decomposition is computationally heavy and has not been proven to scale well when the number of words assigned to categories grows . +our approach adopts a twin-candidate model to directly learn the competition criterion .,"by contrast , our approach adopts a twin-candidate learning model ." +mcclosky et al use self-training in combination with a pcfg parser and reranking .,mcclosky et al used an additional unlabeled corpus to reduce data sparsity . +"in this paper , we present multigrancnn , a general deep learning architecture for classifying .","we present multigrancnn , a general deep learning architecture for matching text chunks ." +to apply our fluency boost learning and inference mechanism to their models .,we attempt to address these limitations by proposing a fluency boost learning and inference mechanism . 
+"complexity of this task challenges systems to establish the meaning , reference , and identity across documents .","the complexity of this task challenges systems to establish the meaning , reference and identity across documents ." +resolving cross-narrative temporal relationships between medical events is essential to the task of generating an event timeline from across unstructured clinical narratives .,cross-narrative temporal ordering of medical events is essential to the task of generating a comprehensive timeline over a patient¡¯s history . +zaidan et al found that on average it takes twice as much time to annotate an instance with rationales than to annotate one without rationales .,"thus , based on the findings in zaidan et al , we assume that on average annotating an instance with feature feedback takes twice as much time as annotating an instance without feature feedback ." +rahman and ng also used event-related information by looking at which semantic role the entity mentions can have and the verb pairs of their predicates .,rahman and ng used event-related information by looking at which semantic role the entity mentions can have and the verb pairs of their predicates . +we adopt the approach of pang and lee described in section 2 for feature rating estimation .,we then adopt the machine learning method proposed in and the bayesian network classifier for feature rating estimation . +word embeddings are usually learned from unlabeled text corpus by predicting context words surrounded or predicting the current word given context words .,"generally , word embeddings are learned from a given text corpus without supervision by predicting the context of each word or predicting the current word given its context ." 
+we evaluate our proposed summarization approach on the tac 2008 and 2011 data sets using the standard rouge metric .,we evaluate our proposed method on the tac 2008 and 2011 data sets using the standard rouge metric and human evaluation of the linguistic quality . +"the use of deduction systems for specifying parsers has been proposed by shieber , schabes , and pereira and sikkel .","finally , both shieber , schabes , and pereira and sikkel have shown how to specify parsers in a simple , interpretable , item-based format ." +approach is that they have the potential to drop arbitrary words from the original sentence without the need to consider the boundaries determined by the tree structures .,their methods have the potential to drop arbitrary words from the original sentence without considering the boundary determined by the tree structures . +most often these methods depend on an intermediary machine translation system or a bilingual dictionary to bridge the language gap .,"usually , such methods need intermediary machine translation system or a bilingual dictionary to bridge the language gap ." +pattern matching capabilities of neural networks can be used to detect syntactic constituents of natural language .,the pattern matching capabilities of neural networks can be used to locate syntactic constituents of natural language . +"we have proposed a word alignment model based on an rnn , which captures long alignment .","we assume that this property would fit with a word alignment task , and we propose an rnn-based word alignment model ." +we compare the entity and relation extraction performance of our model with other systems .,"to compare our model with the other systems , we evaluated the performance of our model when the entity boundaries were given ." 
+johnson showed that the performance of an unlexicalized pcfg over the penn treebank could be improved enormously simply by annotating each node by its parent category .,johnson thinks that re-annotating each node with the category of its parent category in treebank is able to improve parsing performance . +in this paper we investigate named entity transliteration .,"in this paper , we develop a multilingual transliteration system for named entities ." +"we describe the pmi-cool system , which we developed to participate in semeval-2016 task 3 , subtask a , which asks to rerank the answers in a question-answer thread , ordering them from good to bad .","we have described our pmi-cool system for semeval-2016 , task 3 on community question answering , subtask a , which asks to rerank the comments from the thread for a given forum question from good to bad ." +"in our experiments , we show that our method outperforms the state-of-the-art methods .",our experimental results show the effectiveness of our method . +"to derive the ccg-based representation , we use the output of the easyccg parser .","we also apply the easyccg parser of lewis and steedman , using the supertags ." +the statistical phrase-based systems were trained using the moses toolkit with mert tuning .,the smt systems were trained using the moses toolkit and the experiment management system . +we evaluate kale with link prediction and triple classification tasks on wordnet .,we evaluate kale with the link prediction and triple classification tasks on wordnet and freebase data . +"we also have an additional held-out translation set , the development set , which is employed by the mt system to train the weights of its log-linear model to maximize bleu .",an additional translation set called the maximum bleu set is employed by the smt system to train the weights associated with the components of its log-linear model . 
+in the confusion set that occurred most often in the training corpus .,is the word in the confusion set that occurred most often in the training corpus . +the mert implementation uses the line search of cer et al to directly minimize corpus-level error .,"the phrasal implementation uses the line search algorithm of cer et al , uniform initialization , and 20 random starting points ." +"stoyanov et al , 2005 , required a known subjective vocabulary for their opinion qa .",stoyanov et al required a known subjective vocabulary for their opinion qa system . +"multi-sense embeddings outperformed the single-sense baselines , thus demonstrating the need to distinguish between word senses in a distributional semantic model .","while there is no overall best model , all models significantly outperform a word2vec single-sense skip baseline , thus demonstrating the need to distinguish between word senses in a distributional semantic model ." +"in order to incorporate word senses into smt , we propose a sense-based translation model that is built on maximum entropy .","in this paper , we propose a sense-based translation model to integrate word senses into statistical machine translation ." +"for the contextual polarity disambiguation subtask , covered in section 2 , we use a system that combines a lexicon based approach to sentiment detection .","in the message polarity classification subtask , we focus on the influence of domain information on sentiment classification ." +tu et al designed a re-constructor module for nmt in order to make the target representation contain the complete source information which can reconstruct back to the source sentence .,"tu et al incorporated a reconstructor module into nmt , which reconstructs the input source sentence from the hidden layer of the output target sentence to enhance source representation ." 
+brockett et al propose the use of the phrasal statistical machine translation technique to identify and correct esl errors .,brockett et al employed phrasal statistical machine translation techniques to correct countability errors . +kendall ’ s math-w-2-5-3-76 and explain how it can be employed for evaluating information ordering .,"in this article , we argue that kendall’s math-w-11-1-0-8 can be used as an automatic evaluation method for information-ordering tasks ." +the lexicon consists of one hundred thousand entries for both english and japanese .,"the lexicon consists of a strongly connected core , around which there is a kernel , an asymmetric grounding set and satellites ." +"and generate topic-tuned summaries , we propose a neural encoder-decoder based framework which takes an article along with a topic of interest .","in this paper , we propose an attention based rnn framework to generate multiple summaries of a single document tuned to different topics of interest ." +our phrase-based smt system is similar to the alignment template system described in och and ney .,"as described above , our base system is a phrasebased statistical mt system , similar to that of och and ney ." +"in this paper , we propose a variant of classification scheme for uncertainty identification in social media .","therefore , a different uncertainty classification scheme is needed in social media context ." +"stolcke presented a more sound criterion for computing the difference of models before and after pruning each n-gram , which is called relative entropy or kullback-leibler distance .",stolcke proposed a criterion for pruning n-gram language models based on the relative entropy between the original and the pruned model . 
+"this paper addresses an automatic classification of preposition types in german , comparing hard and soft clustering approaches .","this paper addresses an automatic classification of preposition types in german , comparing various clustering approaches ." +results of previous work suggest that a unigram baseline can be difficult to beat for certain types of debates .,previous work suggests that the unigram baseline can be difficult to beat for certain types of debates . +"on top of a distributed file system , the runtime transparently handles all other aspects of execution , on clusters ranging from a few to a few thousand nodes .","on top of a distributed file system , the runtime transparently handles all other aspects of execution , on clusters ranging from a few to a few thousand nodes ." +corex and anchored corex produce topics of comparable quality to unsupervised and semi-supervised variants of lda .,"both corex and anchored corex consistently produce topics that are of comparable quality to lda-based methods , despite only making use of binarized word counts ." +"that produces a logical form for the text , and a probabilistic database that defines a distribution over denotations for each predicate .",this logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate . +we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization .,we trained the l1-regularized logistic regression classifier implemented in liblinear . +atc in our system is performed using a hierarchical clustering method in which clusters are merged based on average mutual information measuring how strongly terms are related to one another .,atc in our system is performed using a hierarchical clustering method in which clusters are merged based on average mutual information measuring how strongly terms are related to one another . 
+"this results holds for both seen bigrams and for unseen bigrams whose counts were recreated using smoothing techniques , .",this result holds both for seen bigrams and for unseen bigrams whose counts have been re-created using smoothing techniques . +"another device is the use of an anaphor , frequently a deflnlte noun phrase , to refer to an antecedent tha~ is not currently the topic of conversation bu~ is in the background .",another device is the specification of a temporal or spatial parameter that is outside the normal range of a situation . +"for both english and chinese generation tasks yield significant improvements over results obtained by two state-of-the-art machine translation models , in terms of both automatic metrics and human evaluation .","empirical results show that our model outperforms state-of-the-art machine translation models , for both english and chinese , in terms of both automatic and human evaluation ." +"in general , a small window size allows to have a highest number of relevant contexts for a given target word , but leads to more data sparsity than with a larger window .","generally , the smaller windows allow to acquire more relevant contexts for a target , but increase the data sparseness problem ." +"to the best of our knowledge , our method is the first one to use a hierarchical clustering model for the metaphor processing task .","to the best of our knowledge , our method is the first that addresses this task in an al framework ." +"neural models based on the encode-attend-decode ( bahdanau et al . , 2014 ) paradigm have shown great success in various natural language generation ( nlg ) .","recent research in abstractive summarization has focused on data driven neural models based on the encode-attend-decode paradigm ( bahdanau et al. , 2014 ) ." +"as a result , our system achieves state-of-the-art performance with 83 . 95 % accuracy .","as a result , our proposed model trian achieves near state-of-the-art performance ." 
+jte is a switching graphical model performing a switch between expressions and topics similar to that in .,dtm is a switching graphical model performing a switch between topics and ad-expressions similar to that in . +"learning the probability of n-grams , together with their representation in a continous space , is an appropriate approximation for large vocabulary tasks .","learning the probability of n-grams , together with their representation in a continuous space , is an appropriate approximation for large vocabulary tasks ." +johnson thinks that re-annotating each node with the category of its parent category in treebank is able to improve parsing performance .,johnson showed that the performance of an unlexicalized pcfg over the penn treebank could be improved enormously simply by annotating each node by its parent category . +this had the best correlation with eyetracking data when different styles of presentation were compared for english .,this type of data has been found to yield the best correlation with eye-tracking data when different styles of presentation were compared for english . +"prior to translation , the domain of the source sentence is first predicted .","when a source sentence is to be translated , its domain is first predicted ." +mihalcea and moldovan use the semantic density between words to determine the word sense .,mihalcea and moldovan and lytinen et al used wordnet to obtain the sense of a word . +our results show that our proposed regularization technique is imperative for the rnn-based model .,our results show that this regularization technique is critical for obtaining a state-of-the-art result . 
+"coordination is a common syntactic phenomena and work has been done to improve coordination structures predication in the general case ( cite-p-9-4-2 , cite-p-9-4-1 , cite-p-9-4-8 , cite-p-9-4-6 ) .","coordination is a common syntactic phenomena , appearing in 38.8 % of the sentences in the penn treebank ( ptb ) ( cite-p-24-1-13 ) , and in 60.71 % of the sentences in the genia treebank ( cite-p-24-1-15 ) ." +"in contrast , in cite-p-12-1-11 , we derived and applied the finite-state constraints so as to guarantee a reduction in the worst-case complexity of the context-free parsing pipeline .","in this paper , we extend methods from cite-p-12-1-11 for reducing the worst-case complexity of a context-free parsing pipeline via hard constraints derived from finite-state tagging preprocessing ." +our best system yields a total boost of 44 % to precision and 70 % to recall .,our best system boosts precision by 44 % and recall by 70 % . +"in this paper , our approach describes how to exploit non-local information .","thus , we should incorporate non-local information into the model ." +the berkeley parser achieves an f1 score above 90 on newswire text .,the chinese system currently uses the berkeley parser . +"we also use rouge-s , a text summarization metric , and use the evaluation score as a feature .","to this end we use rouge , a recall-oriented evaluation package for automatic summarization ." +this paper demonstrates the importance of relation equivalence for entity translation .,a key contribution of this paper is using relation temporality for determining relation equivalence . +gildea presents a related method for binarizing rules while keeping the time complexity of parsing as small as possible .,gildea presents a general algorithm to binarize an lcfrs while minimizing a given scoring function . 
+lee et al tried to further improve the translation probabilities based on question-answer pairs by selecting the most important terms to build compact translation models .,compact translation models tried to further improve the translation probabilities based on question-answer pairs by selecting the most important terms to build compact translation models . +"second , beyond deterministic greedy search , beam search and principled dynamic programming strategies have been used to explore more possible hypotheses .","second , beyond deterministic greedy search , principled dynamic programming strategies can be employed to explore more possible hypotheses ." +"we present an endto-end evaluation framework for the wa task , and provide new evaluation metrics and detailed guidelines for evaluating semantic models on the wa task .","we have also provided new evaluation metrics inspired by research in ir , and guidelines for evaluating semantic representation models on the quantitative wa task ." +previous work has discussed the role of urls in information diffusion on twitter .,there also has been extensive work on modeling conversational interactions on twitter . +"for the decoder , we use a recurrent neural network language model , which is widely used in language generation tasks .","to compensate this , we apply a strong recurrent neural network language model ." +hatzivassiloglou and mckeown proposed a method to identify the polarity of adjectives based on conjunctions linking them .,"in previous work , hatzivassiloglou and mckeown propose a method to identify the polarity of adjectives ." +"and consequently key phrases tend to have close semantics to the title , we propose a novel semi-supervised key phrase extraction approach in this paper .","to overcome the deficiencies of these two kinds of methods , we propose a novel semi-supervised key phrase extraction approach in this paper , which explores title phrases as the source of knowledge ." 
+we use an information extraction tool for named entity recognition based on conditional random fields .,we use conditional random fields and memory-based learning as ml methods for word-level qe . +"in this paper , we adopt an n-best rescoring scheme using pitch-accent patterns .","in this paper , we attempt to integrate prosodic information for asr using an n-best rescoring scheme ." +"in this paper , we focus on class-based models of selectional preferences for german verbs .","in this paper , we undertake such a comparative study by looking at selectional preferences of german verbs ." +"all the meetings have been transcribed and annotated with dialog acts , topics , and extractive summaries .","these meetings have been transcribed , and annotated with extractive summaries ." +"in current implementation , only 0 . 79 % and 1 . 11 % of candidates for english person names and location names , respectively have to be proposed .","in name translation , only 0.79 % and 1.11 % of candidates for english person names and location names , respectively , have to be proposed ." +quality of this new convex model is significantly above that of the standard ibm models .,"although it does not beat the hmm , the new convex model improves upon the standard ibm model 2 significantly ." +we address this issue by investigating a model that learns domain-specific input representations .,we investigate this problem by learning domain-specific representations of input sentences using neural network . +for tuning we use mert and the newstest 2012 data provided by the annual workshop on statistical machine translation .,for tuning we use mert and the newstest 2011 data provided by the annual workshop on statistical machine translation . 
+"in this paper , we study the effect of keystroke patterns for deception detection in digital communications , which might be helpful in understanding the psychology of deception .","in this paper , we present the first empirical study that quantitatively measures the deception cues in real-time writing process as manifested in keystroke logs ." +turney and littman proposed to compute pair-wised mutual information between a target word and a set of seed positive and negative words to infer the so of the target word .,turney and littman determined the semantic orientation of a target word t by comparing its association with two seed sets of manually crafted target words . +this result is extended to formalisms beyond cfg .,"finally , we generalize this result to formalisms beyond cfg ." +"in domain-oriented dialogues the interaction with the system , typically modeled as a conversation with a virtual humanlike character , can be the main motivation for the interaction .","in emerging areas , such as domainoriented dialogues , the interaction with the system , typically modelled as a conversation with a virtual anthropomorphic character , can be the main motivation for the interaction ." +semantic role labeling is the task of locating and labeling the semantic arguments to predicates .,semantic role labeling consists of finding the arguments of a predicate and labeling them with semantic roles . +the weights are learned automatically using expectation maximization .,learning is done using a monte carlo variant of the expectation-maximization algorithm . +dsms can be categorized into unstructured that employ a bag-of-words model and structured that employ syntactic relationships between words .,"word-level dsms can be categorized into unstructured , that employ a bag-of-words model , and structured , that employ syntactic relationships between words ." 
+thus is a key issue for the use of annotated corpora in computational and theoretical linguistics .,consistency of corpus annotation is an essential property for the many uses of annotated corpora in computational and theoretical linguistics . +to answer the question is contained only in the story itself .,"by being fictional , the answer typically can be found only in the story itself ." +yao et al applied linear chain conditional random fields with features derived from ted to learn associations between questions and candidate answers .,"yao et al , 2013 ) applied linear chain crfs with features derived from ted to automatically learn associations between questions and candidate answers ." +"which enables us to efficiently guide the annotators , to store all their intermediate results , and to record user – system interaction data .","as a key property of our tool , we store all intermediate annotation results and record the user–system interaction data ." +"for example , tan et al find that the linguistic interaction between an opinion holder and opposing debater are highly predictive of persuasiveness .","for example , tan et al and zhang et al have found that the language used in arguments and the patterns of interaction between debaters are important predictors of persuasiveness ." +"to specify patterns , following we classify words into highfrequency words and content words .","following , we classified words into high-frequency words and content words ." +ikeda et al proposed a method that classifies polarities by learning them within a window around a word .,ikeda et al proposed a machine learning approach to handle sentiment polarity reversal . +"experiment results show that our adaptive scaling algorithm not only achieves a better performance , but also is more stable and more adaptive for training neural networks .",experiments show that our algorithm leads to a more effective and stable training of neural network based detection models . 
+by identifying the subtopics ( which are closely related to the original topic ) in the given body of texts and applying the extended string subsequence kernel to calculate their similarity with the questions .,r we identify the subtopics ( which are closely related to the original topic ) in the given body of texts by using lda and calculate their similarity with the questions by applying essk ( with disambiguated word senses ) . +"we present a new dataset of image caption annotations , conceptual captions , which contains an order of magnitude more images than the mscoco dataset ( cite-p-16-3-17 ) .","first , we present a new dataset of caption annotations ? , conceptual captions ( fig . 1 ) , which has an order of magnitude more images than the coco dataset ." +"since we look at two different languages , we follow the universal pos set proposed by petrov et al which attempts to cover pos tags across all languages .",we use the universal pos tagset proposed by petrov et al which has 12 pos tags that are applicable to both en and hi . +"corpus contains more than 10 , 000 clauses , approximately half of which are generic .","the corpus is balanced with respect to genericity and about 10,000 clauses in size ." +our first model uses the recurrent neural network language model of mikolov et al to project both mental state labels and query tuples into a latent conceptual space .,we construct our representations using a skip-gram model of mikolov et al trained on textual data to obtain linguistic embeddings and a deep convolutional neural network trained on image data to obtain visual embeddings . +we present an estimate of an upper bound of 1 . 75 bits for the entropy of characters .,the cross-entropy of the brown corpus and our model is 1.75 bits per character . 
+"in this work , we propose two novel statistical models to extract and categorize aspect terms automatically given some seeds in the user .","in this paper , we propose two statistical models to solve this seeded problem , which aim to discover exactly what the user wants ." +our results show that this simple but effective model is able to outperform previous work relying on substantially richer prior knowledge .,"experiments show that our model outperforms previous state-of-the-art methods , including those relying on much richer forms of prior knowledge ." +clark et al use the results of one pos tagger on unannotated data to inform the training of another tagger in a semi-supervised setting using a co-training routine with a markov model tagger and a maximum entropy tagger .,"for pos tagging , clark et al used the results of one pos tagger on unannotated data to inform the training of another tagger in a semisupervised setting using a co-training routine with a markov model tagger and a maximum entropy tagger ." +"rather , what we would measure is the tendency to use colours with visible things and not with abstract objects .","thus , this measure is a “ within-topic ” measure ." +"the feature set used in assert is a combination of features described in gildea and jurafsky as well as those introduced in pradhan et al , surdeanu et al , and the syntactic-frame feature proposed in .","they are a combination of features introduced by gildea and jurafsky , ones proposed in , surdeanu et al and the syntactic-frame feature proposed in ." +noisy channel model is dominant in query spelling correction research .,source channel model has been widely used for spelling correction . +we evaluate our cdc approach with the benchmark dataset from the acl-2007 semeval web person search evaluation campaign .,we evaluate our methods using the benchmark test collection from the acl semeval-2007 web person search task . 
+"on , a global crisis struck the financial markets and led to a severe slowdown of the real economy .","from 2007 on , a global crisis struck the financial markets and led to a severe slowdown of the real economy ." +"the application is unusual because it requires real-time synthesis of unedited , spontaneously generated conversational texts .","the application is unusual because it requires text-to-speech synthesis of unedited , spontaneously generated conversational text ." +"in this paper , we discuss methods for automatically creating models of dialog structure using dialog act and task .","in this paper , we address one aspect of this problem – inferring predictive models to structure task-oriented dialogs ." +ji and grishman employ an approach to propagate consistent event arguments across sentences and documents .,"ji and grishman even consider topic-related documents , proposing a cross-document method ." +they describe this method as unsupervised because they only use 14 seeds as paradigm words that define the semantic orientation rather than train the model .,they describe this setting as unsupervised because they only use 14 seeds as paradigm words that define the semantic orientation rather than train the model . +"in this paper , we compare and contrast the usefulness of abstracts and of citation text in automatically generating a technical survey on a given topic .","in this paper , we investigated the usefulness of directly summarizing citation texts ( sentences that cite other papers ) in the automatic creation of technical surveys ." +"soricut and marcu presented an approach to discourse parsing that relied on syntactic information produced by the charniak parser , and used a standard bottom-up chart parsing algorithm with dynamic programming to determine discourse structure .",related work soricut and marcu describe a discourse parser -a system that uses penn treebank syntax to identify intra-sentential discourse relations in the rst treebank . 
+"mimus follows the information state update approach to dialogue management , and supports english , german and spanish , with the possibility of changing language .","mimus follows the information state update approach to dialogue management , and has been developed under the eu–funded talk project ( cite-p-14-3-9 ) ." +"for the ¡° predicted ¡± setting , first , we predicted the subject labels in a similar manner to five-fold cross validation , and we used the predicted labels as features .","7 for the ¡°predicted¡± setting , first , we predicted the subject labels in a similar manner to five-fold cross validation , and we used the predicted labels as features for the episode classifier ." +"in earlier work , we demonstrated that contextual representations consisting of both local and topical components are effective for resolving word senses and can be automatically extracted from sample texts .","leacock , towell and voorhees demonstrated that contextual representations consisting of both local and topical components are effective for resolving word senses and can be automatically extracted from sample texts ." +"we use elmo word representations in this paper , which are learned from the internal states of a deep bidirectional language model , pre-trained on a large text corpus .","we used elmo embeddings , which are generated by training a bi-directional language model on a large corpus of unlabeled data ." +we have presented a cluster-ranking approach that recasts the mention resolution process as the problem of finding the best preceding cluster .,we propose a cluster-ranking approach to coreference resolution that combines the strengths of mention rankers and entity-mention models . +and we use the labels produced by the classifiers to dynamically create semantic features .,"we also explore a one-semantic-class-per-discourse heuristic , and use the classifiers to dynamically create semantic features ." 
+"in the following , we call this task the nsw detection task .","in this paper , we propose a method for nsw detection ." +"in order to extract such patterns automatically , we followed the algorithm given in .","for automated extraction of patterns , we followed the pattern definitions given in ." +"in the figure , the titles are sorted left to right based on the maximum mean story grade among the titles .","in the figure , the titles are sorted left to right based on the maximum mean story grade among the titles in the libitum approach ." +we propose a variation of a probabilistic word-lattice parsing technique that increases efficiency while incurring no loss of language modeling .,we present a technique that improves the efficiency of word-lattice parsing as used in speech recognition language modeling . +"in the following example , “ will go ” is translated as яay \ g ( jaaenge ) , with e \ g ( enge ) .","in the following example , “ will go ” is translated as яay\g ( jaaenge ) , with e\g ( enge ) as the future tense marker :" +"lu et al , 2009 , used a learning-based method for sentiment detection .","lu et al , 2009 , used shallow parsing to identify aspects for short comments ." +"work , we proposed an unsupervised method which detects the presence or absence of cohesive links between the component words of the idiom .",we propose a novel unsupervised approach for distinguishing literal and non-literal use of idiomatic expressions . +"earlier work on event coreference in muc was limited to several scenarios , eg , terrorist attacks , management succession , resignation .",earlier work on event coreference in the muc program was limited to several scenarios such as terrorist attacks and management succession . +"here , a channel is a description by an embedding version .",a channel is a communication medium associated with a particular encoding method . 
+we use the stanford parser to derive the trees .,we employed the stanford parser to produce parse trees . +neg-finder successfully removes the necessity of including manually crafted supervised knowledge .,neg - finder significantly outperforms bootstrapping prior to the domain expert’s negative categories . +we describe our participation in the semeval 2007 web people search task .,we have described our participation in the semeval 2007 web people search task . +"in our method , global model parameters are estimated from training data .",parameters of the model are estimated from training data using gibbs sampling . +"in section 6 and 7 , we present our experimental results and analyses , and finally conclude our work .","in section 6 and 7 , we present our experimental results and analyses , and finally conclude our work in section 8 ." +the most popular approaches are the skip-gram and continuous-bag-of-words models .,"recently , continuous bag-of-words and skip-gram models , which can alleviate the above issue , have received much attention ." +"mean supervectors motivate us to perform discriminant analysis on the unit hypersphere rather than in the euclidean space , which leads to a novel dimensionality reduction technique .","motivated by the directional scattering patterns of the gmm mean supervectors , we perform discriminant analysis on the unit hypersphere rather than in the euclidean space , leading to a novel dimensionality reduction technique “sda” ." +"in this work , we develop neural models in a sequential way , and encode sentence semantics and their relations automatically .","afterwards , gated recurrent neural network is exploited to adaptively encode semantics of sentences and their inherent relations in document representations ."
+"here we follow blitzer et al and set the negative values in w to zero , which yields a sparse representation .","following blitzer et al , we only use positive entries in the pivot predictors weight vectors to compute the svd ." +we propose the dependency-based gated recursive neural network ( dgrnn ) to integrate local features with long distance dependencies .,"to integrate local features with long distance dependencies , we propose a dependency-based gated recursive neural network ." +"many current interpretation models are based on pcfgs , trained on syntactic annotations from the penn treebank .",these parsers make use of the ccgbank that is created by inducing a ccg grammar from the penn treebank . +neelakantan et al proposed an extension of the skip-gram model combined with context clustering to estimate the number of senses for each word as well as learn sense embedding vectors .,neelakantan et al and make use of context-based word sense disambiguation during corpus training to allow on-line learning of multiple senses of a word with modified versions of skip-gram . +"in the next section , we will describe the baseline phrase-based translation model .",we describe the baseline phrase-based translation system and various refinements . +we achieve this loose binding of trees by adapting unordered tree alignment to a probabilistic setting .,"to this end , we adapt a formalism known as unordered tree alignment to a probabilistic setting ." +"we found that , although the ud annotation scheme should be consistent across treebanks , combining training sets for one language .","since the ud annotation scheme is applied on all of the treebanks , this suggests that the training data of the same language from different domains could be combined ." 
+"it is worth pointing out that there is a nombank-specific label in figure 1 , sup , to help introduce the arguments which occur outside the nominal predicate-headed noun phrase .","it is worth noticing that there is a nombank-specific label in figure 1 , sup , in helping introduce the arguments , which occur outside the nominal predicate-headed noun phrase ." +"our named entity recognition module uses the hmm approach of bikel , schwartz , and weischedel , which learns from a tagged corpus of named entities .","our named entity recognition module uses the hmm approach of , which learns from a tagged corpus of named entities ." +distributed representations can provide good directions for generalization in a bootstrapping system .,this distributed representation can inform an inductive bias to generalize in a bootstrapping system . +"for direct translation , we use the scfg decoder cdec 4 and build grammars using its implementation of the suffix array extraction method described in lopez .",we use the scfg decoder cdec 4 and build grammars using its implementation of the suffix array extraction method described in lopez . +"to maximize sentence importance while minimizing redundancy , the selection method uses maximal marginal relevance .","finally , the maximal marginal relevance algorithm is used to perform sentence re-ranking and selection ." +"context , along with social network structure , have been shown to improve sentiment analysis .",the aforementioned studies have shown that incorporating contextual information can improve sentiment analysis . +"in section 5 , we outline the experiments used to evaluate the models .","in section 5 , we outline the experiments used to evaluate the models and present their results ." 
+"this is related to topicmodeling methods such as latent dirichlet allocation , but here the induced topics are tied to a linear discourse structure .","this is similar in spirit to hidden topic models such as latent dirichlet allocation , but rather than assigning a hidden topic to each word , we constrain the topics to yield a linear segmentation of the document ." +"in this paper , we focused on extracting content features to measure the speech proficiency .","in this paper , we explore features representing the accuracy of the content of a spoken response ." +in this paper we concentrate on cases in which the system and user disagree .,"this paper describes the evaluator , concentrating on cases in which the system and user disagree ." +xiong et al focus on ensuring lexical cohesion by reinforcing the choice of lexical items during decoding .,xiong et al integrate various target-side lexical cohesion devices into document-level machine translation . +"given a word , the task of finding its semantic orientation is to determine if the word is more probable to be used in positive or negative connotation .","given a word , the task of finding the semantic orientation of the word is to identify if the word is more likely to be used in positive or negative sense ." +"yu et al proposed a factor-based compositional embedding model by deriving sentence-level and substructure embeddings from word embeddings , utilizing dependency trees and named entities .","yu et al proposed the factor-based compositional embedding model , which uses syntactic dependency trees together with sentence-level embeddings ." +phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model .,"in pursuit of better translation , phrase-based models have significantly improved the quality over classical word-based models ." 
+"chan and ng , 2007 ) introduce a system very similar to that of , but as applied to hierarchical phrase-based translation .","carpuat and wu , 2007 ) report an improvement in translation quality by incorporating a wsd system directly in a phrase-based translation system ." +dipre is a bootstrapping-based system that used a pattern matching system as classifier to exploit the duality between sets of patterns and relations .,brin proposed a bootstrapping-based method on the top of a self-developed pattern matching-based classifier to exploit the duality between patterns and relations . +"after imitation learning with user teaching improves the model performance further , not only on the dialogue policy .",applying reinforcement learning with user feedback after the imitation learning stage further improves the agent ’ s capability in successfully completing a task . +"the top-10k abstracts matching most search terms were selected for further processing with corenlp , including tokenisation , sentence splitting , pos tagging , lemmatisation and parsing .","the scripts were further post-processed with the stanford corenlp pipeline to perform tagging , parsing , named entity recognition and coreference resolution ." +"miwa et al proposed a hybrid kernel , which combines the all-paths graph kernel , the bag-of-words kernel , and the subset tree kernel .","miwa et al also proposed a hybrid kernel 5 , which is a composition of all-dependency-paths kernel , bag-of-words kernel and sst kernel ." +"on the other hand , math-w-6-1-0-93 and math-w-6-1-0-96 both happen in the interval math-w-6-1-0-103 .","on the other hand , math-w-6-1-0-93 and math-w-6-1-0-96 both happen in the interval math-w-6-1-0-103 but they form an overlap relation ." +"in section 3 , we describe each processing step of our approach .","in section 3 , we describe each processing step of our approach in detail ." 
+"in this paper , we propose an information retrieval-based method for sense .",we propose a knowledge-lean method that relies on word association and requires no syntactic annotation . +"named entity ( ne ) tagging is the task of recognizing and classifying phrases into one of many semantic classes such as persons , organizations and locations .",named entity ( ne ) tagging is a fundamental task for natural language processing and information extraction . +one of the main advantages of this approach is that it does not depend on multilingual resources .,one of the main advantages of this approach is that it does not depend on bilingual or multilingual resources . +chan and ng proposed maxsim that is based on a bipartite graph matching algorithm and assigns different weights to matches .,chan and ng proposed a machine translation evaluation metric based on the optimal algorithm for bipartite graph matching also known as the assignment problem . +"we use the wikipedia revision toolkit , an enhancement of the java wikipedia library , to gain access to the revision history of each article .",we access the wikipedia revision data using the freely available wikipedia revision toolkit together with the jwpl wikipedia api . +"to test whether a better set of initial parameter estimates can improve model 1 alignment accuracy , we use a heuristic model based on the loglikelihood-ratio statistic recommended by dunning .","to identify these terms , we use the log-likelihood statistic suggested by dunning and first used in summarization by lin and hovy ." +"davidov et al , 2010 ) used 50 hashtags and 15 emoticons as noisy labels to create a dataset for twitter sentiment classification .",davidov et al used 50 hashtags and 15 emoticons as sentiment labels for classification to allow diverse sentiment types for the tweet . 
+evaluation show that both approaches are useful for determining the similarity in meaning between pairs of sentences .,the task is concerned with determining the degree of semantic equivalence between a pair of sentences . +we present a single model that accounts for referent resolution of deictic and anaphoric expressions .,"in this paper , however , we focus on the use of the context model to resolve deictic and anaphoric expressions keyed in by the user ." +the dependency-to-string model proposed by translates a source dependency tree by applying head-dependents translation rule at each head node in a recursive way .,"the dependency-to-string model takes head-dependents relations as the elementary structures of dependency trees , and represents the translation rules with the source side as hdrs and the target side as string ." +"more sophisticated metrics , such as the rte metric , use higher level syntactic or even semantic analysis to determine the quality of the translation .","more sophisticated metrics , such as the rte metric , use higher level syntactic or semantic analysis to determine the grammaticality of the output ." +alignment of medical ontologies facilitates the integration of medical knowledge that is relevant to medical .,"consequently , semantic integration of these different but related types of medical knowledge that is present in disparate domain ontologies becomes necessary ." +"for each math-w-4-7-1-3 , we have a parameter math-w-4-7-1-11 , which is the probability of math-w-4-7-1-19 .","finally , we can write math-w-15-1-1-133 , where math-w-15-1-1-162 is a r 1×r vector that can again be computed offline ." +"as representative studies , yu and siskind propose a method that learns representations of word meanings from short video clips paired with sentences .","recently , yu and siskind presented a system that induces word-object mappings from features extracted from short videos paired with sentences ." 
+"to create a class vocabulary , the morphs were embedded in a 300-dimensional space using word2vec .","word embedding features were derived using word2vec , representing each word as a 300-d vector ." +"according to , dependency representations have the best phrasal cohesion properties across languages .","according to fox , dependency representations have the best inter-lingual phrasal cohesion properties ." +"in the final two articles , by piotrovskij and marcuk , the authors strongly advocate what they consider to be practical approaches to mt , while dismissing much of the work cited in the first three articles .","in the final two articles , by piotrovskij and marčuk , the authors strongly advocate what they consider to be practical approaches to mt , while dismissing much of the work cited in the first three articles as misguided and counterproductive ." +"multi-task learning has resulted in successful systems for various nlp tasks , especially in cross-lingual settings .",the use of unsupervised word embeddings in various natural language processing tasks has received much attention . +"lstm , and cnn are the two most popular neural network architectures in this regard .","graves , 2012 , and convolutional nns are the two most popular neural network architectures in this regard ." +"in our model , we use negative sampling discussed in to speed up the computation .","here , we use negative sampling as a speed-up technique ." +"the algorithm used in this research is an extension of the treeminer algorithm , modified to extract only closed subtrees .","this software is an implementation of the algorithm presented by , which extracts frequent ordered subtrees from a set of ordered trees ." +"the data contain around 11,800 sentences from movie reviews that were originally collected by pang and lee .","the corpus is based on the dataset introduced by pang and lee and consists of 11,855 single sentences extracted from movie reviews ." 
+"tasks require complex forms of inference , making it difficult to pinpoint the information .",the complexity of the tasks makes it however difficult to infer what kind of information is present in the representations . +dependency parsers have been enhanced by the use of neural networks and embedding vectors .,"recently , progress in dependency parsing has been made by introducing non-linear , neuralnetwork based models ." +in this paper we describe our participation in semeval-2015 task 12 .,in this paper we have described our participation in semeval-2015 task 12 ( absa ) . +"with a set of risk-labeled sentences , this proposed system applies fasttext to automatically identify high-risk sentences in those reports .","due to the superior performance of fasttext , the system highlights high risk sentences in those reports via using fasttext ." +"in this paper , we examine topological field parsing , a shallow form of parsing which identifies the major sections of a sentence in relation to the clausal main verb and subordinating heads .","in this paper , we examine topological field parsing , a shallow form of parsing which identifies the major sections of a sentence in relation to the clausal main verb and the subordinating heads ." +gupta and ji used a similar approach to recover implicit time information for events .,gupta and ji employ cross-event information to extract implicit time information . +"one of the most useful neural network techniques for nlp is the word embedding , which learns vector representations of words .",one of the touted advantages of neural network language models is their ability to model sparse data . +"lexical chaining has been investigated in many research tasks such as text segmentation , word sense disambiguation , and text summarisation .",lexical cohesion analysis has been used in such nlp applications as determining the structure of text and automatic text summarization . 
+smith et al demonstrate that language inter-linked article pairs in wikipedia offer valuable comparable data .,"for example , smith et al mine parallel sentences from comparable documents in wikipedia , demonstrating substantial gains on open domain translation ." +"specifically , we looked for a categorical structure within the communities by comparing words to the hypernym tree in wordnet .",we used wordnet as a source of synonyms and hypernyms for linking english words in the word relatedness graph . +"luong and manning , 2016 ) proposes a hybrid architecture for nmt that translates mostly at the word level and consults the character components for rare words when necessary .","luong and manning , 2016 ) presented a neural machine translation system using character rnns only for oov words , dropping the rnn output into a conventional word-based nmt system ." +"a0 is commonly mapped onto subject ( sbj ) , whereas a1 is often realized as object ( obj ) .","for example , a0 is commonly mapped onto subject ( sbj ) , whereas a1 is often realized as object ( obj ) ." +"to address this problem , we have used support vector machines , which are known to perform favourably on text classification problems .","our model consists of a linear classifier based on support vector machines , which have proved to provide competitive results in text categorization since their conception ." +galley et al describe an algorithm for inducing a string-to-tree grammar using a parallel corpus with syntax trees on target side .,galley et al proposes a method for extracting tree transducer rules from a parallel corpus . +"in this work , we detailed the gaokao history multiple choice questions ( gkhmc ) .","in this work , we focus on gaokao history multiple choice questions which is denoted as gkhmc ." 
+companion learning ( cl ) framework was proposed to integrate rule-based policy and rl-based policy .,this paper has proposed a companion learning framework to unify rule-based policy and rlbased policy . +"on the fly , it can adapt to the situation and special needs of the user .",it adapts to the user ’ s preferences and situation . +"we propose a general framework capable of enhancing various types of neural networks ( e . g . , cnns and rnns ) .",our framework is general and applicable to various types of neural architectures . +"erk and padó proposed a structured vector space model in which a word is represented by multiple vectors , capturing its meaning but also the selectional restrictions it has for the different arguments .","erk and padó introduce the concept of a structured vector space in which each word is associated with a set of selectional preference vectors corresponding to different syntactic dependencies ." +"in ( cite-p-23-3-10 ) , the authors proposed a method that tackles online multi-task learning .","in ( cite-p-23-3-10 ) , the authors proposed a method that tackles online multi-task learning in the lifelong learning setting ." +"in this paper , we proposed two algorithms for automatically ontologizing binary semantic relations into wordnet .","in this paper , we propose two algorithms for automatically ontologizing ( attaching ) semantic relations into wordnet ." +"duh and kirchhoff adopt a minimally supervised approach that only requires raw text data from several das , as well as a msa morphological analyzer .","duh and kirchhoff adopted a minimally supervised approach that requires raw data from several das , and an msa morphological analyzer ." +"thus , zesch and gurevych used a semi-automatic process to create word pairs from domain-specific corpora .",zesch and gurevych created a third dataset from domain-specific corpora using a semi-automatic process .
+"for the laptops domain , we used one cnn classifier that outputs probability scores .","for the laptops domain , we used only one cnn classifier that predicts the aspects based on a probability threshold ." +"acquirer detects unknown morphemes , enumerates candidates and selects the best candidates by comparing multiple examples kept in the storage .","every time a sentence is analyzed , it detects unknown morphemes , enumerates candidates and selects the best candidates by comparing multiple examples kept in the storage ." +"rangrej et al compared k-means , singular value decomposition , and affinity propagation for tweets , finding affinity propagation the most effective , using tf-idf with cosine similarity or jaccard for a similarity measure .","rangrej et al compared the performance of three document clustering techniques on twitter data , and found that graph-based approach using affinity propagation performs best in clustering tweets ." +"to translate , we firstly use a tm system to retrieve the most similar ¡® example ¡¯ source sentences together with their translations .",we constrain the translation of an input sentence using the most similar ¡®translation example¡¯ retrieved from the tm . +tomanek et al utilised eye-tracking data to evaluate the degree of difficulty in annotating named entities .,tomanek et al utilised eye-tracking data to evaluate a degree of difficulty in annotating named entities . +shoufan and al-ameri and al-ayyoub et al present a survey on nlp and deep learning methods for processing arabic dialectal data with an overview on arabic did of text and speech .,shoufan and alameri and al-ayyoub et al present a survey on nlp and deep learning methods for processing arabic dialectal data with an overview on arabic did of text and speech . +ambiguity is the task of building up multiple alternative linguistic structures for a single input .,ambiguity is a common feature of weps and wsd . 
+the cass partial parsing system makes use of a cascade of fsts .,the system employs simple partial parsing techniques as described by abney . +kudo and matsumoto applied the cascaded chunking algorithm to japanese dependency parsing .,it was first used for unlabeled dependency parsing by kudo and matsumoto and yamada and matsumoto . +we set up a web experiment using the nltk package to collect coherence ratings for implicit and explicit arguments .,we set up a web experiment using the evaluation toolkit by belz and kow to collect ratings of local coherence for implicit and explicit arguments . +dbpedia spotlight is a system that finds mentions of dbpedia resources in a textual document .,dbpedia spotlight is a tool for automatically annotating mentions of dbpedia resources in text . +"in creating the summary , instantiating the content model , we identify independent categories and dependent categories , and we argue that in order to preserve the cohesion of the text .","in creating the summary , instantiating the content model , we identify independent categories and dependent categories , and we argue that in order to preserve the cohesion of the text the independent categories should be determined first ( see section 3.2 ) ." +opinion lexicons have been obtained for english language and also for spanish language .,opinion lexicons have been obtained for english and also for spanish . +that uses a much simpler set of model parameters than similar phrase-based models .,the model is much simpler than other phrase-based statistical models . +classification is important for discourse mode identification .,it may be useful for discourse relation projection and discourse parsing . +which is due to the additional use of a cost function that boosts similarity of translations to human reference translations .,this is due to the possibility to boost similarity to human reference translations by the additional use of a cost function in our approach . 
+"and finally , the baselines reported for resnik ’ s test set were higher than those for the all-words task .","third and finally , the baselines reported for resnik ’ s test set were higher than those for the all-words task ." +"in this paper , we will investigate the case of the verb group construction and attempt to reproduce the study by nilsson et al on ud treebanks to find out whether or not the alternative representation is useful for parsing with ud .","in this paper , we have attempted to reproduce a study by nilsson et al that has shown that making auxiliaries heads in verb groups improves parsing but failed to show that those results port to parsing with universal dependencies ." +in this paper proposes a simple and effective use of machine learning .,"this paper presents such a method , exploiting machine learning in an innovative way ." +"however , as noted by lavie et al , liu et al , and chiang , the integration of syntactic information on both sides tends to decrease translation quality because the systems become too restrictive .","for example , lavie et al , liu et al , and chiang noted that translation quality tends to decrease in tree-to-tree systems because the rules become too restrictive ." +"it performs well on a number of natural language processing problems , including text categorization and word sense disambiguation .","it also performs well on a number of natural language processing problems , including text categorization , sebastiani et al , and word sense disambiguation , ." +"in both domains , and that the contextual role knowledge improves performance , especially on pronouns .","experiments in two domains showed that the contextual role knowledge improved coreference performance , especially on pronouns ." 
+"asahara et al extended hmms so as to incorporate 1 ) position-wise grouping , 2 ) word-level statis-tics , and 3 ) smoothing of word and pos level statistics .","asahara et al extended the original hmms by 1 ) position-wise grouping of pos tags , 2 ) word-level statistics , and 3 ) smoothing of word and pos level statistics ." +the front-end is a web application that makes nlp processes available in a user-friendly way with responsive and interactive visualizations .,"the back-end is a modular , expandable , scalable and flexible architecture with parallel and distributed processing capabilities ." +we train these two losses using a multi-task learning paradigm .,"in order to do this , we adopt a multi-task learning approach ." +"given a sentence pair and a corresponding wordto-word alignment , phrases are extracted following the criterion in .","given a sentence pair and a corresponding word alignment , phrases are extracted following the criterion in och and ney ." +"and , to implement it , we introduce a novel smooth version of the multi-focus attention function , which generalizes softmax .","we develop a novel smooth version of the multi-focus attention function , which generalizes the single focus softmax-function ." +for more details see the task description paper .,for more details see the overview paper by the organizers . +och proposed to apply minimum error rate training to optimize the different feature weights in the log-linear model combination on a small development data set .,och developed a training procedure that incorporates various mt evaluation criteria in the training procedure of log-linear mt models . +boyd-graber et al integrate a model of random walks on the wordnet graph into an lda topic model to build an unsupervised word sense disambiguation system .,boyd-graber et al integrate a topic model with wordnet and use it to carry out disambiguation and learn topics simultaneously . 
+multiword expressions ( mwes ) still pose serious issues for a variety of natural language processing tasks .,non-compositional multiword expressions ( mwes ) still pose serious issues for a variety of natural language processing ( nlp ) tasks . +the objective for web search used in this paper follows the pair-wise learning-to-rank paradigm outlined in .,the objective used in this paper follows the pair-wise learning-to-rank paradigm outlined in . +"in this paper , we propose a semi-supervised boosting method to improve statistical word alignment .","in addition , we improve the word alignment results by combining the results of the two semi-supervised boosting methods ." +"as data for developing and testing our system for why-qa , we use the webclopedia question set by .","for development and testing purposes , we use the webclopedia question set by ." +such strategy has been adopted in some multi-document summarization methods .,it has been widely adopted in the generic summarization task . +"the insensitivity of bleu and nist to perfectly legitimate syntactic and lexical variation has been raised , among others , in callison-burch et al , but the criticism is widespread .","the insensitivity of bleu and nist to perfectly legitimate variation has been raised , among others , in , but the criticism is widespread ." +we utilize multimodal features and domain-independent discourse features to achieve robust topic identification .,our main claim is that we utilize visual and audio information to achieve robust topic identification . +we report results in terms of case-insensitive 4-gram bleu scores .,"in addition , we report the bleu-4 scores that were computed on the word level ." +"han and baldwin begin with a set of string similarity metrics , and then apply dependency parsing to identify contextuallysimilar words .","han and baldwin use a classifier to detect illformed words , and then generate correction candidates based on morphophonemic similarity ." 
+"in this work , we present wikikreator , a system that is capable of generating content automatically .","to address the above-mentioned issues , we present wikikreator – a system that can automatically generate content for wikipedia stubs ." +"as the standard wsd does , we incorporate word senses that are automatically learned from data into our sense-based translation model .",we incorporate these learned word senses as translation evidences into maximum entropy classifiers which form the foundation of the proposed sense-based translation model . +"we apply our system to the latest version of the xtag english grammar , which is a large-scale fb-ltag grammar .","we constructed a type signature for the xtag english grammar , an existing broad-coverage grammar of english ." +we then present svm-based classifiers that use n-gram and stylistic features .,we then present svm-based classifiers which use two sets of features : n-gram and stylistic features . +"in this paper , we introduce a flexible notion of paths that describe chains of words .","in this paper , we explore a flexible application of dependency paths that overcomes this difficulty ." +grnn uses full binary tree as the recursive topological structure .,"unlike grconv and adasent , our model uses full binary tree as the topological structure ." +"that use the errorful re-decoded labels , partial-label learning provides a direct means to learn the encoded knowledge .","this new approach , without the need of using constrained re-decoding as a middle step , provides a direct means to learn the knowledge in the partial labels ." +developments of this approach have been proposed which improve on cluster quality and retrieval performance .,developments of this approach have been proposed that improve on cluster quality and retrieval performance . 
+"for data selection , we observe that our methods are able to select high-quality domain-relevant sentence pairs and improve the translation .","for this , an effective approach is to automatically select and expand domain-specific sentence pairs from large scale general-domain parallel corpus ." +xiong et al extend this approach by allowing gaps in rules .,xiong et al extend the treelet approach to allow dependency fragments with gaps . +that does not require labeling statements with logical forms .,the method does not require labeling sentences with logical forms . +hank and church pointed out the usefulness of pointwise mutual information for identifying collocations in lexicography .,hank and church pointed out the usefulness of mutual information for identifying monolingual collocations in lexicography . +we investigate the correlation between rouge and human evaluation of extractive meeting summaries .,"in this paper , we carefully examine how well the rouge scores correlate with human evaluation for extractive meeting summarization ." +"if arbitrary word-reorderings are allowed , the search problem is np-hard .","if arbitrary word-reorderings are permitted , the search problem is np-hard ." +"we conduct experiments using stanford natural language inference corpus , one of the most famous dataset for the nli task .","to evaluate the performance of our model , we conducted our experiments on stanford natural language inference corpus ." +mohammad and hirst showed that these distributional word-distance measures perform poorly when compared with wordnet-based concept-distance measures .,mohammad and hirst show that their approach performs better than other strictly corpusbased approaches that they experimented with . 
+"models can easily incorporate a rich set of linguistic features , and automatically learn their weights , eliminating the need for ad-hoc parameter tuning .","the models admit a rich set of linguistic features , and are trained to learn feature weights automatically by optimizing a regression objective ." +two attempts to overcome this drawback are presented in nerbonne and nerbonne .,two attempts to overcome this withdraw are presented in nerbonne and nerbonne . +"on the sentence completion challenge ( cite-p-17-5-5 ) , our model achieves an impressive 69 . 2 % accuracy , surpassing the previous state of the art .",our unidirectional-rm sets a new state of the art for the sentence completion challenge with 69.2 % accuracy . +we apply a state-of-the-art language-independent cross-lingual entity linking approach to link names from chinese to an english kb .,we apply a state-of-the-art language-independent entity linker to link each transliteration hypothesis to an english kb . +"as a refinement ( relabeling ) model , it achieves the best las .","as a refinement ( relabeling ) model , it achieves the best las on 5 out of 7 datasets ." +this corpus has been converted into an xml format conforming to the standards of the text encoding initiative .,the corpus has been converted into an xml format conforming to tei standards . +we then used the python nltk toolkit to tokenise the words .,we deployed these models using classifiers in the nltk python package . +"in this paper , we show that it is possible to take advantage of the properties of fiction .","in this paper , we have presented litner , an ner system targeted specifically at fiction ." +"for the twitter data set , we obtain a median error of 479 km , which improves on the 494 km error .","for twitter , we obtain a median error of 479 km and mean error of 967 km ." 
+where multiple filters are operated on the matrix to generate different feature maps .,different filters of the same 3 × 3 shape are operated over the input matrix to output feature map tensors . +we use svm light to learn a linear-kernel classifier on pairwise examples in the training set .,"we use support vector machines , a maximum-margin classifier that realizes a linear discriminative model ." +transfer learning usually requires linguistic knowledge and resources about the relation between the source language and the target language .,"given no linguistic resources between the source language and the target language , transfer learning methods can be utilized instead ." +"for example , these networks could help automate the construction of a hypernym taxonomy with weighted branches , potentially augmenting resources like wordnet .","for example , resources such as word-net may be used to aid in the classification of geo-political entities ." +"sugiyama et al extract various features from the sentences based on the verbs and nouns in the sentences such as the verbal forms , and the part-of-speech tags of the 20 words surrounding the verb .","sugiyama et al extract features from the sentences based on the verbs and nouns in the sentences such as the verbal forms , and the part of speech tags of the 20 words surrounding the verb ." +"in this paper , we focus on translating into mrls and issues associated with word formation .","in this paper , we focus on translating into mrls and issues associated with word formation on the target side ." +sentiment classification remains a significant challenge : how to encode the intrinsic ( semantic or syntactic ) relations between sentences in the semantic meaning of document .,document level sentiment classification remains a challenge : encoding the intrinsic relations between sentences in the semantic meaning of a document .
+"in this study , we analyzed the relationship between an individual ’ s traits and his / her aspect .","in this study , we analyzed the relationship between an individual ’ s traits and his/her aspect framing decisions ." +"in this paper , we present a computational approach to the generation of spatial locative expressions .",this paper presents an approach to incrementally generating locative expressions . +"in the decoding stage , the best first strategy is used to predict the bridging links .","in the decoding stage , the best first strategy is used to predict bridging links ." +image is selected using a graph-based method that makes use of both textual and visual information .,images are ranked using a graph-based method that makes use of both textual and visual information . +we train distributional similarity models with word2vec for the source and target side separately .,we create mwes with word2vec skipgram 1 and estimate w with scikit-learn . +a common criterion to optimize the coefficients of the log-linear combination of feature functions is to maximize the bleu score on a development set .,it is nowadays common practice to optimize the coefficients of the log-linear combination of feature functions by maximizing the bleu score on the development data . +the syntactically augmented translation model proposed by zollmann and venugopal uses syntactic categories extracted from target-side parse trees to augment nonterminals in hierarchical rules .,the syntax-augmented translation model of zollmann and venugopal annotates nonterminals in hierarchical rules with thousands of extended syntactic categories in order to capture the syntactic variations of phrase pairs . +we tokenize the sentences and perform truecasing with the moses scripts .,we tokenize the english sentences and perform truecasing with the moses scripts . 
+"in this paper , we showed how to lift structured prediction under bandit feedback from linear models to non-linear sequenceto-sequence learning .","in this paper , we focus on the application of machine translation via neural sequenceto-sequence learning ." +"this produces multiple paths between nodes , allowing the sash to shape itself to the data set .","this produces multiple paths between terms , allowing sash to shape itself to the data set ." +our method is based on the bag-of-words model in conjunction with word embeddings .,our approach is based on the classic bag-of-words model in conjunction with word embeddings . +wang et al show how to detect a known domain at test time in order to configure a generic translation system with domain-specific feature weights .,wang et al used a single translation model and generalized a single-domain decoder to deal with different domains . +krulwich and burkey use heuristics to extract keyphrases from a document .,krulwich and burkey use heuristics based on syntactic clues to extract keyphrases from a document . +"in this paper , we present an efficient query selection algorithm for the retrieval of web text data .",the current paper describes a new method for query selection and its applications in lm augmentation and adaptation using web data . +"zeng et al proposed a deep convolutional neural network with softmax classification , extracting lexical and sentence level features .",zeng et al exploit a convolutional neural network to extract lexical and sentence level features for relation classification . +resnik measures the similarity of two concepts by calculating the information content of their least common subsumer .,resnik measures the similarity between two concepts by finding the ic of the lcs of the two concepts . +we conduct extensive experiments and verify the effectiveness of incorporating word sememes for improved wrl .,we will explore the effectiveness of sememe information for wrl in other languages . 
+"socher et al propose matrix-vector recursive neural network , where instead of using only vectors for words , an additional matrix for each word is used to capture operator semantics in language .","socher et al assign a vector and a matrix to each word for the purpose of semantic composition , and build recursive neural network along constituency tree ." +they employ a lstm model based on the pretrained glove word embeddings from stanford-nlp group .,these embeddings were used together with pretrained glove and fasttext embeddings . +"hindle and rooth mention the interaction between the structural and the semantic factors in the disambiguation of a pp , indicating that verb adjuncts are the most difficult .","hindle and rooth mention the interaction between the structural and the semantic factors in the disambiguation of a pp , indicating that verb complements are the most difficult ." +phelan et al used tweets to recommend news articles based on user preferences .,"phelan et al , 2011 ) used tweets to recommend news articles based on user preferences ." +the semantic orientation of the opinion expression is identified .,the opinion words ( oword ) and their semantic orientations ( otype ) are identified . +"in this paper we set such problem as an application-oriented , crosslingual variant of the textual entailment recognition task .","similar to earlier work , we set this problem as a variant of the textual entailment recognition task ." +"in future work , we plan to explore more fully the semantics of modification , and to pursue the addition of a type system to the logic .","in future work , we plan to explore more fully the semantics of modification , and to pursue the addition of a type system to the logic to treat quantifiers analogously to cite-p-9-4-3 , cite-p-9-4-4 ." 
+"following prior work , we use gap score for evaluation in the subtask , which is a variant of map .","following previous work , we use generalized average precision to compare the ranking predicted by our model with the gold standard ." +conditional random fields are a probabilistic framework for labeling structured data and model p λ .,conditional random fields are conditional models in the exponential family . +"studies have shown that the three most important , largely independent , dimensions of word meaning are valence ( positiveness – negativeness / pleasure – displeasure ) , arousal ( active – passive ) , and dominance ( dominant – submissive ) ( cite-p-19-3-15 , cite-p-19-3-19 , cite-p-19-3-20 ) .","cite-p-19-3-19 , cite-p-19-3-20 showed through similar analyses of emotion words that the three primary independent dimensions of emotions are valence or pleasure ( positiveness– negativeness/pleasure–displeasure ) , arousal ( active–passive ) , and dominance ( dominant– submissive ) ." +"in modeling word similarity , we propose an alternative , pattern-based , approach to word representation .",we introduce a symmetric pattern based approach to word representation which is particularly suitable for capturing word similarity . +we propose a method that learns separate distributed representations for each domain in which a word occurs .,"on the other hand , our proposed method learns a single representation for a particular word for each domain in which it occurs ." +araki et al evaluated their model using blanc evaluation metric whereas glavaš and šnajder evaluated their model using the standard f 1 evaluation metric .,"araki et al , 2014 ) evaluated their model using blanc evaluation metric whereas evaluated their model using the standard f 1 evaluation metric ."
+"in this article , we have presented an approach to temporal information extraction that represents the timeline of a story .","in this article , we propose an approach to temporal information extraction that identifies a single connected timeline for a text ." +bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history .,chen et al show that n-gram model outperforms a popular feed-forward language model on a one billion word benchmark . +performance of the unsupervised self-trained approach is better than the performance of other unsupervised learning systems .,we show that this unsupervised system has better core performance than other learning approaches that do not use manually labeled data . +we call a sequence of words which are in lexical cohesion relation with each other a lexical chain like .,here we call a sequence of words which have lexical cohesion relation with each other a lexical chain like . +"to test the linguistic qualities , we did not use an automatic evaluation because found that the ordering of content within the summaries is an aspect which is not evaluated by rouge .","to evaluate coherence , we did not use the rouge metric because from a manual analysis found that the ordering of content within the summaries is an aspect which is not evaluated by rouge ." +we present a cross-language faq retrieval system that handles the inherent noise in source language to retrieve faqs in a target language .,our method handles noisy representation of questions in a source language to retrieve answers across target languages . +"in all the experiments described in this article we use snow as the learning environment , with winnow as the update rule .","in all the experiments described in this paper we use snow as the learning environment , with winnow as the update rule ."
+"while compound splitting is a well-studied task , compound merging has not received as much attention in the past .",compound splitting is a well defined computational linguistics task . +"as we will show later , recall is well below 50 % for all named entity types .","as we will show later , recall is well below 50 % for all named entity types on the new test sets ." +"to this end , we replicated the np-chunker described in sha and pereira and trained it as either an np-chunker or with the tagset extended to classify all 11 phrase types included in the conll-2000 task .","to test our crf implementation , following sha and pereira , we performed an np chunking task using the conll-2000 text chunking task data ." +"greedy-loglin closely resembles the learning model of lapata , as both are firstorder markovian and use the same inference procedure .","greedy-loglin closely resembles the learning model of lapata , except that it is a discriminative log-linear model , rather of a generative markovian model ." +the support vector machine based machine learning approach works on discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes .,the svm is based on discriminative approach and makes use of both positive and negative examples to learn the distinction between the two classes . +"similarly , choi and cardie successfully used a propbankbased semantic role labeler for opinion holder extraction .","similarly , choi et al successfully used a propbank-based semantic role labeler for opinion holder extraction ." +results show that the vcu systems obtained a higher score than the random baseline .,the first-order measures obtained a higher wu & palmer score than the second-order measure on the test data . 
+"our parser shows higher accuracy than zhang and nivre , which is the current state-of-the-art transition-based parser that uses beam search .",our improved cube-pruned parser represents a significant improvement over the feature-rich transition-based parser of zhang and nivre with a large beam size . +"we present experiments using the conll-2009 shared task datasets , for the verbal predicates of english .",we present experiments using our syntacticsemantic parser on the conll-2009 shared task english benchmark . +our machine translation system is a string-todependency hierarchical decoder based on and .,our baseline translation system is based on a stringto-dependency translation model similar to the implementation in . +another popular sentiment lexicon is the mpqa subjectivity lexicon which was constructed by manually annotating the subjective expressions in the mpqa corpus .,"another example is the mpqa subjectivity lexicon , which was built manually by annotating the subjective expressions in the mpqa corpus ." +"kim et al adopt walk-weighted subsequence kernel based on dependency paths to explore various substructures such as e-walks , partial match , and non-contiguous paths .","kim et al proposed walk-weighted subsequence kernel using e-walks , partial matches , non-contiguous paths , and different weights for different sub-structures ." +for all languages we evaluated translation output using case-insensitive ibm bleu .,we evaluated translation output using case-insensitive ibm bleu . +"in the work presented here , we explore a transfer learning scheme , whereby we train character-level recurrent neural taggers to predict morphological taggings for high-resource languages and low-resource languages .","in this work , we focus on transfer learning¡ªwe train a recurrent neural tagger for a low-resource language jointly with a tagger for a related high-resource language ." 
+"in this paper , we have addressed two types of data shift common in slu applications .",we are interested in addressing two types of data shift common in slu applications . +"although neither source-language nor target-language analysis was able to circumvent problems in mt , each approach had advantages relative to the other .","neither source-language nor target-language analysis was able to circumvent problems in mt , although each approach had advantages relative to the other ." +"in section 2 , we describe the details of the syntactic decision tree .","in section 2 , we describe the details of the syntactic decision tree lm ." +and their best model achieves coverage of 90 . 56 % and a bleu score of 0 . 7723 on penn-ii wsj section 23 sentences of length ≤20 .,"we currently achieve coverage of 95.26 % , a bleu score of 0.7227 and string accuracy of 0.7476 on the penn-ii wsj section 23 sentences of length ≤20 ." +entrainment is correlated with positive social characteristics and turn-taking features .,entrainment in many of these dimensions has also been associated with measures of dialogue success . +"metonymy is a figure of speech , in which one expression is used to refer to the standard referent of a related one .",metonymy is typically defined as a figure of speech in which a speaker uses one entity to refer to another that is related to it . +we also used the version of string-edit distance described by bangalore et al which normalises for length .,we also used the version of string-edit distance of bangalore et al which normalises for length . +"rather than assigning lexical heads to punctuations , we treat punctuations as properties of their neighbouring words , used as features .","in this method , punctuations are not associated with lexical heads , but are treated as properties of their neighbouring words ." 
+"in a sentence , such as agent and patient , we can derive various centering theory-motivated features in tracking the continuity or shift of the local discourse focus , thus allowing us to include document-level event .",this justifies our attempt to model the continuity or shift of the discourse focus in pronoun resolution via centering-motivated features from the semantic perspective . +"and show that , using linguistic constraints between the tasks and minimal joint learning , we can improve the performance of both tasks .","by enforcing consistency constraints between their predictions , we show improvements in the performance of both tasks without retraining the individual models ." +"in this article , we are also concerned with improving tagging efficiency .","in this article , we adopt their tagger for experiments ." +the attentional structure of a discourse can be modeled as a stack of focus spaces that contains the individuals salient at each point in a discourse .,the linguistic structure of a discourse is composed of utterances that exhibit meaningful hierarchical relationships . +density operators are used in quantum theory .,quantum states are expressed as density operators rather than kets . +"in a different vein , cite-p-19-1-12 introduced three unsupervised methods drawn from visual properties of images .","in a different vein , cite-p-19-1-12 introduced three unsupervised methods drawn from visual properties of images to determine a concept ’ s generality in hypernymy tasks ." +in our approach is to reduce content selection and surface realization into a common parsing problem .,central to our approach is the encoding of generation as a parsing problem . +"previous work has focused on congressional debates , company-internal discussions , and debates in online forums .","however , most work focuses on congressional debates or debates in online forums ." 
+the underlying model is a recurrent network that learns how far to jump after reading a few words of the input text .,the underlying model is a rnn encoder-decoder that explores possible binary tree structures and a reward mechanism that encourages structures that improve performances on downstream tasks . +the thesaurus 4 used in this work was automatically constructed by lin .,the thesaurus was acquired using the method described by lin . +"in the future , we would like to explore additional types of rules such as seed rules , which would assign tuples complying with the “ seed ” .","in the future , we would like to explore additional types of rules such as seed rules , which would assign tuples complying with the “seed” information to distinct relations ." +we describe a method to automatically enrich the output of parsers with information that is present in existing treebanks .,we describe a method for enriching the output of a parser with information available in a corpus . +optimization requires computing the gradient of entropy or risk .,"in these settings , we must compute the gradient of entropy or risk ." +"for commercial engines such as yahoo ! and google , creating lists of named entities found on the web is critical .",creating lists of named entities is a critical problem at commercial engines such as yahoo ! and google . +efficient decoding can be performed with eisner algorithm in otime and ospace .,eisner algorithm can be modified trivially for secondorder decoding . +"in particular , we explore the frequent noun terms in pros and cons reviews as features , and train a one-class svm to identify aspects in the candidates .","thus , we extract the frequent noun terms from pros and cons reviews as features , then train a one-class svm to identify aspects from the candidates ."
+"for example , collobert et al effectively used a multilayer neural network for chunking , part-ofspeech tagging , ner and semantic role labelling .","for example , collobert et al used a feed-forward neural network to effectively identify entities in a newswire corpus by classifying each word using contexts within a fixed number of surrounding words ." +"in this paper , we use a simple monotone submodular function .",our proposed method can be easily extended by using other types of submodular functions . +"to leverage as much history as possible , mikolov et al apply recurrent neural network to word-level language modeling .","mikolov et al applied an rnn for language modeling , and demonstrated that the word embeddings learned by the rnnlm capture both syntactic and semantic regularities ." +we use conditional random fields for sequence labelling .,as a sequence labeler we use conditional random fields . +"in this paper , we have presented the first extrinsic evaluations of simulated annealing and d-bees .","in this paper , we have presented the first extrinsic evaluations of simulated annealing and d-bees in a lexical substitution setting ." +zaidan and callison-burch created a monolingual arabic data set rich in dialectal content from user commentaries on newspaper websites .,zaidan and callison-burch developed an informal monolingual arabic online commentary annotated dataset with high dialectal content . +we compute the joint n-gram model using a language modeling toolkit .,"from the bilanguage corpus b , we train an n-gram language model using standard tools ." +"in our experiment , using glpk ’ s branch-and-cut solver took 0 . 2 seconds to produce optimal ilp solutions for 1000 sentences .","in our experiment , using glpk’s branch-and-cut solver took 0.2 seconds to produce optimal ilp solutions for 1000 sentences on a machine with intel core 2 duo cpu and 4gb ram ."
+in doing so we can achieve better word retrieval performance than language models with only n-gram context .,"in all these cases , topic information was helpful in boosting retrieval performance above baseline vector space or n-gram models ." +"for all adjectives , we group adjectives into different scales .",we work with four scales of adjectives ( cf . table 1 ) . +experimental results show that our methods outperform the state-of-the-art extractive systems .,experimental results show improvements of our compressive solution over state-of-the-art systems . +which shows an english sl sentence and its german .,the german sentence is labeled using annotation projection . +"in this paper , we propose an endto-end model based on sequenceto-sequence learning with copy mechanism , which can jointly extract relational facts .","in this paper , we proposed an end2end neural model based on seq2seq learning framework with copy mechanism for relational facts extraction ." +bordes et al further improve their work by proposing the concept of subgraph embeddings .,bordes et al further improved their work by proposing the concept of subgraph embeddings . +named entity recognition is a well established information extraction task with many state of the art systems existing for a variety of languages .,"named entity recognition is the task of finding entities , such as people and organizations , in text ." +we can further improve the performances of the word aligners with available data and available alignment .,"in addition , we improve the word alignment results by combining the results of the two semi-supervised boosting methods ." +limitation of phrase-based systems is that they make little or no direct use of syntactic information .,"in summary , phrase-based systems have relatively limited potential to model word-order differences between different languages ." 
+experimental evaluation on two benchmark datasets has demonstrated the effectiveness of the model .,the experimental evaluation demonstrates the superior performance of the model on the benchmark datasets . +"conditional auto-encoders have been employed in , that generates diverse replies by capturing discourse-level information in the encoder .",conditional auto-encoders have been employed in that generate diverse replies by capturing discourselevel information in the encoder . +"in order to provide results on additional languages , we present in table 3 a comparison to the work of gillenwater et al , using the conll-x shared task data .","we also report an evaluation on all thirteen languages of the conll-x shared task , for comparison with the results by nivre and mcdonald ." +typos suggests that some language-specific properties of chinese lead to a part of input errors .,comparisons with english typos suggest that some language-specific properties result in a part of chinese input errors . +"we parse the text into typed dependency graphs with the stanford parser 3 , recording all verbs with subject , object , or prepositional typed dependencies .","we parse the text into typed dependency graphs with the stanford parser , recording all verbs with subject , object , or prepositional typed dependencies ." +baldwin looked at in vitro and in vivo methods for lexical type prediction of unknown words .,baldwin took a statistical approach to automated lexical acquisition for deep grammars . +we evaluate our methods using the benchmark test collection from the acl semeval-2007 web person search task .,"in this paper , we described our participating system in the semeval-2007 web people search task ." +for providing me with lots of instruction computation in subset ( often dreadful ) automata generated by his construction .,"support , and for providing me with lots of instruction computation in subset ( often dreadful ) automata generated by his construction ." 
+"faruqui et al employ semantic relations of ppdb , wordnet , framenet to retrofit word embeddings for various prediction tasks .",faruqui et al use synonym relations extracted from wordnet and other resources to construct an undirected graph . +our grading model is most closely related to the approach described in kakkonen and sutinen where the experiments were conducted in the finnish language .,our approach is most closely related to the approach described in kakkonen and sutinen where the experiments were conducted in the finnish language . +"the most common sentiment lexicons for english language are wordnet-affect and sentiwordnet , which are extensions of wordnet .",the most recent semi-automatic lexicon is sentiwordnet which assigns polarity to word senses in wordnet 3 known as synsets . +"in section 3 , we present the methodology of parallel data selection and terminology identification .","in section 3 , we present the methodology of parallel data selection and terminology identification to improve ontology label translation ." +"in this paper , we propose a collaborative framework for collecting unknown words from web pages .","in this paper , we are interested in extracting the unknown words with high precision and recall results ." +structural correspondence learning exploits unlabeled data from both source and target domain to find correspondences among features from different domains .,structural correspondence learning uses only unlabeled data to find a common feature representation for a source and a target domain . +linguistic similarities between native languages are reflected in similarities in esl reading .,nlir is likely to be related to linguistic characteristics of the respective native languages . +"in this work , we have sucessfully added grammatical features to a wsme language model .","in this work , we propose adding information to the wsme model which is provided by the grammatical structure of the sentence ." 
+"in this paper , we treat the word alignment problem as maximizing a submodular function subject to matroid constraints .",we cast the word alignment problem as maximizing a submodular function under matroid constraints . +"in this paper , we describe our approach using a modified svm based classifier on short text .","in this paper , we describe our approach using a modified svm based classifier on short text as in twitter messages ." +future work may consider features of the acoustic sequence .,future work will include a further investigation of parser– derived features . +the goal of this note is to point out that any distributed representation can be turned into a classifier through inversion .,the goal of this note is to point out inversion as an option for turning distributed language representations into classification rules . +"other approaches are based on external features allowing to cope with various mt systems , eg .","other approaches are based on external features allowing to deal with various mt systems , eg ." +crf is well known for sequence labeling tasks .,crfs are particularly suitable for sequence labelling tasks . +"in , the problem of personalized , interactive tag recommendation was also studied based on the statics of the tags co-occurrence .","in , the problem of personalized , interactive tag recommendation was also studied based on the statistics of the tags co-occurrence ." +"since the work of pang et al , various classification models and linguistic features have been proposed to improve the classification performance .","since the work of pang , lee , and vaithyanathan , various classification models and linguistic features have been proposed to improve classification performance ." 
+"in the current implementation , no acoustic information is used in disambiguating words ; only the pronunciations of words are used to verify the values of the semantic variables .","however in the current implementation , no acoustic information is used in disambiguating words ; only the pronunciations of words are used to verify the values of the semantic variables in cases when there is semantic ambiguity in finding the best matching string ." +"in the parliament domain , this means ( and is translated as ) “ report .","in the parliament domain , this means ( and is translated as ) “report.”" +suggestion mining can be defined as the process of identifying and extracting sentences from unstructured text that contain suggestion .,the task of suggestion mining can be defined as the extraction of sentences that contain suggestions from unstructured text . +in our approach is that many predicates are associated with a standard linking .,"importantly , we do not assume that a single standard linking is valid for all predicates ." +where each topic is associated with a body of texts containing useful information .,a paragraph associated with each topic is used as the source of relevant information about the topic . +"work focuses on the fully-and partially-assimilated foreign words , i . e . , words that historically were borrowed from another language .","this paper focuses on translation of fully-and partially-assimilated foreign words , called “ borrowed words ” ." +"neural models , with various neural architectures , have recently achieved great success .","recently , dsms based on neural networks have rapidly grown in popularity ." +"while we focus on the sentence-level task , our approach can be easily extended to handle sentiment analysis .",we also show that constraints derived from the discourse context can be highly useful for disambiguating sentence-level sentiment . 
+all techniques are used from the scikitlearn toolkit .,"to implement this , the sklearn library is used ." +"the third measure rely on the latent semantic analysis , trained on the tasa corpus .",the measure relies on the latent semantic analysis trained on the tasa corpus . +"in this paper , we propose a general time-aware kb embedding , which incorporates creation time of entities and imposes temporal order .","in this paper , we propose a novel time-aware kb embedding approach taking advantage of the happening time of facts ." +word sense induction is performed by inferring a set of semantic types .,word sense induction is typically performed using unsupervised clustering . +subword units can be used effectively to improve the performance of kws systems .,we incorporate these subword units into a lattice framework within the kws system . +in this work that some of these relationships can be characterized and subsequently operationalized within models .,"in this work , we attempt to model all three of these dimensions in developing a computational model for applause ." +"however , the experiments in anderson et al failed to detect differential interactions of semantic models with brain areas .",anderson et al show that semantic models built from visual data correlate highly with fmribased brain activation patterns . +"most recently , text simplification has been addressed as a monolingual machine translation task from complex to simple language .",prior approaches to text simplification have addressed the task as a monolingual translation problem . +"the training and test data for this shared task are from the nucle corpus , which consists of about one million words of short essays written by relatively competent english language learners .","the training data released by the task organizers comes from the nucle corpus , which contains essays written by learners of english as a foreign language and is corrected by english teachers ." 
+in this paper we introduce picturebook embeddings produced by image search .,in this paper we introduce picturebook embeddings produced by image search using words as queries . +sentiwordnet is a lexical resource built on top of wordnet .,sentiwordnet is a large lexicon for sentiment analysis and opinion mining applications . +"in this paper , we propose a novel method for semi-supervised learning of non-projective log-linear dependency .",in this paper we use a non-projective dependency tree crf ( cite-p-16-3-4 ) . +"subsequently , levy et al conducted a comprehensive set of experiments that suggest that much of the improved results are due to the system design and parameter optimizations , rather than the selected method .","subsequently , levy et al conducted a comprehensive set of experiments and comparisons that suggest that much of the improved results are due to the system design and parameter optimizations , rather than the selected method ." +"in this paper , we introduce a novel sliding window technique which avoids the errors produced by previous systems .","in this paper , we present a novel sliding window based text alignment algorithm for real-time crowd captioning ." +"by incorporating the mers models , the baseline system achieves statistically significant improvements .","experiments show that by incorporating mers model , the baseline system achieves statistically significant improvement ." +in addition an experiment was conducted to evaluate auto the advantage .,"in addition an experiment was conducted to evaluate auto the advantage in terms of speed , the autosem" +co-training uses several classifiers trained on independent views of the same instances .,co-training uses both labeled and unlabeled data to train models that have two different views of the data . 
+"user profile representation have also been adopted to expand the query ( cite-p-18-1-12 , cite-p-18-1-4 ) .","local analysis and co-occurrence based user profile representation have also been adopted to expand the query ( cite-p-18-1-12 , cite-p-18-1-3 ) ." +the pyramid method provides a annotation method and metric that addresses the issues of reliability and stability of scoring .,the pyramid method is a summarization evaluation scheme designed to achieve consistent score while taking into account human variation in content selection and formulation . +"by using a quadratic kernel , we can effectively learn word regions , which outperform existing unsupervised models .","an important advantage of our model is that it can be used to learn region representations for words , by using a quadratic kernel ." +"more accurately , we adopt a proximity strategy to scale the input of convolutional layer with positional relevance between a word and the target .","specifically , we propose target specific transformation component to better integrate target information into the word representation ." +"in cases , and we formulate it as a text-to-text natural language generation ( nlg ) problem .",we formulate this task as a text-to-text natural language generation ( nlg ) problem . +"cite-p-22-1-5 proposed a supervised method to fuse disparate sentences , which takes as input .","cite-p-22-1-5 investigate the idea of fusing disparate sentences with a supervised algorithm , as discussed above ." +"and , surprisingly , emotion recognition is higher in a condition of modality deprivation ( i . e . only acoustic or only visual modality vs . bimodal display of emotion ) .",usually emotions achieve a low agreement among raters ( see cite-p-8-3-0 ) and surprisingly emotion recognition is higher in a condition of modality deprivation ( only acoustic or only visual vs. bimodal ) . 
+brown clustering is an agglomerative algorithm that induces a hierarchical clustering of words .,the brown algorithm is a hierarchical agglomerative hard-clustering algorithm . +"particular proposal is both precisely characterizable , through a compilation to linear indexed grammars , and computationally operational , by virtue of an efficient algorithm for recognition and parsing .","the particular proposal is both precisely characterizable , through a compilation to linear indexed grammars , and computationally operational , by virtue of an efficient algorithm for recognition and parsing ." +"to cope with this , we adopted an algorithm of maximum entropy estimation for feature forests , which allows parameters to be efficiently estimated .","to cope with this problem , we applied an efficient algorithm of maximum entropy estimation for feature forests ." +"in this paper , we discuss methods for automatically creating models of dialog structure .","in section 5 , we discuss the problem of segmenting and labeling dialog structure and building models for predicting these labels ." +"given a specific target text , the resulting system proposes linear combinations of parsing models .","to parse the target text , one simply uses the mixture of parsing models with the highest predicted accuracy ." +summary can be generated based on the semantic link network through summary structure .,summary structure is planned with sentences generated based on the semantic link network . +"in this paper , we proposed a svm-based solution to compute the semantic similarity between two sentences .",we proposed a method using svr to combine various features to evaluate the similarity between two sentences . +"in this paper , we address this challenge with adversarial stability training .","in this paper , we propose to improve the robustness of nmt models with adversarial stability training ." 
+"we specifically address questions of polysemy with respect to verbs , and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases .","in this paper we specifically address questions of polysemy with respect to verbs , and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases ." +that will provide further insights into the characterization of preposition behavior .,the data generated in the task provides ample opportunitites for further investigations of preposition behavior . +"in this paper , we improve the attention or alignment accuracy of neural machine translation by utilizing the alignments of training .","in this paper , we alleviate the above issue by utilizing the alignments ( human annotated data or machine alignments ) of the training set ." +"for arabic , but the approach is applicable to any language that needs affix removal .","in this paper , arabic was the target language but the approach is applicable to any language that needs affix removal ." +quickset is a multimodal ( pen/voice ) interface for map-based tasks .,quickset is a distributed system consisting of a collection of agents that communicate through the open agent architecture4 ( cite-p-2-60-7 ) . +"ratnaparkhi et al used 20,801 tuples for training and 3097 tuples for evaluation .","ratnaparkhi et al , 1994 ) used 20,801 tuples for training and 3097 tuples for evaluation ." +"our evaluation shows that our method obtains better rouge recall score compared with four baseline methods , and it also achieve reasonably high-quality aspect clusters .","we found that our method gave overall better rouge scores than four baseline methods , and the new sentence clustering and compression algorithm are robust ." 
+this paper presents a novel approach to grammatical representation that annotates semantic distinctions .,"this paper presented universal conceptual cognitive annotation ( ucca ) , a novel framework for semantic representation ." +"in this paper , we present an unsupervised bootstrapping approach for wsd which exploits huge amounts of automatically generated noisy data .","in this paper , we investigate the role of large amounts of noisily sense annotated data obtained using an unsupervised approach in relieving the data acquisition bottleneck for the wsd task ." +"we used mxpost , and in order to discover more general patterns , we map the tag set down after tagging , e .","we used mxpost , and in order to discover more general patterns , we map the tag set down after tagging , eg ." +"in this paper , we proposed our system to automate the process of arabic answer selection .","in this paper , we propose a knowledge-based answer selection system for arabic ." +"tang et al , khabsa and giles and tang et al investigated the effect of three different types of word representation features , including clustering-based , distributional and word embeddings , on biomedical name entity recognition tasks .","tang et al explored the impact of three different types of word representations on clustering-based representation , distributional representation and word embedding ." +we have presented a novel approach to grammatical error correction .,we introduce a novel method for grammatical error correction with a number of small corpora . +"the system employed a domain-independent feature set along with features generated from the output of chemspot , an existing chemical named entity recognition tool , as well as a collection of domain-specific resources .","they employed a domain-independent feature set along with features generated from the output of chemspot , an existing chemical named entity recognition tool , as well as a collection of domain-specific resources ." 
+"for tm , the additional model is log-linearly interpolated with the in-domain model using the multi-decoding method described in .",the additional model is log-linearly interpolated with the indomain model using the multidecoding method described in . +"kobayashi et al , yi et al , popescu and etzioni , hu and liu , .","yi et al , hu and liu , kobayashi et al , popescu and etzioni , ." +we propose a two-stage method to find the corresponding abbreviation .,we use a similarity sensitive re-rank method to get the final abbreviation . +"efficiency and performance of our approach are evaluated on different downstream tasks , namely sentiment analysis , speaker-trait recognition and emotion recognition .","we compare our model with the baselines and state-of-the-art models for sentiment analysis , speaker traits recognition and emotion recognition ." +"in transformation-based parsing , an ordered sequence of tree-rewriting rules ( tree transformations ) are applied to an initial parse structure .","in transformation-based parsing , a finite sequence of tree rewriting rules are checked for application to an input structure ." +the text corpus was lemmatized using the treetagger and parsed for syntactic dependency structures with parzu .,"the corpus was converted from xml to raw text , various string normalization operations were then applied , and the corpus was lemmatized using treetagger ." +"in section 4 we examine our parameters in the context of distributional compositional semantics , using the evaluation dataset from mitchell and lapata .","to test how cs , normalisation , and dimensionality reduction affect simple compositional vector operations we use the test portion of the phrasal similarity dataset from mitchell and lapata ." 
+"we can extract one million contradiction pairs and 500 , 000 causality pairs with about 70 % precision from a 600 million page web corpus .","our methods extract one million contradiction pairs with over 70 % precision , and 500,000 causality pairs with about 70 % precision from a 600 million page web corpus ." +"in order to have a more extensive database of affect-related terms , in the following experiments we used wordnet affect .","in order to have a more extensive database of affect-related terms , in the following experiments we used wordnet affect , sentiwordnet , micrownop ." +statistical topic models such as latent dirichlet allocation provide a powerful framework for representing and summarizing the contents of large document collections .,statistical topic models such as latent dirichlet allocation provide powerful tools for uncovering hidden thematic patterns in text and are useful for representing and summarizing the contents of large document collections . +vlachos suggests to use classifier confidence to define a stopping criterion for uncertaintybased sampling .,vlachos has used the classifier confidence score as a stopping criterion for the uncertainty sampling . +"in this paper , we define normal dominance constraints , a natural fragment of dominance constraints whose restrictions should be unproblematic .",here we identify the natural fragment of normal dominance constraints and show that its satisfiability problem is in deterministic polynomial time . +"in both our pilot experiment and current development work , we found that the method of clark and weir overall gave better performance , and so we limit our discussion here to the results on their model .","in our development work , we found that the method of clark and weir overall gave better performance , and so we limit our discussion here to the results on their model ." 
+"therefore , we adopt the greedy feature selection algorithm as described in jiang and ng to pick up positive features incrementally according to their contributions on the development data .","therefore , we adopt the greedy feature selection algorithm as described in jiang et al to pick up positive features incrementally according to their contributions ." +clark and curran demonstrates that this relatively small set has high coverage on unseen data and can be used to create low .,clark and curran demonstrates that this relatively small set has high coverage on unseen data and can be used to create a robust and accurate parser . +"around data , we believe that the data collected from our application is valuable .",we also look to lay the foundation for analysis based on implicit data collected from our application . +"in this paper , we have built the first diachronic distributional model that represents time as a continuous variable .","in this paper , we propose a deep neural network diachronic distributional model ." +"then , we use bidirectional single-layer lstms to encode c into vectors .",we use lstm to materialize both encoder and decoder . +experiments are conducted on semeval-2010 task 8 dataset .,experiments are conducted on the semeval-2010 task 8 dataset . +"second , we add a new term representing a more direct connection from the source sentence , which allows the model .","second , we integrate a simple lexical module which is jointly trained with the rest of the model ." +"to exploit these kind of labeling constraints , we resort to conditional random fields .","to predict labels , we train conditional random fields , which are directly optimized for splitting ." +"in the introductory section , we described the characteristic phenomena of biochemical terminology .",in this introduction we described the particularities of biochemical terminology . 
+qualia structure is a distinctive feature of the generative lexicon theory .,these relations form part of the qualia structure assumed in generative lexicon theory . +"semantic similarity evaluation , we build a universal model in combination of traditional nlp methods and deep learning methods together .","then we adopt a combination method to build a universal model to estimate semantic similarity , which consists of traditional natural language processing ( nlp ) methods and deep learning methods ." +xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model .,xing et al pre-defined a set of topics from an external corpus to guide the generation of the seq2seq model . +this paper proposes a novel approach to create detectors for new relations .,this paper describes a novel approach to the semantic relation detection problem . +"aspect term extraction is based on supervised machine learning , where we build many models based on different classifiers , and finally combine their outputs using majority voting .","the aspect term extraction method is based on supervised learning algorithm , where we use different classifiers , and finally combine their outputs using a majority voting technique ." +"closer to the other end , we find work that focuses on defining morphological models with limited lexicons that are then extended using raw text .","next on the continuum , we find work that focuses on defining morphological models with limited lexica that are then extended using raw text ." +for phrase extraction the grow-diag-final heuristics described in is used to derive the refined alignment from bidirectional alignments .,the heuristic strategy of grow-diag-final-and is used to combine the bidirectional alignments to extract phrase translations and to reorder tables . 
+msa is the formal arabic that is mostly used in news broadcasting channels and magazines to address the entire arab region .,"msa is the language used in education , scripted speech and official settings while da is the primarily spoken native vernacular ." +"data sparsity is a major problem in building traditional n-gram language models , which assume that the probability of a word only depends on the previous math-w-2-1-0-68 words .","data sparsity is the bane of natural language processing ( nlp ) ( cite-p-15-5-2 , cite-p-15-3-7 ) ." +and the readability of the produced summaries have been mostly ignored .,"meanwhile , the fluency of the produced summaries has been mostly ignored ." +but this is a rather special case for multi-party dialogues .,these are not only problems in exploring multi-party dialogues . +and we focus on being able to incorporate relatively noisy machine alignments to improve the reordering model .,the main challenge we tackle is to generate quality data for training the reordering model in spite of the machine alignments being noisy . +"in this paper , we will show the efficacy of collaborative ranking on the entity linking task defined in the knowledge base population ( kbp ) track ( cite-p-25-3-2 ) .","in this paper , we will show the efficacy of collaborative ranking on the entity linking task defined in the knowledge base population ( kbp ) track ( cite-p-25-3-2 ) at text analysis conference ( tac ) ." +we empirically verify the effectiveness of cpra .,we empirically evaluate cpra on benchmark data created from freebase . +deep neural models provide a convenient way to share information among multiple tasks .,"our proposed models use deep neural networks , which can easily share information with hidden shared layers ." 
+data show that the proposed approach improves upon existing methods in terms of accuracy in different settings .,our method is more accurate than the baseline methods in different settings such as large rule sets and large vocabulary sizes . +zhang et al apply adversarial training to align monolingual word vector spaces with no supervision .,zhang et al utilize adversarial training to obtain cross-lingual word embeddings without any parallel data . +"we follow the setup of duan et al and split ctb5 into training , development , and test sets .","following the set-up of duan et al and zhang and clark , we split ctb5 into training ." +"to extract opinion targets , many studies regarded opinion words as strong indicators ( cite-p-16-1-3 , cite-p-16-1-16 , cite-p-16-1-10 , cite-p-16-1-18 , cite-p-16-3-5 ) .","in unsupervised methods , most approaches regarded opinion words as the important indicators for opinion targets ( cite-p-16-1-3 , cite-p-16-3-2 , cite-p-16-1-18 , cite-p-16-3-5 ) ." +whereby question-answer pairs are represented by concatenated distributed representation vectors and a multilayer perceptron is used to compute the score for an answer .,question-answer pairs are represented by concatenated distributed representation vectors and a multilayer perceptron is used to compute the score for an answer ( the probability of an answer being the best answer to the question ) . +"the morphosyntactically annotated corpus we used is a variant of the french treebank or ftb , .","the corpus used in our experiments is the french treebank , version from june 2010 , hereafter ftb ) ." +"by subsequence filtering , our models enable higher-order n-grams and larger monolingual corpora to be used more easily .",we show how a bf containing n-grams can enable us to use much larger corpora and higher-order models complementing a conventional n-gram lm within an smt system . 
+"in this paper , we present a uima framework to distribute the computation of cqa tasks over computer clusters .","in this paper , we propose a uima framework to manage the computation distribution of the complicated processing pipelines involved in cqa systems ." +"at semeval 2012 – 2015 , most of the top-performing sts systems used a regression algorithm to combine different measures of similarity .",each of our systems uses the semeval 2012–2015 sts datasets to train a ridge regression model that combines different measures of similarity . +"in addition , we use l2 regularization and dropout technique to build a robust system .",we use both l 2 regularization and dropout techniques for regularization . +the structure of this paper is the following one .,the structure of the paper is the following . +"for testing new data collection schemes , we created a new sct dataset , sct-v1 . 5 , which overcomes some of the biases .","given the statistics we have aggregated , we have designed a new crowdsourcing scheme that creates a new sct dataset , which overcomes some of the biases ." +the models rely on part-of-speech tags as input and we used the ratnaparkhi tagger to provide these for the development and evaluation set .,our system assumes pos tags as input and uses the tagger of ratnaparkhi to provide tags for the development and evaluation sets . +"in cite-p-22-3-10 , binary features were trained only on a small development set .",the millions of parameters were tuned only on a small development set consisting of less than 1k sentences . +quirk et al and xiong et al used treelets to model the source dependency tree using synchronous grammars .,quirk et al used a source-side dependency parser and projected automatic parses across word alignments in order to model dependency syntax on phrase pairs . 
+"we employed the uima tokenizer 2 to generate tokens and sentences , and the treetagger for part-of-speech tagging and chunking .","after sentence splitting and tokenization , we applied the highly efficient treetagger for part-of-speech tagging and we extracted time and money entities with fast regular expressions ." +"for data selection , we observe that our methods are able to select high-quality domain-relevant sentence pairs and improve the translation performance by nearly 3 bleu points .","when the selected sentence pairs are evaluated on an endto-end mt task , our methods can increase the translation performance by 3 bleu points ." +our analysis of naturally occurring dialog indicates that human listeners understand many utterances that would appear pragmatically ill-formed to current natural language systems .,our analysis of naturally occurring dialog indicates that humans understand many utterances that would appear imperfect or incomplete to current natural language systems . +"in this paper , we proposed a new approach that tackles the issue of multi-domain belief tracking , such as model parameter .","in this paper , we propose a model that jointly identifies the domain and tracks the belief states corresponding to that domain ." +second algorithm is a fast approximation of the first one .,the second one is an approximation of the first algorithm . +"in this paper , we present a novel model that simultaneously utilizes constituency and dependency trees on the source side .","in this paper , we propose to combine the advantages of source side constituency and dependency trees ." +in this paper we investigate the role of cross-linguistic information in the task of english np semantic interpretation .,in this paper we investigate noun phrases based on cross-linguistic evidence and present a domain independent model for their semantic interpretation . 
+this paper proposes a solution which normalizes the word vectors .,this paper solves the inconsistence by normalizing the word vectors . +in this paper we presented a technique for extracting order constraints among plan elements .,"in this paper , we focus on learning the plan elements and the ordering constraints between them ." +"both parsers obtain state-of-the-art performance , and use a very simple api .","both parsers obtain state-of-the-art performance , are fast , and are easy to use through a simple api ." +"faced with these problems , we propose to integrate deep learning and topic modeling .","besides , we are interested to apply the method of combing topic model and deep learning into some traditional nlp tasks ." +"we propose a flexible domain score function to take the external information into consideration , such as word frequencies .",we propose a pure unsupervised d-topwords model to extract new domain-specific words . +klebanov et al used concreteness as a feature with baseline features and optimal weighting technique .,klebanov et al evaluated the effect of concreteness as a feature for metaphor detection using mrcpd . +"mention “ lukebryanonline ” , our model can find similar mentions like “ thelukebryan ” and “ lukebryan ” .","for example , for the oov mention “lukebryanonline” , our model can find similar mentions like “thelukebryan” and “lukebryan” ." +"based on the attributes , several statistical classifiers were used to select operands and determine operators .","based on the attributes , several statistical classifiers were used to select operands and determine the operator ." +"in addition , we incorporated additional features such as pos tags and sentiment features .",we also incorporated additional features such as pos tags and sentiment features extracted from sentiment lexicons . 
+"in this paper , we propose learning continuous word representations as features for twitter sentiment .","in this paper , we propose learning sentiment-specific word embedding ( sswe ) for sentiment analysis ." +reichart and rappoport showed that self-training can improve the performance of a constituency parser without a reranker when a small training set is used .,reichart and rappoport showed that one can self-train with only a generative parser if the seed size is small . +experimental results show that undersampling causes negative effects on active learning .,experimental results show that oversampling is a relatively good choice in active learning for wsd in highly imbalanced data . +we address a representation-efficiency tradeoff in statistical natural language processing through the use of stacked learning .,we explore a stacked framework for learning to predict dependency structures for natural language sentences . +vietools has also been extended for converting and rearranging vietnamese words .,vietools is also extended to serve many purposes in vietnamese language processing . +triple representation offers a simple interface for applications .,this representation allows easy data share between kbs . +"recently , deep learning based models is rising a substantial interest in text semantic matching .","recently , there is rising interest in modelling the interactions of two sentences with deep neural networks ." +"by drawing on the aggregated results of the task ’ s participants , we have extracted highly representative pairs for each relation .","by drawing on the aggregated results of the task ’ s participants , we have extracted highly representative pairs for each relation to build an analogy set ." 
+"question answering ( qa ) is a specific form of the information retrieval ( ir ) task , where the goal is to find relevant well-formed answers to a posed question .",question answering ( qa ) is the task of retrieving answers to a question given one or more contexts . +"in this paper , we have evaluated structural learning approaches to genre classification .",in this paper we present a method of using the hierarchy of labels to improve the classification accuracy . +"in ( cite-p-17-3-4 ) , popescu and etzioni not only analyzed polarity of opinions regarding product features .","in ( cite-p-17-3-4 ) , popescu and etzioni not only analyzed polarity of opinions regarding product features but also ranked opinions based on their strength ." +"the lingo grammar matrix is situated theoretically within head-driven phrase structure grammar , a lexicalist , constraint-based framework .","this grammar is based on the framework of head-driven phrase structure grammar , one of the most prominent linguistic theories being used in natural language processing ." +cite-p-21-3-7 proposed a new neural network approach called sswe to train sentiment-aware word representation .,cite-p-21-3-10 proposed to learn a two-dimensional sentiment representation based on a simple neural network . +"to alleviate the noise issue caused by distant supervision , riedel et al and hoffmann et al propose multi-instance learning mechanisms .","to combat the noisy training data produced by heuristic labeling in distant supervision , researchers exploited multi-instance learning models ." +"to prevent errors , error handling would consist of steps to immediately detect an error when it occurs and to interact with the user to correct the error .","ideally , apart from strategies to prevent errors , error handling would consist of steps to immediately detect an error when it occurs and to interact with the user to correct the error in subsequent exchanges ." 
+"in this paper , we propose a translation-based approach for kg representation learning that leverages two different types of external , multimodal representations : linguistic representations created by analyzing the usage patterns of kg entities .","in this paper , we propose a multimodal translation-based approach that defines the energy of a kg triple as the sum of sub-energy functions that leverage both multimodal ( visual and linguistic ) and structural kg representations ." +characterlevel nodes have special tags called position-ofcharacter that indicate the word-internal position .,character-level nodes have special tags where position-of-character and pos tags are combined . +"in this work , we present a hybrid learning method for training task-oriented dialogue systems .",we propose a hybrid learning approach for such systems using endto-end trainable neural network model . +scarton and specia explore lexical cohesion and lsa cohesion for document-level qe .,scarton and specia propose a number of discourse-informed features in order to predict bleu and ter at document level . +"in this paper , we study a chart pruning method for cky-style mt decoding .",we present a context-sensitive chart pruning method for cky -style mt decoding . +"given a pair of words math-w-3-1-0-47 , find a math-w-3-1-0-55 for a given math-w-3-1-0-60 .",let math-w-8-4-0-1 be two points classified into math-w-8-4-0-12 . +"for latent meanings , we directly override the embeddings of the corresponding words in the vocabulary .","furthermore , since the latent meanings are included in the vocabulary , there is no extra embedding being generated ." +we can use this metric as a loss function within the mbr framework to design decoders .,this hierarchy includes the loss functions useful in both situations where we intend to apply mbr decoding . 
+"cherry and lin introduce soft syntactic itg constraints into a discriminative model , and use an itg parser to constrain the search for a viterbi alignment .",cherry and lin show that introducing soft syntactic constraints through discriminative training can improve alignment quality . +"up ” should be mapped to ‘ increased appetite ’ , while “ suppressed appetite ” should be mapped to ‘ loss of appetite ’ .","for example , “ appetite on 10 ” , “ my appetite way up ” should be mapped to ‘ increased appetite ’ , while “ suppressed appetite ” should be mapped to ‘ loss of appetite ’ ." +"for smt decoding , we use the moses toolkit with kenlm for language model queries .",here we used the lazy decoder program 5 which is based on the kenlm language model estimation and querying system . +"we have presented hyp , an open-source toolkit for representing and manipulating weighted directed hypergraphs , including functionality .","we present hyp , an open-source toolkit for the representation , manipulation , and optimization of weighted directed hypergraphs ." +using contextual features modeled by maximum entropy-based classifier is superior to the commonly used lexicalized reordering model .,"in smt , maximum entropy-based reordering model is often introduced as a better alternative to the commonly used lexicalized one ." +we apply the stochastic gradient descent algorithm with mini-batches and the adadelta update rule .,we use a minibatch stochastic gradient descent algorithm and adadelta to train each model . +we ran the alignment algorithm from on a chinese-english parallel corpus of 218 million english words .,"we ran the alignment algorithm from on a chinese-english parallel corpus of 218 million english words , available from the linguistic data consortium ." 
+"we chose the three models that achieved at least one best score in the closed tests from emerson , as well as the sub-word-based model of zhang , kikui , and sumita for comparison .","we chose the three models that achieved at least one best score in the closed tests from emerson , as well as the sub-word-based model of zhang et al for comparison ." +constraints are incorporated by various logic formulas and global formulas .,logic formulas are combined in a probabilistic framework to model soft constraints . +"in english event detection task , our approach achieved 73 . 4 % f-score with average 3 . 0 % absolute improvement .","in english event detection task , our approach achieved 73.4 % f-score with average 3.0 % absolute improvement compared to state-of-the-art ." +the word embeddings can provide word vector representation that captures semantic and syntactic information of words .,"word embedding techniques aim to use continuous low-dimension vectors representing the features of the words , captured in context ." +entity linking ( el ) has received considerable research attention in recent years .,entity linking ( el ) has received considerable attention in recent years . +"for the word-alignment problem , goldwater and mcclosky and eyig枚z et al suggested word alignment models that address morphology directly .",goldwater and mcclosky proposed a morpheme aware word alignment model for language pairs in which the source language words correspond to only one morpheme . +"sentence fusion is a text-to-text generation application , which given two related sentences , outputs a single sentence expressing the information shared by the two input sentences .",msc is a text-to-text generation process in which a novel sentence is produced as a result of summarizing a set of similar sentences originally called sentence fusion . 
+"consequently , considerable effort has gone into devising and improving automatic word alignment algorithms , and into evaluating their performance .","recently , several successful attempts have been made at using supervised machine learning for word alignment ." +the smt system deployed in our approach is an implementation of the alignment template approach of och and ney .,all mt systems are trained using a variant of the alignment template model of och and ney . +we present a novel technique for interpreting the effect of different text inputs .,"in addition , we have demonstrated a way to intuitively interpret the model ." +building on we utilize label propagation to determine the relation and observation type expressed by each pattern .,"building on , we utilize label propagation to determine the relation and observation type expressed by each pattern ." +the dictionary-type features were generated using the english and swahili models using the treetagger tool .,the text corpus was lemmatized using the treetagger and parsed for syntactic dependency structures with parzu . +"chapman et al created the negex algorithm , a simple rule-based system that uses regular expressions with trigger terms to determine whether a medical term is absent in a patient .",chapman et al created a simple regular expression algorithm called negex that can detect phrases indicating negation and identify medical terms falling within the negative scope . +daum茅 and jagarlamudi use contextual and string similarity to mine translations for oov words in a high resource language domain adaptation for a machine translation setting .,similar work on solving domain adaptation for smt by mining unseen words has been presented by snover et al and daum茅 and jagarlamudi . 
+in the literature there is a consensus that global statistics features lead to higher accuracies compared to the dynamic classification of multivariate time-series ( cite-p-18-1-6 ) .,"the literature consists of a series of well-established frameworks to explore a deeper understanding of the semantic relationship between entities , ranging from ontological reasoning to compositional as well as distributional semantics ( cite-p-13-1-2 ) ." +"yamada and matsumoto , 2003 ) made use of the polynomial kernel of degree 2 so they in fact use more conjunctive features .","yamada and matsumoto , 2003 , makes use of polynomial kernels of degree 2 which is equivalent to using even more conjunctive features ." +we are the first to suggest a general semi-supervised protocol that is driven by soft constraints .,we proposed to use constraints as a way to guide semi-supervised learning . +kappa is an evaluation measure which is increasingly used in nlp annotation work .,kappa coefficient is commonly used as a standard to reflect inter-annotator agreement . +we presented the first neural network based shift-reduce parsers for ccg .,"we present a neural network based shift-reduce ccg parser , the first neural network based parser for ccg ." +"in this paper , we propose a hierarchical attention model to select the supporting warrant .","in addition to the primary model , we propose an ensemble method to achieve a stable and credible accuracy ." +cite-p-21-3-10 proposed to learn a two-dimensional sentiment representation .,cite-p-21-1-3 proposed to build a sentiment lexicon by a propagation method . +"for feature extraction , we parse the french part of our training data using the berkeley parser and lemmatize and pos tag it using morfette .","we parse the english side of our parallel corpus with the berkeley parser , and tune parameters of the mt system with mira ." 
+dependency annotation for hindi is based on paninian framework for building the treebank .,the annotated hindi treebank is based on a dependency framework and has a very rich set of dependency labels . +we propose a method using the existing rbmt system as a black box to produce a synthetic bilingual corpus .,"we presented a method using the existing rbmt system as a black box to produce synthetic bilingual corpus , which was used as training data for the smt system ." +"training strategies drastically reduce the total training time , while delivering significant improvements both in terms of perplexity and in a large-scale translation task .","these methods are shown to greatly reduce the training time and to significantly improve performance , both in terms of perplexity and on a large-scale translation task ." +"to avoid the danger of aligning a token in one segment to excessive numbers of tokens in the other segment , we adopt a variant of competitive linking by melamed .","as in the previous methods , we avoid the danger of aligning a token in one segment to excessive numbers of tokens in the other segment , by adopting a variant of competitive linking by melamed ." +"at the realisation level , relies on wordnet synonym and antonym sets , and gives equivalent results on the examples cited in the literature .",it does not rely on the availability of an adjective classification scheme and uses wordnet antonym and synonym lists instead . +"in the above simulation , only a fraction of nodes were updated at each iteration .","in the above simulation , only a fraction of nodes were updated at each iteration in order to model a rapid change ." +"for our generative model lets us automatically calibrate parameters for each relation , which are sensitive to the performance .","in our approach , parameters are calibrated for each relation by maximizing the likelihood of our generative model ." 
+"word co-occurrences , we attempt to use the fofe to encode the full contexts of each focus word , including the order .",the word co-occurrence frequencies are based on fixed windows spanning in both directions from the focus word . +the character embeddings are computed using a method similar to word2vec .,"word embedding features were derived using word2vec , representing each word as a 300-d vector ." +alikaniotis et al present a model for essay scoring based on recurrent neural networks at the word level .,alikaniotis et al train score-specific word embeddings with several lstm architectures . +and louds succinctly represents it by a 2m + 1 bit string .,louds succinctly represents a trie with math-w-1-1-0-40 nodes as a 2m + 1 bit string . +we conducted experiments on standard rst discourse treebank to evaluate our proposed models .,we evaluate our parser on rst discourse treebank and thoroughly analyze different components of our method . +silberer and frank adapted an entity-based coreference resolution model to extend automatically the training corpus .,silberer and frank use an entity-based coreference resolution model to automatically extended the training set . +"the issues of correct identification of nes were specifically addressed and benchmarked by the developers of information extraction system , such as the gate system .","the problem of correct identification of nes is specifically addressed and benchmarked by the developers of information extraction system , such as the gate system ." +"for finding optimal translations , we extend the minimum error rate training ( mert ) algorithm ( cite-p-18-1-21 ) to tune feature weights with respect to bleu score for max-translation decoding .","as our decoder accounts for multiple derivations , we extend the mert algorithm to tune feature weights with respect to bleu score for max-translation decoding ." 
+our work is the use of deep-learning approaches for detecting token-level language tags for code-mixed content .,"in this work , we present an exploration of automatic ner of code-mixed data ." +upadhyay et al provide an empirical comparison of four cross-lingual word embedding models varying in terms of the amount of supervision .,upadhyay et al compared empirically some of the most recent development on cross-lingual models of word embeddings . +"table 3 shows the performance of these systems under three widely used evaluation metrics ter , bleu and meteor .",table 1 shows the comparison of srilm and randlm with respect to performance on bleu and ter on the test set . +the revised d-level sentence complexity scale forms the core of our syntactic complexity measure .,we apply the revised d-level sentence complexity scale as the basis of our syntactic complexity measure . +the sentences have been parsed automatically by the alpino parser for dutch .,the corpus has been automatically annotated with full syntactic dependency trees by the alpino parser for dutch . +the enju parser is a statistical deep parser based on the hpsg formalism .,the enju parser 6 is a deep parser based on the hpsg formalism . +jiang and zhou used smt to generate the second line of chinese couplets given the first line .,jiang and zhou generate chinese couplets using a phrasebased smt approach which translates the first line to the second line . +hierarchical topic modeling is able to detect automatically new topics .,this algorithm can be used to extract topic hierarchies from large document collections . +also suggest that the obtained subtree alignment can improve the performance of both phrase and syntax based smt systems .,further experiment shows that the obtained subtree alignment benefits both phrase and syntax based mt systems by delivering more weight on syntactic phrases . 
+"to extract the features of the rule selection model , we parse the english part of our training data using the berkeley parser .","to pre-order the chinese sentences using the syntax-based reordering method proposed by , we utilize the berkeley parser ." +we have presented a component for incremental speech synthesis ( iss ) and demonstrated its capabilities .,we present a component for incremental speech synthesis ( iss ) and a set of applications that demonstrate its capabilities . +the language model is a 5-gram with interpolation and kneserney smoothing .,it is based on 5-grams with extended kneser-ney smoothing . +"in this paper , we present a novel neural network framework for extractive document summarization by jointly learning to score and select sentences .","in this paper , we present a novel endto-end neural network framework for extractive document summarization by jointly learning to score and select sentences ." +"a construct is a set of knowledge , skills , and abilities measured by a test .","1 a construct is a set of knowledge , skills , and abilities measured by a test ." +for our baseline we use the moses software to train a phrase based machine translation model .,we build three different smt systems based on the moses toolkit with different input for training the phrase-based translation models . +"in this paper , we propose a generative cross-lingual mixture model ( clmm ) .",in this paper we propose a cross-lingual mixture model ( clmm ) for cross-lingual sentiment classification . +"in recent years , machine learning techniques , in particular reinforcement learning , have been applied to the task of dialogue management .","machine learning techniques , and particularly reinforcement learning , have recently received great interest in research on dialogue management ." 
+"from the perspective of structural linguistics , we explore paradigmatic and syntagmatic lexical relations for chinese pos tagging .",we hold a view of structuralist linguistics and study the impact of paradigmatic and syntagmatic lexical relations on chinese pos tagging . +"by using a japanese grammar , based on a monostratal theory of grammar , we could simultaneously annotate syntactic and semantic structure without overburdening the annotator .",by using a japanese grammar based on a monostratal theory of grammar we could simultaneously annotate syntactic and semantic structure without overburdening the annota-tor . +we also report results on the jfleg test set using gleu .,our test sets are the conll 2014 evaluation set and the jfleg test set . +ando and zhang present a semisupervised learning algorithm called alternating structure optimization .,ando and zhang presented a semi-supervised learning algorithm named alternating structure optimization for text chunking . +experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy .,experimental results on two public data sets indicate that matching models get significant improvements when they are learned with the proposed method . +"with the application of popular deep learning methods , researchers have found that recurrent neural network can successfully model the non-sequential linguistic properties with sequential data input .","recently , deep learning-based sequential models of sentence , such as recurrent neural network , have proved to be effective in dealing with the non-sequential properties of human language ." 
+the word embeddings and other parameters of the network obtained at the previous stage are used to initialize the network that is then trained on a supervised corpus from semeval-2015 .,"at a final stage , the pre-trained parameters of the network are used to initialize the model which is then trained on the supervised training data from semeval-2015 ." +we conduct an efficient depth-first branch-and-bound search through the space of possible children .,"we then incorporate this model into a global , efficient branch-and-bound search through the space of permutations ." +we think it is a promising clue to improving translation quality .,we think this is a significant contribution since students or professors can use features as a feedback for better understanding essays writing . +"for creating our folds , we employ stratified cross-validation which aims to ensure that the proportion of classes within each partition is equal .","for creating our folds , we used stratified cross-validation , which aims to ensure that the proportion of classes within each partition is equal ." +such that we measures entity-context similarity under aggregated distance metrics of hierarchical category nodes .,"we learn a distance metric for each category node , and measure entity-context similarity under the aggregated metrics of all relevant categories ." +"on the conll ’ 03 / aida data set , jerl outperforms state-of-art ner and linking systems , and we find improvements of 0 . 4 % absolute f .",jerl outperforms the state-of-art systems on both ner and linking tasks on the conll’03/aida data set . +"the preprocessing phase comprises treatment of emoticons , spell-errors , slang terms , lemmatization and pos-tagging .","the preprocessing phase comprises treatment of emoticon , slang terms , lemmatization and pos-tagging ." 
+we adopt a two-pronged strategy for event extraction that handles event narrative documents differently from other documents .,another important aspect of our approach is a two-pronged strategy that handles event narratives differently from other documents . +"a . of course , it was situated behind a big neu but unobtrusive painting .","a. of course , it was situated behind a big neu but unobtrusive painting neu ." +"without any external translation resources , our bootstrapping approach yields lexicons that outperform the best performing corpus-based ble methods on standard test .",we also show that our approach outperforms the best performing fully corpus-based ble methods on these test sets . +"in this paper , we combine the strengths of 6 approaches that had previously been applied to 3 different tasks ( keyword extraction , multi-sentence compression , and summarization ) into a unified , fully unsupervised end-to-end meeting speech summarization framework that can generate readable summaries .","our framework combines the strengths of 6 approaches that had previously been applied to 3 different tasks ( keyword extraction , multi-sentence compression , and summarization ) into a unified , fully unsupervised end-to-end summarization framework , and introduces some novel components ." +"in this paper , we demonstrate the effectiveness of the syntactic tree features for relation extraction .","in this paper , we explore the syntactic features using convolution tree kernels for relation extraction ." +"applications of our technique include the dynamic tracking of topic and semantic evolution in a dialog , topic detection , automatic generation of document tags , and new story or event detection .","applications of our method include topic detection , event tracking , story/topic monitoring , new-event detection , summarization , information filtering , etc ." 
+we focus on a more challenging and arguably more realistic version of the domain-adaptation problem where only unlabeled data is available for the target domain .,we consider a semi-supervised setting for domain adaptation where only unlabeled data is available for the target domain . +"where a training set of labeled words is used , and in an unsupervised setting where only a handful of seeds is used to define the two polarity classes .","the method could be used both in a semi-supervised setting where a training set of labeled words is used , and in an unsupervised setting where a handful of seeds is used to define the two polarity classes ." +"in the last decade , a large amount of research has been conducted on detection of structural events , eg , sentence structure and disfluency structure , in spontaneous speech .","recently , in the nlp research field , an increasing amount of effort has been made on structural event detection in spontaneous speech ." +"left-corner parsing is a bottom-up technique where the right-hand-side symbols of the rules are matched from left to right , so once the left-corner symbol has been found , the grammar rule can be used to predict what may come next .",left-corner ( lc ) parsing is a parsing strategy which has been used in different guises in various areas of computer science . +"for comparison , a conditional random field -based method was implemented using the same training and development sets .","in , the sfst-based model is compared with support vector machines and conditional random fields ." +the discriminative re-scoring method is also very successful .,but there is still a gap to the discriminative re-scoring methods . +sahami and heilman measured semantic similarity between two queries using snippets returned for those queries by a search engine .,"sahami et al , measure semantic similarity between two queries using the snippets returned for those queries by a search engine ." 
+communication accommodation theory ( 2001 ) states that humans use prosody and backchannels in order to adjust social distance .,communication accommodation theory states that people use nonverbal feedback to establish social distance during conversation . +"we propose a novel model for relation extraction from cqa data , that uses discourse of a qna pair to extract facts between entities mentioned in question and entities mentioned in answer sentences .","in this paper we proposed a model for relation extraction from qna data , which is capable of predicting relations between entities mentioned in question and answer sentences ." +several massive knowledge bases such as dbpedia and freebase have been released .,"over the last few years , several large scale knowledge bases such as freebase , nell , and yago have been developed ." +while wordnet includes many rare word senses .,"yet , wordnet is limited for entailment rule generation ." +"in this paper , we extend methods from cite-p-12-1-11 for reducing the worst-case complexity of a context-free parsing pipeline .","in this paper , we adopt the same paradigm pursued in cite-p-12-1-11 , but apply it to an exact inference cyk parser ( cite-p-12-1-2 , cite-p-12-3-1 , cite-p-12-1-8 ) ." +statistical topic models such as latent dirichlet allocation provide a powerful framework for representing and summarizing the contents of large document collections .,topic modeling algorithms such as latent dirichlet allocation and non negative matrix factorization are able to find the topics within a document collection . 
+"recommended best variants , we subsequently replicate a recent evaluation of state-of-the-art summarization systems revealing distinct conclusions about the relative performance of systems .","we subsequently replicate a recent large-scale evaluation that relied on , what we now know to be , suboptimal rouge variants revealing distinct conclusions about the relative performance of state-of-the-art summarization systems ." +"the phonological operations of such as primary stress and higher pitch have been well noted in the literature , culicover and rochemont among others ) .","the phonological processing of such as primary stress and higher pitch have been well noted in the literature , culicover and rochemont among others ) ." +though there has been a growing interest in mwes ( cite-p-17-1-11 ) .,there has been a growing awareness of japanese mwe problems ( cite-p-17-1-0 ) . +a central problem in grounded language acquisition is learning the correspondences between a rich world state .,a more flexible direction is grounded language acquisition : learning the meaning of sentences in the context of an observed world state . +in the context of one form of binary unbalanced task 2 : annotation of transcribed human-human dialogue for presence / absence of uncertainty .,we demonstrate the effectiveness of our approach in the context of one form of unbalanced task : annotation of transcribed human-human dialogues for presence/absence of uncertainty . +we present theoretical results concerning the correctness and efficiency of the proposed algorithms .,we present both theoretical and empirical results concerning the correctness and efficiency of these algorithms . +"in this paper , we demonstrate that a substantially simpler approach that starts from a tree drawn from the uniform distribution .","in this paper , we follow a different strategy , arguing that a much simpler inference strategy suffices ." 
+"we use the moses toolkit to create a statistical phrase-based machine translation model built on the best pre-processed data , as described above .","we use the moses package for this purpose , which uses a phrase-based approach by combining a translation model and a language model to generate paraphrases ." +"this paper proposes a knowledge-based method , called structural semantic relatedness ( ssr ) , which can enhance the named entity disambiguation by capturing and leveraging the structural semantic knowledge from multiple knowledge sources .","this paper proposes a knowledge-based method , called structural semantic relatedness ( ssr ) , which can enhance the named entity disambiguation by capturing and leveraging the structural semantic knowledge in multiple knowledge sources ." +lewis and gale pioneered the use of active learning for text categorization .,"in terms of active learning , lewis and gale mentioned the use of virtual examples in text classification ." +since the composite language model effectively encodes long range dependencies of natural language that n-gram is not viable to consider .,"although the markov chains are efficient at encoding local word interactions , the n-gram model clearly ignores the rich syntactic and semantic structures that constrain natural languages ." +"sis does not add extra complexity to my treatment of time-dependent expressions , but is needed for purposes of discourse understanding in general .","in section 7 , i argue that the context-dependent feature of the analysis does not add extra complexity to my treatment of time-dependent expressions , but is needed for purposes of discourse understanding in general ." 
+the newer method of latent semantic indexing 3 is a variant of the vsm in which documents are represented in a lower dimensional space created from the input training dataset .,the newer method of latent semantic indexing 1 is a variant of the vsm in which documents are represented in a lower dimensional space created from the input training dataset . +they apply the semi-supervised learning approach of suzuki and isozaki to dependency parsing and include additionally the cluster-based features of koo et al .,they extended a semi-supervised structured conditional model to the dependency parsing problem and combined their method with the approach of koo et al . +and we describe our long-term annotation effort to identify the dialect level ( and dialect itself ) in each sentence .,"furthermore , we describe a long-term crowdsourced effort to have the sentences labeled by arabic speakers for the level of dialect in each sentence and the dialect itself ." +"we proposed convolutional architectures for obtaining a guided representation of the entire source sentence , which can be used to augment the n-gram target language .","in this paper , we propose novel convolutional architectures to dynamically encode the relevant information in the source language ." +the two baseline methods were implemented using scikit-learn in python .,all linear svm models were implemented with scikit-learn and trained and tested using liblinear backend . +"neural models , with various neural architectures , have recently achieved great success .",neural language models based on recurrent neural networks and sequence-tosequence architectures have revolutionized the nlp world . +"therefore , backtranslation was adapted to train translation systems in a true translation setting based on monolingual corpora .","therefore , backtranslation was adapted to train a translation system in a true translation setting based on monolingual corpora ." 
+"in this paper , we concentrate on identifying predictive opinion .","in this paper , we proposed a framework for working with predictive opinion ." +"our results show that our decoding framework is effective and leads to substantial improvements in translations generated from the intersected models , where the typical greedy or beam search .","our empirical results show that our decoding framework is effective , and can leads to substantial improvements in translations , especially in situations where greedy search and beam search are not feasible ." +"by combining the hal model and relevance feedback , the cip can induce semantic patterns from the unannotated web corpora .",the hal model provides an informative infrastructure for the cip to induce semantic patterns from the unannotated psychiatry web corpora . +"woodsend and lapata utilized ilp to jointly optimize different aspects including content selection , surface realization , and rewrite rules in summarization .","woodsend and lapata , 2012 , utilized ilp to jointly optimize different aspects including content selection , surface realization , and rewrite rules in summarization ." +this strategy has been successful and commonly used in coreference resolution .,this is a practice common in the analysis of coreference resolution approaches . +"we proposed a general cross-lingual knowledge extraction framework called wikicike , in which extraction .",we propose a transfer learning-based cross-lingual knowledge extraction framework called wikicike . +rnn can model the entire sequence and capture long-term dependencies .,rnn can model the whole sequence and capture long-term dependencies ( cite-p-18-1-3 ) . +"in this paper , we exploit non-local features as an estimate of long-distance dependencies .","to solve this problem , we propose an approach to exploit non-local information in the next section ." 
+"recently , has introduced an alternative way to generate word embeddings using the skipgram model trained with stochastic gradient descent and negative sampling , named as sgns .","recently , proposed two particular models , skipgram and cbow , to learn word representations in large amounts of text data ." +"inference rules are an important building block of many semantic applications , such as question answering and information extraction .","inference rules for predicates have been identified as an important component in semantic applications , such as question answering and information extraction ." +that can scale linearly in the number of modalities .,lmf scales linearly in the number of modalities . +"to identify portions of the sentence relevant to the single feature f , we use the stanford typed dependency parser .","therefore , we apply the stanford parser to the definition of a page in order to extract all the dependency relations of the sentence ." +yamada and knight proposed a syntax-based translation model that transfers a source parse tree into a target string .,yamada and knight further extended the model to a syntax-to-string translation modeling . +we present a simple technique for mitigating the memory bottleneck in parallel .,we present a greedy document partitioning technique for the task . +we further extend the sparse prototype information to other words based on distributional similarity .,this sparse prototype information is then propagated to other words based on distributional similarity . +"for subtask b , besides ir method and traditional machine learning method , we also proposed two novel methods to improve semantic similarity estimation between question-question ( q-q ) pairs .","for subtask b , we proposed two novel methods to improve semantic similarity estimation between question-question pair by integrating the rank information of question-comment pair ." 
+pugs extend unification grammars based on feature structures by allowing a greatest diversity of geometric structures and a best control of resources .,pugs extend unification grammars with an explicit control of the saturation of structures by attributing a polarity to each object . +"using recently proposed axiomatic approaches and find that , with appropriate term weighting strategy , we are able to exploit the information from lexical resources to significantly improve the retrieval performance .","unlike previous studies , we are able to show that query expansion using only manually created lexical resources can significantly improve the retrieval performance ." +we used the maximum entropy approach 5 as a machine learner for this task .,we utilize maximum entropy model to design the basic classifier used in active learning for wsd and tc tasks . +this combinatorial optimisation problem can be solved in polynomial time through the hungarian algorithm .,"this problem can be solved in polynomial time , using eg , the hungarian algorithm ." +we formulate the global model as a generator and the language classification model as a discriminator using generative adversarial network .,we propose to use a generative adversarial network that consists of a generator g and a discriminator d . +our results suggest that the new corpus is the most robust resource for classifying argumentative text .,our results clearly indicate that training on the created webis-debate-16 corpus yield the most robust cross-domain classifier . +"pang et al , turney , we are interested in fine-grained subjectivity analysis -the identification , extraction and characterization of subjective language at the phrase or clause level .","pang et al , turney , we are interested in fine-grained subjectivity analysis , which is concerned with subjectivity at the phrase or clause level ." 
+soft cardinality was used successfully for the sts task in previous semeval editions .,this representation was used successfully for addressing the sts task with purely string-based approaches . +semantic role labeling was pioneered by gildea and jurafsky .,semantic role labeling was first defined in gildea and jurafsky . +"although there are several well-known spectral clustering algorithms in the literature , shi and malik , kannan et al , we adopt the one proposed by ng et al , as it is arguably the most widely-used .","although there are several well-known spectral clustering algorithms in the literature , meilă and shi , kannan et al , we adopt the one proposed by ng et al , as it is arguably the most widely used ." +"for the automatic evaluation , we used the bleu metric from ibm .","for the automatic evaluation we used the bleu , meteor and chrf metrics ." +"in this paper , we showed how a semantic relatedness measure computed in a multilingual space is able to acquire and leverage additional information from the multilingual representation , and thus be strengthened .","in this paper , we take a step further and explore a joint multilingual semantic relatedness metric , which aggregates semantic relatedness scores measured on several different languages ." +"based on part-of-speech information obtained using lets preprocess , we discarded all words but nouns , adjectives , adverbs , and verbs .","as content words , we considered nouns , adjectives , adverbs , and verbs , based on the part-of-speech output of the lets preprocess toolkit ." +"for both languages , we achieved the best results among all participants .","for both languages , english and spanish , we achieved the best results of all participants ( value f1 ) ." +memory consumption mainly comes from the embedding layers .,the memory consumption mainly comes from the word embedding layer . 
+a conditional random field can be seen as an undirected graph model in which the nodes corresponding to the label sequence y are conditional on the observed sequence x .,"conditional random fields is a discriminative model that estimates joint distribution pover the target sequence y , conditioned on the observed sequence x ." +"in this paper , we present a statistical analysis model for coordination disambiguation .","in this paper , we presented an efficient method for detecting and disambiguating coordinate structures ." +"for the contextual polarity disambiguation subtask , covered in section 2 , we use a system that combines a lexicon based approach to sentiment detection .","in the contextual polarity disambiguation subtask , we use a sentiment lexicon approach combined with polarity shift detection and tree kernel based classifiers ." +"veale and hao , however , did not evaluate to which extent their knowledge base of talking points and the associated reasoning framework are useful to interpret metaphorical expressions occurring in text .","veale and hao , however , did not evaluate to what extent their knowledge base of talking points and the associated reasoning framework are useful to interpret metaphorical expressions occurring in text ." +"we apply the adam algorithm for optimization , where the parameters of adam are set as in .","in practice , we set all weights λ j to 1 , and employ adam for optimization ." +we use the max-margin criterion to train our model .,"motivated by the work , we adopt the max-margin objective ." +"more recently , abandah et al trained a recurrent neural network to transcribe undiacritized arabic text into fully diacritized sentences .",abandah et al trained a recurrent neural network to transcribe undiacritized arabic text with fully diacritized sentences . 
+"examples , our method yields significant improvements over state-of-the-art supervised methods , achieving best reported numbers to date .","furthermore , applying our method in a setting where all labeled examples are available also shows improvements over state-of-the-art supervised methods ." +recent empirical improvements with language models have showed that unsupervised pretraining on very large corpora is an integral part of many nlp tasks .,recent progress in natural language understanding shows that pre-training transformer decoders on language modelling tasks leads to remarkable transferable knowledge which boosts performance on a wide range of nlp tasks . +we present experiments aiming at automatically classifying spanish verbs into lexical semantic classes .,we present experiments aiming at an automatic classification of spanish verbs into lexical semantic classes . +"evaluations , hisan outperformed baseline methods .","in human evaluations , hisan also outperformed the baseline methods ." +this paper described a new approach to exploit relational models of dialogue .,this paper describes ongoing work on a new approach to dialogue management which attempts to fill this gap . +"to the best of our knowledge , there is no measure that would relate granularity , reliability of the annotation ( derived from iaa ) and the resulting information .","to the best of our knowledge , there has been no exact measure for the optimization , and the usefulness of a given resource can only be assessed when it is finished and used in applications ." +"on the wmt ’ 14 englishto-french task , we achieve bleu = 37 . 7 with a single attention .",our models are also validated on the more difficult wmt ’ 14 englishto-german task . 
+"to this end , we extend the dynamic oracle defined by goldberg and nivre , considering dag parsing arc-eager system of sagae and tsujii .",note that the result of applying the no-covered-roots restriction alone is equivalent to the arc-eager parser by sagae and tsujii . +"syllabic units , however , rival the performance of morphological units .","syllabic units , however , rival the performance of morphemes in the kws task ." +so we can easily provide comparison between both languages .,we provide comparison of both algorithms . +"using a dropout q-network , a companion strategy is proposed to control when the student policy directly consults rules and how often the student policy learns from the teacher ’ s experiences .",a novel agent-aware dropout deep q-network ( aad-dqn ) is proposed to address the problem of when to consult the teacher and how to learn from the teacher’s experiences . +we exploit the svm-light-tk toolkit for kernel computation .,our system uses the svm-light-tk toolkit 3 for computation of the hybrid kernels . +"in lm adaptation , this paper investigates how to effectively leverage named entity information for latent topic analysis .","in this paper , we investigate unsupervised lm adaptation using clustering and lda based topic analysis ." +we used the svm-light-tk 5 to train the reranker with a combination of tree kernels and feature vectors .,"we trained the rerankers using svm-light-tk 6 , which enables the use of structural kernels in svm-light ." +"nishigauchi and watanabe claimed that there were island constraints in japanese , but ishihara and sprouse et al mentioned that this language had no island constraint .","for example , nishigauchi and watanabe claimed that there were island constraints in japanese , but ishihara and sprouse et al mentioned that this language had no island constraint ." 
+"in this paper , we overcome this shortcoming using a constrained multi-task pairwise-preference learning .","in this paper , we address this problem of prompt adaptation using multi-task learning ." +"traditional ways to measure their relevance includes cosine distance , bilinear model , single layer neural network , etc .","conventional methods to measure the relevance between two arguments includes bilinear model , and single layer neural networks , etc ." +"knight and graehl , 1997 , describe a backtransliteration system for japanese .","knight and graehl , 1997 , describe a back transliteration system for japanese ." +we use the pku and msra data provided by the second international chinese word segmentation bakeoff to test our model .,we used the data provided by the second sighan bakeoff to test the two segmentation models . +we discuss three data sampling techniques that deal with this skewness .,we also use three data sampling approaches to solve the problem of data skewness . +koehn and knight derived such a seed lexicon from german-english cognates which were selected by using string similarity criteria .,koehn and knight tested this idea on a larger test set consisting of the 1000 most frequent words from a german-english lexicon . +"we present h eady , which is at the same time a novel system for abstractive headline generation , and a smooth clustering of patterns .","we have presented h eady , an abstractive headline generation system based on the generalization of syntactic patterns by means of a noisy-or bayesian network ." +1 the omitted argument is called a zero pronoun .,"a zero pronoun is a gap in the sentence , which refers to the component that is omitted because of the coherence of language ." +we propose a sense-aware neural model in this study .,we propose a sense-aware neural model to address this challenging task . 
+we use a shared subword vocabulary by applying byte-pair encoding to the data for all variants concatenated .,"following koehn and knowles , we process all the data with byte-pair encoding to construct a vocabulary of 50k subwords ." +"on a large scale , to maximize system performance , we explore different unsupervised feature learning methods to take advantage of a large amount of unsupervised social media data .","since sud data are often expensive to obtain at a large scale , to maximize system performance , we focus on methods that employ unsupervised feature learning to take advantage of a large amount of unsupervised social media data ." +"in this paper , we extract implicit interpretations intuitively understood by humans .","in this paper , we have presented a methodology to extract implicit interpretations from modal constructions ." +we successfully apply the attention scheme to detect word senses and learn representations according to contexts .,sac utilizes the attention scheme to automatically select appropriate senses for context words according to the target word . +"in this paper , we test this hypothesis by combining an incremental tag parser with an incremental semantic role labeler .","to test this hypothesis , we combined an incremental tag parser with an incremental semantic role labeler ." +"shriberg and stolcke studied the location and distribution of repairs in the switchboard corpus , but did not propose an actual model of repairs .","shriberg and stolcke studied the location and distribution of repairs in the switchboard corpus , the primary corpus for speech disfluency research , but did not propose an actual model of repairs ." 
+"experimental results show that , our method can improve the translation performance significantly on both data sets , compared with a state-of-the-art baseline .","experimental results show that , our method can significantly improve machine translation performance on both iwslt and nist data , compared with a state-of-the-art baseline ." +"in this paper , we propose an alternative approach for parsing .",we present a method for cross-formalism transfer in parsing . +"results show the effectiveness of collocation features , context words features and sentiment of neighboring sentences .","this result shows that collocation , context-words and neighboring sentence sentiment are effective in sentiment adjectives disambiguation ." +"ibm constraints , the lexical word reordering model , and inversion transduction grammar constraints belong to this type of approach .","ibm constraints , lexical word reordering model , and inversion transduction grammar constraints belong to this type of approach ." +"in this work , we propose a new approach to summarizing student course feedback .","in this work , we aim to summarize the student responses ." +we use the classifieds data provided by grenager et al and compare with results reported by hk06 and crr07 .,we use the classifieds data provided by grenager et al and compare with results reported by crr07 and mm08 for both supervised and semi-supervised learning . +"our model is several times faster than , while the accuracy is on par with the baseline lstm model .","surprisingly , we also observe our model beats the standard lstm in terms of accuracy ." +"ian interactively learns attentions in the contexts and targets , and generates the representations for targets and contexts separately .","ian interactively learns the coarse-grained attentions between the context and aspect , and concatenate the vectors for prediction ." 
+"the sentences were dependencyparsed with cabocha , and co-occurrence samples of event mentions were extracted .","the sentences were dependencyparsed with cabocha , and cooccurrence samples of event mentions were extracted ." +we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training .,"for language model , we train a 5-gram modified kneser-ney language model and use minimum error rate training to tune the smt ." +into an fst leads to far too big search spaces .,"however , the search space in mt can be quite large ." +that is it easy to incorporate source syntax in the stringto-tree model .,"by contrast , we incorporate source syntax into a stringto-tree model ." +somasundaran et al investigated subjectivity classification in meetings .,somasundaran et al developed a scheme for annotating sentiment and arguing expressions in meetings . +"transition-based dependency parsers scan an input sentence from left to right , and perform a sequence of transition actions to predict its parse tree .",greedy transition-based dependency parsers incrementally process an input sentence from left to right . +this streamlined architecture is able to outperform state-of-the-art results on a temporal qa task with a large margin .,"evaluated on qa-tempeval ( semeval2015 task 5 ) , our proposed technique outperforms state-of-the-art methods by a large margin ." +"the csj is a collection of monologues and dialogues , the majority being monologues such as academic presentations and simulated public speeches .","the csj is the biggest spontaneous speech corpus in the world , consisting of roughly 7m words with the total speech length of 700 hours , and is a collection of monologues such as academic presentations and simulated public speeches ." 
+"in the news domain , the task is often called wikification or entity linking and has been studied extensively recently .","in particular , when linked to wikipedia articles , the task is called wikifiation ." +other models use statistical language models to determine the most likely substitutes to represent the contexts .,other approaches rely on statistical language models to determine the most likely substitutes to represent contexts . +"however , only a small amount of annotated data is available for training quality assessment .","however , there is a lack of training data annotated with fine-grained quality information ." +wizard-of-oz frameworks have been used since early 90s in order to collect human-computer dialogue data to help design dialogue systems .,wizard-of-oz frameworks have been used in several studies since in order to collect human-computer dialogue data to help design dialogue systems . +emotion cause extraction can reveal important information about what causes a certain emotion .,emotion cause extraction aims to identify the reasons behind a certain emotion expressed in text . +the spelling normalisation component is a character-based statistical machine translation system implemented with the moses toolkit .,baseline is a phrase-based machine translation system which consists of training data comprising a bilingual dataset without preordering . +the semantic relations and clusters have been shown to be very effective knowledge sources for such nlp tasks as wsd and interpretation of noun sequences .,"moreover , those relations have been shown to be very effective knowledge sources for wsd and interpretation of noun sequences ." +in order to test statistical significance of differences between models we use stratified shuffling .,we calculate statistical significance of performance differences using stratified shuffling . 
+"hierarchical phrase-based translation ( hiero , ( cite-p-9-1-1 ) ) provides an attractive framework within which both short-and long-distance reorderings can be addressed consistently .","hierarchical phrase-based translation ( hiero , ( chiang , 2005 ) ) has proven to be a very useful compromise between syntactically informed and purely corpus-driven translation ." +"the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentenceinitial words uncapitalized .","all evaluated systems use the same surface trigram language model , trained on approximately 340 million words from the english gigaword corpus using the srilm toolkit ." +we developed our approaches mostly in python using the igraph library for the graph representation and main core extraction .,we used the r igraph package to write graph building and weighted k-core implementation code . +"for example , sentences such as “ bake for 50 minutes ” do not explicitly mention what to bake .","for example , sentences such as “bake for 50 minutes” do not explicitly mention what to bake or where ." +lossy ’ s extractions have proven useful as seed definitions in an unsupervised wsd task .,g lossy ’ s extractions have proven useful as seed definitions in an unsupervised wsd task . +in this paper we introduce a joint theoretical model for comprehensive semantic representation of the structure of comparison and ellipsis .,in this paper we introduced a novel semantic framework for jointly capturing the meaning of comparison and ellipsis constructions . +user : i want to prevent tom from reading my file .,user : i was cleaning out my account when i acciden- +this paper proposes a method for dealing with repairs in action control dialogue .,this paper proposed a method to identify the target of a driu for conversational agents in action control dialogue . 
+we use machine-learning techniques to build a semantic interpreter using the explicit semantic analysis approach .,we use the explicit semantic analysis based on wikipedia to compute semantic relatedness between concepts . +yih et al focused on answering single-relation factual questions by a semantic similarity model using convolutional neural networks .,yih et al used convolutional neural networks to answer single-relation questions on reverb . +swier and stevenson induce role labels with a bootstrapping scheme where the set of labeled instances is iteratively expanded using a classifier trained on previously labeled instances .,swier and stevenson induce role labels with a bootstrapping scheme in which the set of labeled instances is iteratively expanded using a classifier trained on previously labeled instances . +lin et al defined a search goal as an action-entity pair and utilized web trigram to generate fine-grained search goals .,lin et al defined a search goal as an action-entity pair and utilized a web trigram to identify fine-grained search goals . +"we illustrate the different effect of four feature types including direct lexical matching , idf-weighted lexical matching , modified bleu n-gram matching and named entities matching .","the experiments illustrate the different effect of four feature types including direct lexical matching , idf-weighted lexical matching , modified bleu n-gram matching and named entities matching ." +"tree substitution grammar ( tsg ) is a compelling grammar formalism which allows nonterminal rewrites in the form of trees , thereby enabling the modelling of complex linguistic phenomena such as argument frames , lexical agreement and idiomatic phrases .",tree substitution grammar ( tsg ) is a promising formalism for modeling language data . 
+"in this experiment , we only use sentiment related words as features to represent opinion documents .","in this experiment , we only use sentiment related words as features to represent opinion documents for classification , instead of using all words ." +"although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution — .","although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution—we see this as merely the first step in addressing a complex problem ." +of hobbs ' algorithm was implemented in the slot grammar framework .,it scored higher than a version of hobbs ' algorithm that we implemented for slot grammar . +the stanford dependency parser is used for extracting features from the dependency parse trees .,this metric uses the stanford dependency parser to generate the dependencies . +"we use a keras implementation , and fit the model parameters with adam with a batch size of 32 and iterations of 20 epochs .",we used a categorical cross entropy loss function and adam optimizer and trained the model for 10 epochs . +the affective text shared task on news headlines for emotion and valence level identification at semeval 2007 has drawn focus to this field .,affective text shared task on news headlines at semeval 2007 for emotion and valence level identification has drawn the focus to this field . +coreference resolution can benefit from semantic knowledge .,semantics plays indeed a role in coreference resolution . +"in this paper , we propose a simple but novel approach to automatically generate large-scale pseudo training data .","also we propose a two-step training method , which benefit from both large-scale pseudo training data and task-specific data , showing promising performance ." 
+"in chen , the authors romanized chinese nes and selected their english transliterations from english nes extracted from the web by comparing their phonetic similarities with chinese nes .","in , the authors romanized chinese nes and selected their english transliterations from english nes extracted from the web by comparing their phonetic similarities with chinese nes ." +we tag the source language with the stanford pos tagger .,"for tagging , we use the stanford pos tagger package ." +a tree transformation has linear size-increase if the size of each output tree is linearly bounded by the size of its corresponding input tree .,a tree transformation is sensible if the size of each output tree is uniformly bounded by a linear function in the size of the corresponding input tree . +"we build upon desr , the shift-reduce parser described in .","as a base parser we use desr , a shift-reduce parser described in ." +"for each task , we provide separate training , development , and test datasets for english , arabic , and spanish tweets .","for each task , we created labeled data from english , arabic , and spanish tweets ." +a kg is a directed graph whose nodes correspond to entities and edges to relations .,a kg is a multi-relational directed graph composed of entities as nodes and relations as edges . +"with a single rnn , we show a 54 % error reduction in relations that are available only sparsely .","on a large-scale freebase+clueweb prediction task , we achieve 25 % error reduction , and a 53 % error reduction on sparse relations ." +"in this paper , we adopt the full binary tree as the topological structure .","unlike grconv and adasent , our model uses full binary tree as the topological structure ." +"nonnative speakers make mistakes in a systematic manner , and errors often depend on the first language of the writer .",mistakes made by non-native speakers are systematic and also depend on the first language of the writer . 
+baldwin et al employed latent semantic analysis to determine the decomposability of mwes .,katz and giesbrecht and baldwin et al use latent semantic analysis for this purpose . +donaway et al suggested that it might be possible to use contentbased measures for summarization evaluation without generating model summaries .,donaway et al used exhaustive search to generate all three sentences extracts to evaluate different evaluation metrics . +the novelty of our work is the transformation of a source language topic model rather than the creation of a language independent model from parallel data .,the novelty of our work is the proposal of a method to automatically extract persuasive argumentation features from political debates by means of the use of semantic frames as pivoting features . +"being a specific case , our model can be easily generalized and applied to the other sequence labeling tasks .","besides , our model can also be easily generalized and applied to other sequence labeling tasks ." +"conditional random fields are undirected graphical models trained to maximize a conditional probability of random variables x and y , and the concept is well established for sequential labeling problem .","conditional random fields are undirected graphical models trained to maximize a conditional probability of random variables and y , and the concept is well established for sequential labeling problem ." +parser is trained by jointly optimizing performance on a syntactic parsing task and a distantly-supervised relation extraction task .,our parser is trained by combining a syntactic parsing task with a distantly-supervised relation extraction task . +we extend this line of work to study the extent to which discriminative learning methods can lead to better generative language models .,we propose a new approach to language modeling which utilizes discriminative learning methods . 
+testing data with automatic evaluation as well as human judgments suggest that the proposed method is able to enhance the paraphrase quality .,"with both automatic and human evaluations , the results show that the proposed method effectively balance between adequacy and dissimilarity ." +"embeddings , have recently shown to be effective in a wide range of tasks .","multitask learning models have been proven very useful for several nlp tasks and applications , ." +"conversely , a comparable corpus is a collection of multilingual documents written over the same set of classes ( ni et al. , 2011 ; yogatama and tanaka-ishii , 2009 ) without any restriction about translation or perfect correspondence between documents .",a comparable corpus consists of documents in two or more languages or varieties which are not translation of each other and deal with similar topics . +clarke and lapata use integer linear programming to find the optimal compression per sentence within linguistic constraints .,"clarke and lapata improved the above discriminative model by using ilp in decoding , making it convenient to add constraints to preserve grammatical structure ." +"our method achieves competitive rouge score and has good readability , while is much faster than the integer linear programming ( ilp ) method .","our experimental results showed that the graph cut based method achieved competitive performance compared to ilp , while about 100 times faster ." +"argument mining consists of the automatic identification of argumentative structures in documents , a valuable task with applications in policy making , summarization , and education , among others .",argument mining is a trending research domain that focuses on the extraction of arguments and their relations from text . +machine comprehension of text is a typical natural language processing task which remains an elusive challenge .,machine comprehension of text is the central goal in nlp . 
+"in the tagging scheme for such languages , a complete pos tag is formed by combining tags from multiple tag sets defined for each morphosyntactic category .","one reason for this is that in the tagging scheme for such languages , a complete pos tag is formed by combining tags from multiple tag sets defined for each morphosyntactic category ." +"aue and gamon explored various strategies for customizing sentiment classifiers to new domains , where the training is based on a small number of labelled examples and large amounts of unlabelled in-domain data .",aue and gamon attempt to solve the problem of the absence of large amounts of labeled data by customizing sentiment classifiers to new domains using training data from other domains . +"we employ the crf implementation in the wapiti toolkit , using default settings .",we choose the crf learning toolkit wapiti 1 to train models . +hashtags are spelling mistakes of twitter .,hashtags are extremely popular in twitter . +alignment and segmentation procedures were implemented with the help of openfst .,we implemented our model with wfsas using the openfst library . +"patty devised a sequence mining algorithm to extract relational phrases with semantic type signatures , and organized them into synonymy sets and hypernymy hierarchies .","patty used sequence mining algorithms for gathering a general class of relational phrases , organizing them into synsets , and inferring lexical type signatures ." +"it is a specific kind of generalized linear model , where its function is the logit function and the independent variable y is a binary or dicothomic variable which has a bernoulli distribution .",it is a specific kind of generalized linear model where its function is the logit function and the dependent variable y is a binary or dichotomic variable which has a bernoulli distribution . 
+"in particular , we employ the nonparametric bayesian phrasal inversion transduction grammar of neubig et al to perform phrase table extraction .","here , we employ a method proposed by neubig et al , which uses parametric bayesian inference with the phrasal itgs ." +topic signatures are word vectors related to a particular topic .,topic signatures are weighted topical vectors that are associated with senses or concepts . +klein and manning present a generative model for inducing constituent boundaries from part-of-speech tagged text .,klein and manning presented another approach focusing on constituent sequences called the constituent-context model . +our system ¡¯ s best result ranked 35 among 73 system runs with 0 . 7189 average pearson correlation over five test sets .,our system¡¯s best result ranked 35 among 73 submitted runs with 0.7189 average pearson correlations over five test sets . +the ef cambridge open language database is an english l2 corpus that was released recently .,the ef cambridge open language database is an english l2 corpus that was released in 2013 and used for nli in . +"this seems in line with the finding of watanabe et al that with on the order of 10,000 features , overfitting is possible , but we can still improve accuracy on new data .","when training over 10,000 features on a modest amount of data , we , like watanabe et al , did observe overfitting , yet saw improvements on new data ." +we used 300-dimensional pre-trained glove word embeddings .,we downloaded glove data as the source of pre-trained word embeddings . +a comparable corpus is a collection of texts composed independently in the respective languages and combined on the basis of similarity of content ( cite-p-12-1-15 ) .,a comparable corpus consists of documents in two or more languages or varieties which are not translation of each other and deal with similar topics . 
+"to re-train and evaluate models with different feature sets , we use the same training , development and test sets as provided in the conll shared task .","we use the same training , development and out-of-domain test set as provided in the conll 2009 shared task ." +articles that summarize the state-of-the-art are available in ( cite-p-20-1-2 ) .,a good survey of the state of the art is available in ( cite-p-20-12-0 ) . +"for our classifiers , we used the weka implementation of na茂ve bayes and the svmlight implementation of the svm .","for our learners , we used the na茂ve bayes implementation in the weka machine learning toolkit , a support vector machine , 3 and the crf implementation in mallet ." +"in this pilot study , we measure the extent to which human perception of basic user trait information .","in this study , we explore the feasibility of controlling human perception of traits using automated methods ." +we propose an event detection algorithm based on the sequence of community level emotion distribution .,"in this study , we focus on the problem of community-related event detection by community emotions ." +"better results were obtained using case frames constructed from larger corpora , and the performance showed no saturation even when the corpus size was 1 . 6 billion sentences .",we obtained better results by using case frames constructed from larger corpora ; the performance was not saturated even with a corpus size of 100 billion words . +"grosz , joshi , and weinstein admit that several factors may have an influence on the ranking of the cf but limit their exposition to the exploitation of grammatical roles only .","grosz , joshi , and weinstein state that cf may be ordered using different factors , but they only use information about grammatical roles ." 
+react achieves an accuracy of 92 % in distinguishing between onand off-topic information .,react achieves an accuracy of 92 % for the onand off-topic classification task and an f 1 -measure of 72 % for the semantic annotation . +"so , we used the lexrank algorithm to summarize the event clusters obtained in the previous step .",we extended the lexrank graph-based algorithm with information from the ner step . +parameters are initialized using the method described by glorot and bengio .,both were initialised by uniformly sampling values from the symmetric interval suggested by glorot and bengio . +"then we split the words into subwords by joint bytepair-encoding with 32,000 merge operations .","we further split the words into sub-words using joint bpe with 32 , 000 merge operations ." +that will provide further insights into the characterization of preposition behavior .,further insights may be available from the finer-grained data available in the preposition disambiguation task . +"nevertheless , the research community has been aware of the deficiencies of the bleu metric .","however , the research community is also aware of the deficiencies of these metrics ." +the key component is a new procedure to directly optimize the global scoring function used by a smt decoder .,"the key component is the so-called alignment model , which makes sure the embeddings of entities , relations , and words are in the same space ." +we used an implementation of the mira algorithm for regression .,"we implemented a version of mira from crammer and singer , which we used for regression ." +"in section 2 , we provide some background and review previous work on graph-based dependency parsing for mono-and cross-lingual settings .","in section 2 , we provide some background and review previous work on graph-based dependency parsing for mono- and cross-lingual settings and on word embeddings ." 
+"in this paper , we have proposed an approach to question search which models question topic and question focus .",we also propose to use the mdlbased tree cut model for identifying question topic and question focus automatically . +"in our dataset , we additionally provide the most similar training questions for each challenge .","in our dataset , we additionally provide the most similar training questions for each challenge question ." +our first two models include various lexical and syntactical constraints based on the work of clarke and lapata .,"in the following , we heavily rely on the work of clarke and lapata , who develop an approach based on ilp for monolingual sentence compression ." +word-based models are not suitable to process such complex languages .,"therefore , it is not suitable to exploit existing word-based models to translate this set of languages ." +"transliteration is a subtask in ne translation , which translates nes based on the phonetic similarity .","transliteration is a key building block for multilingual and cross-lingual nlp since it is essential for ( i ) handling of names in applications like machine translation ( mt ) and cross-lingual information retrieval ( clir ) , and ( ii ) user-friendly input methods ." +"while antonymy is defined as the oppositeness between words , synonymy refers to words that are similar in meaning ( cite-p-21-1-2 , cite-p-21-4-5 ) .","while antonymy is defined as the oppositeness between words , synonymy refers to words that are similar in meaning ( cite-p-11-1-2 , cite-p-11-1-10 ) ." +"among parallel jobs , training is efficiently and easily carried out by distributing training data among shards and by mixing parameters in each iteration ( cite-p-14-3-10 ) .",learning is efficiently parallelized by splitting training data among shards and by merging parameters in each round ( cite-p-14-3-10 ) . 
+and evaluate our methodology using both intrinsic and extrinsic measures .,we evaluate our methodology using intrinsic and extrinsic measures . +rcm might provide an automatic way to quantitatively measure the knowledge levels of words .,it is also observed that rcm might automatically measure the knowledge levels of words . +in this paper we attempt to deliver a framework useful for analyzing text in blogs .,in this paper we model discussions in online political blogs . +each document was split into sentences using the punkt sentence tokenizer in nltk .,individual word tokens in the utterances were further processed with the porter stemmer in the nltk package . +"in the source language , we infer constraints over the label distribution in the target language , and train a discriminative model .","to this end , we use automatically word aligned bitext between the source and target language pair , and learn a discriminative conditional random field model on the target side ." +we explained our participation in the new challenging task of cross-lingual textual entailment ( clte ) for content synchronization .,this paper describes our participation in the task denominated cross-lingual textual entailment ( clte ) for content synchronization . +we used the disambig tool provided by the srilm toolkit .,we use a 5-gram lm trained on the spanish part of europarl with the srilm toolkit . +"supervised approaches include the bayesian classifier , maximum entropy , skip-chain crf , discriminative reranking , among others .","supervised methods include hidden markov model , maximum entropy , conditional random fields , and support vector machines ." +"to predict labels , we train conditional random fields , which are directly optimized for splitting .",we use a conditional random field formalism to learn a model from labeled training data that can be applied to unseen data . 
+"in this work , we develop neural models in a sequential way , and encode sentence semantics and their relations automatically .","in this work , we represent document with convolutional-gated recurrent neural network , which adaptively encodes semantics of sentences and their relations ." +"on a set of manually annotated verbal readings , we found that our lexicon provided enough information to reliably predict the aspectual value of verbs across their readings .",our results experimentally confirm the theoretical assumption that a sufficiently detailed lexicon provides enough information to reliably predict the aspectual value of verbs across their readings . +that consists of two parts : multi-channel cnn and lstm .,"it consists of two parts : multi-channel cnn , and lstm ." +kendall ’ s math-w-2-5-2-97 as a performance measure for evaluating the output of information-ordering components .,"kendall ’ s math-w-11-5-2-1 can be easily used to evaluate the output of automatic systems , irrespectively of the domain or application at hand ." +"in this work , we assume the general connotation of each word over statistically prevailing senses .","in contrast , we present an automatic approach that infers the general connotation of words ." +"with this result , we further show that these paraphrases can be used to obtain high precision surface patterns .",we have also shown that these paraphrases can be used to obtain high precision extraction patterns for information extraction . +one uses confusion network decoding to combine translation systems as described in and .,one uses confusion networks formed along a skeleton sentence to combine translation systems as described in and . +"cnn is a neural network that can make use of the internal structure of data such as the 2d structure of image data through convolution layers , where each computation unit responds to a small region of input data ( e.g. 
, a small square of a large image ) .",a cnn is a feedforward network with convolution layers interleaved with pooling layers . +"englishgerman , english-french and chinese-to-english translation tasks .","english-german , english-french and chinese-to-english translation tasks ." +our macro f1 score were 32 . 73 % and 17 . 98 % for our english data and spanish data .,we achieved a macro f1 score of 32.73 % for the english data and 17.98 % for the spanish data . +"to be able to recognize non-continuous entities , an sbieon encoding was used .",all models used the sbieon encoding to support the recognition of non-continuous entities . +this approach is similar to previous structure learning modelings for dependency parsing .,such a model is extended from a graph-based model for dependency parsing . +"we also extract subject-verbobject event representations , using the stanford partof-speech tagger and maltparser .","we use the unigrams and bigrams to represent lexical features , and the stanford partof-speech tagger to extract the lexicalized named entity and part-of-speech features ." +"however , the ensemble system is able to effectively leverage this higher prediction , producing a prediction of 1 . 76 , which is the most accurate .","not surprisingly , the ensemble system performs the best , obtaining a weighted pearson correlation of 0.738 ." +zhang and clark proposed a word-based model using perceptron .,zhang and clark improve this model by using both character and word-based decoding . +the language model is trained and applied with the srilm toolkit .,trigram language models are implemented using the srilm toolkit . +experimental results show that our model outperforms competitive chinese poetry generation systems .,experimental results show that our model yields high quality poems compared to the state of the art . 
+"in mikolov et al , the authors are able to successfully learn word translations using linear transformations between the source and target word vector-spaces .",mikolov et al attempt different dimensions of word embedding for the source language and the target language to achieve the best translation quality . +the current state-of-the-art result on atis is an attention based bidirectional lstm model .,"as their published state-of-the-art result described in , their attention-based model is based on word-level embeddings ." +"wang et al , proposed an attention based lstm which introduced the aspect clues by concatenating the aspect embeddings and the word representations .",wang et al 2016 built on this framework and introduced attention mechanism for generating these sentential features . +"liu et al , meral et al , murphy , murphy and vogel and topkara et al all belong to the syntactic transformation category .","murphy , liu , sun , and wu , topkara , topkara , and atallah , meral et al , murphy and vogel , and meral et al all belong to this syntactic transformation category ." +this data was created by semi-automatically converting the penn treebank to ccg derivations .,ccgbank was created by semiautomatically converting the penn treebank to ccg derivations . +previous work has used citations to produce summaries of scientific work .,previous work that generate surveys of scientific topics use the text of citation sentences alone . +"more specifically , feature is the main verb token , extracted following the head-finding strategy by yamada and matsumoto , while feature is a boolean feature that indicates for each token if it is the main verb in the sentence or not .","shook in our example , extracted following the head-finding strategy by yamada and matsumoto , while feature is a boolean feature that indicates for each token if it is the main verb in the sentence or not ." 
+"with simplification , this method can be used in the traditional within-domain case .",the proposed method can also be used in the traditional within-domain problem with some simplifications . +"we perform experiments using the chinese treebank ( ctb ) corpora , demonstrating that the accuracies of the three tasks can be improved significantly over the pipeline .","in experiments using the chinese treebank ( ctb ) , we show that the accuracies of the three tasks can be improved significantly over the baseline models , particularly by 0.6 % for pos tagging and 2.4 % for dependency parsing ." +we use the europarl corpus in our experiments because our ultimate goal is to apply the analogy-based ebmt method to this kind of data .,we use the europarl parallel corpus as the basis for our small-scale cross-lingual experiments . +"urdu is the national language of pakistan , and also one of the state languages in india , and is written in perso-arabic script .",urdu is the national language of pakistan and one of the official languages of india . +hasegawa et al propose an approach that discovers relations between two named entity types .,hasegawa et al tried to extract multiple relations by choosing entity types . +"in this paper , we report on a set of initial results for englishto-arabic .","in this paper , we present an initial set of experiments on englishto-arabic smt ." +this type of model is closely related to several other approaches .,our models are similar to several other approaches . +"for this induction , bilingual word embeddings of 512 dimensions were obtained using word embeddings trained with fasttext 5 and aligned in the same space using unsupervised vecmap .",bwe of 512 dimensions were obtained using word embeddings trained with fasttext 9 and aligned in the same space using unsupervised vecmap 10 for this induction . 
+semi-supervised learning ( ssl ) is the use of small amounts of labeled data with relatively large amounts of unlabeled data to train predictors .,"semi-supervised learning ( ssl ) is a machine learning ( ml ) approach that utilizes large amounts of unlabeled data , combined with a smaller amount of labeled data , to learn a target function ( cite-p-23-1-15 , cite-p-23-1-2 ) ."