sentence1 : stringlengths 16-446
sentence2 : stringlengths 14-436
a pun is a form of wordplay , which is often produced by exploiting the polysemy of a word or by substituting a phonetically similar-sounding word for an intended humorous effect .
a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 ) .
previous work has shown that unlabeled text can be used to induce unsupervised word clusters that can improve performance of many supervised nlp tasks .
previous work has shown that unlabeled text can be used to induce unsupervised word clusters which can improve the performance of many supervised nlp tasks .
this paper presents an unsupervised topic identification method integrating linguistic and visual information based on hidden markov models .
this paper has described an unsupervised topic identification method integrating linguistic and visual information based on hidden markov models .
for the language model , we used srilm with modified kneser-ney smoothing .
we use a four-gram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .
in this paper we describe an architecture that utilizes this mistake-driven algorithm for multi-class prediction .
in this paper we present a learning algorithm and an architecture with properties suitable for this domain .
in this novel corpus , we identify common events across texts and investigate the argument structures that were realized in each context .
in this paper , we presented a novel corpus of comparable texts that provides full discourse contexts for alternative verbalizations .
experimental results demonstrate that our approach outperforms the state-of-the-art methods .
experimental results show that our method achieves the best performance .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
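Many of the pairs in this set describe the same recipe: an n-gram language model with (modified) Kneser-Ney smoothing trained with SRILM. As a minimal sketch of that recipe, assuming SRILM's ngram-count is on the PATH and using placeholder file names:

```python
import subprocess

# Train a 5-gram LM with modified Kneser-Ney smoothing via SRILM's
# ngram-count. -kndiscount selects modified Kneser-Ney, -interpolate
# interpolates higher- and lower-order estimates, and -unk adds an
# <unk> token for an open vocabulary. All paths are placeholders.
subprocess.run([
    "ngram-count",
    "-text", "target_side.txt",    # monolingual training corpus (placeholder)
    "-order", "5",
    "-kndiscount",
    "-interpolate",
    "-unk",
    "-lm", "model.5gram.kn.lm",    # output ARPA model (placeholder)
], check=True)
```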
this paper proposes that the process of language understanding can be modeled as a collective phenomenon that emerges from a myriad of microscopic and diverse activities .
the model simulates language processing as a collective phenomenon that emerges from a myriad of microscopic and diverse activities .
as a baseline we compared our results with the publicly available phrase-based system pharaoh , using the default feature set .
we compared our system to pharaoh , a leading phrasal smt decoder , and our treelet system .
text simplification ( ts ) is generally defined as the conversion of a sentence into one or more simpler sentences .
text simplification ( ts ) is a monolingual text-to-text transformation task where an original ( complex ) text is transformed into a target ( simpler ) text .
analysis of word embeddings demonstrates the effectiveness of our method .
experiments on word similarity and text classification demonstrate the effectiveness of our method .
we have modeled the simple parser on the paninian grammatical model which provides a dependency grammar framework .
we make use of the english dependency treebank , developed on the computational paninian grammar model , for this work .
to get a dictionary of word embeddings , we use the word2vec tool and train it on the chinese gigaword corpus .
we train a word2vec cbow model on 517,400 raw emails from the enron email dataset to obtain the word embeddings .
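The pairs above mention training word2vec (CBOW) embeddings. A minimal sketch using gensim's reimplementation of word2vec; gensim is an assumption here (the original sentences refer to the word2vec tool itself), and the toy corpus stands in for the Enron or Gigaword data:

```python
from gensim.models import Word2Vec

# Each training document is a list of tokens; `emails` is a placeholder
# for the tokenized Enron messages (or Gigaword sentences).
emails = [["meeting", "at", "noon"], ["please", "review", "the", "draft"]]

# sg=0 selects the CBOW variant. vector_size and window are typical
# word2vec settings, not values taken from the papers above; min_count=1
# only so the toy corpus keeps all its words.
model = Word2Vec(emails, sg=0, vector_size=100, window=5, min_count=1)

vector = model.wv["meeting"]  # the learned embedding for a word
```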
li et al , 2004 , or a combination of them , or based on phonetic , eg .
li et al , 2004 , hybrid , or based on phonetic , eg .
for our experiments , we use a phrase-based translation system similar to moses .
our baseline is an in-house phrase-based statistical machine translation system very similar to moses .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
we apply srilm to train the 3-gram language model of the target side .
we also use analysis-dependent style markers , that is , measures that represent the way in which the text has been processed .
in particular , we use a set of analysis-level style markers , i.e. , measures that represent the way in which the text has been processed by the tool .
part-of-speech ( pos ) tagging is the task of assigning a proper pos tag to each linguistic unit , such as a word , in a given sentence .
part-of-speech ( pos ) tagging is a well studied problem in these fields .
we also compare our results to those obtained using the system of durrett and denero on the same test data .
we also compare our results to those obtained by running the system of durrett and denero on the same training and test data .
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
vector-based models have been successful in simulating semantic priming ( cite-p-8-1-20 , cite-p-8-1-18 ) and text comprehension .
nlp tasks that could benefit from composition models include paraphrase identification and context-dependent language modeling ( cite-p-8-1-4 ) .
venugopal et al propose a method to watermark the output of machine translation systems to aid this distinction , with a negligible loss of quality .
venugopal et al propose a method to watermark the output of machine translation systems to aid this distinction .
chen et al show that the source subtree features significantly improve performance .
chen et al report a very high performance using subtree features from auto-parsed data .
we implement some of these features using the stanford parser .
we use the stanford parser for syntactic and dependency parsing .
task : given a sentence with an entity mention , the goal is to predict a set of free-form phrases ( e.g. , skyscraper , songwriter , or criminal ) that describe appropriate types for the target entity .
to address this challenge , we present a new task : given a sentence with a target entity mention , predict free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence .
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
relation extraction is the task of finding semantic relations between two entities from text .
we model the acoustic-prosodic stream with two different models , one of which is a maximum entropy model .
we model the acoustic-prosodic stream with two different models , one a maximum entropy model and the other a traditional hmm .
we tested it on english and chinese : for english , it outperforms the best published method we are aware of .
this method outperforms the best published method we are aware of on english and a recently published method on chinese .
semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence .
as discussed in section 3 , this indicates the bias p_arg ( v_con ) in score works better than the bias p_arg ( n , v_con ) in score .
as discussed in section 3 , this indicates the bias p_arg ( v_con ) in score works better than the bias p_arg ( n , v_con ) in score_cooc .
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language .
a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm .
in this paper , we use machine learning in a prediction task .
in this paper , we use machine learning in a prediction task as our approach to this problem .
in our experiments , learning from implicit supervision alone is not a viable strategy for algebra word problems .
we argue that the effective strategy is to learn from both explicit and implicit supervision signals jointly .
prettenhofer and stein proposed a cross-language structural correspondence learning method to induce language-independent features by using word translation oracles .
prettenhofer and stein use the structural correspondence learning algorithm to learn a map between the source language and the target language .
training and testing were done with a log-linear model via liblinear .
models were trained using the liblinear classification library .
further uses of the attention mechanism include parsing , natural language question answering , and image question answering .
the attention strategies have been widely used in machine translation and question answering .
li et al suggested a grapheme-based joint source-channel model within the direct orthographic mapping framework .
li et al , 2004 ) proposed a joint source-channel model to allow direct orthographical mapping between two different languages .
in real settings , this can be useful when receiving a text message or when looking at anonymous posts .
in real settings , this can be useful when receiving a text message or when looking at anonymous posts in forums .
the language models were built using the srilm toolkit .
a 4-gram language model is trained with the srilm toolkit .
we use adagrad to maximize this objective function .
we use adagrad to learn the parameters of our models .
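AdaGrad, referenced in the pair above, keeps a per-parameter history of squared gradients so that frequently updated parameters take smaller steps. A minimal numpy sketch of the update rule, not tied to any specific model from these pairs; it is written for minimizing a loss, so to maximize an objective (as in the first sentence) the gradient would be negated:

```python
import numpy as np

def adagrad_update(theta, grad, hist, lr=0.1, eps=1e-8):
    """One AdaGrad step: per-parameter learning rates shrink with the
    accumulated squared gradient."""
    hist += grad ** 2
    theta -= lr * grad / (np.sqrt(hist) + eps)
    return theta, hist

theta = np.zeros(3)                 # parameters (toy example)
hist = np.zeros(3)                  # accumulated squared gradients
grad = np.array([0.5, -1.0, 0.0])   # gradient of the loss (placeholder)
theta, hist = adagrad_update(theta, grad, hist)
```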
for the language model , we built a trigram model with kneser-ney smoothing using srilm from the same automatically segmented corpus .
we applied a 5-gram mixture language model with each sub-model trained on one fifth of the monolingual corpus with kneser-ney smoothing using srilm toolkit .
vis-a-vis a specific domain with a restricted register , it is expected that the quality rather than the quantity of the corpus matters more in terminology mining .
more importantly , as terms are defined vis-à-vis a specific domain with a restricted register , it is expected that the quality rather than the quantity of the corpus matters more in terminology mining .
in addition , we can use pre-trained neural word embeddings on large scale corpus for neural network initialization .
in particular , we use the neural-network based models from , also referred to as word embeddings .
classifiers are then combined in a weighted ensemble to further enhance the cross-domain classification performance .
furthermore , we show that a weighted ensemble of the classifiers enhances the cross-domain classification performance .
mturk has been adopted for a variety of uses both in industry and academia from user studies to image labeling .
mturk has been adopted for a variety of uses both in industry and academia , ranging from user studies to image labeling .
variations of the disorder mentions were generated based on the commonly used variations of disorder mentions learned from the training data as well as from the umls .
for the disorder mention normalization ( task b ) , variations of disorder mentions were considered whenever exact matches were not found in the training data or in the umls .
in this paper , we present a system that uses word embeddings and recurrent convolutional networks .
this paper presents a system that uses word embeddings ( cite-p-9-1-11 ) and recurrent convolutional networks to this end .
in this paper , we investigate the problem of word fragment identification using acoustic and prosodic features .
in section 2 we introduce the acoustic and prosodic features that we investigate for word fragment detection .
cross-lingual textual entailment is an extension of textual entailment .
cross-lingual textual entailment has been proposed as an extension of textual entailment .
this dataset was originally presented by silfverberg and hulden .
our second dataset was presented by silfverberg and hulden .
analysis of the parser output indicates that it is robust enough in the face of noisy non-native writing .
nevertheless , examination of parser output shows the parse features can be extracted reliably from esl data .
the models h_m are weighted by the weights λ_m , which are tuned using minimum error rate training .
the feature weights λ_i are trained in concert with the lm weight via minimum error rate training .
inspired by hmm word alignment , our second distance measure is based on jump width .
the alignment aspect of our model is similar to the hmm model for word alignment .
the output files generated by the system for the dataset are classified using the weka tool .
these features are computed and presented for each sentence in a data file format used by the weka suite .
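Several pairs mention exporting per-sentence features for Weka. A minimal sketch of writing such features in Weka's ARFF format; the attribute names and values are hypothetical:

```python
# Write per-sentence features in the ARFF format that Weka reads.
rows = [(12, 0.42, "yes"), (7, 0.10, "no")]  # (length, score, label) - toy data

with open("sentences.arff", "w") as f:
    f.write("@relation sentence_features\n\n")
    f.write("@attribute length numeric\n")
    f.write("@attribute score numeric\n")
    f.write("@attribute label {yes,no}\n\n")
    f.write("@data\n")
    for length, score, label in rows:
        f.write(f"{length},{score},{label}\n")
```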
the evaluation metric for the overall translation quality was case-insensitive bleu4 .
the translation quality is evaluated by the case-insensitive bleu and ter metrics .
the tempeval shared tasks have been one of the key venues for researchers to compare methods for temporal information extraction .
the tempeval shared tasks have , since 2007 , provided a focus for research on temporal information extraction .
similar to our proposed approach , this model can also be applied to all languages in wikipedia .
therefore , we can learn embeddings for all languages in wikipedia without any additional annotation or supervision .
in this paper , we address the problem of divergence in tag distribution between primary and assisting languages .
in this work , we address this problem of drift in tag distribution owing to adding training data from a supporting language .
while defining generic data generators is difficult , we propose to allow generators to be “ weakly ” specified .
we proposed to allow data generators to be “ weakly ” specified , leaving the undetermined coefficients to be learned from data .
we used a standard pbmt system built using the moses toolkit .
we used the moses toolkit to build mt systems using various alignments .
amr parsing is a much harder task in that the target vocabulary size is much larger , while the size of the dataset is much smaller .
the process of creating amr ’ s for sentences is called amr parsing .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
feature weights were set with minimum error rate training on a tuning set using bleu as the objective function .
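The MERT pairs above refer to Moses' tuning wrapper script. A minimal sketch of the usual invocation; the paths are placeholders, and the positional argument order is assumed from the standard Moses setup (source input, references, decoder binary, decoder config):

```python
import subprocess

# Tune feature weights with Moses' MERT wrapper to maximize BLEU on a
# held-out tuning set. All paths are placeholders.
subprocess.run([
    "mert-moses.pl",
    "tune.src",          # tuning-set source sentences
    "tune.ref",          # reference translations
    "bin/moses",         # decoder binary
    "model/moses.ini",   # decoder configuration with initial weights
], check=True)
```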
in this paper we present and evaluate a new model for nlg in spoken dialogue systems .
in this paper we present and evaluate a new model for nlg in spoken dialogue systems as planning under uncertainty .
for this task , we used the svm implementation provided with the python scikit-learn module .
in all cases , we used the implementations from the scikit-learn machine learning library .
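For the scikit-learn and liblinear setups mentioned in these pairs, a minimal sketch of a liblinear-backed linear SVM text classifier; the toy data is purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# LinearSVC is scikit-learn's liblinear-backed linear SVM.
texts = ["great translation", "poor output", "fluent and accurate", "garbled text"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["accurate translation"]))
```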
sentiment analysis is a recent attempt to deal with evaluative aspects of text .
sentiment analysis is the computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-11-3-3 ) .
if the anaphor is a pronoun but no referent is found in the cache , it is then necessary to search operating memory .
if the anaphor is a pronoun , the cache is searched for a plausible referent .
their method uses word embeddings from turian et al as input , along with a binarized phrase-structure parse from the stanford parser .
turian et al learned a crf model using word embeddings as input features for ner and chunking tasks .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
for the tree-based system , we applied a 4-gram language model with kneser-ney smoothing using the srilm toolkit , trained on the whole monolingual corpus .
more recent efforts introduced the thrax module , an extensible hadoop-based extraction toolkit for synchronous context free grammars .
subsequent efforts produced thrax , the extensible hadoop-based extraction tool for synchronous context-free grammars , later extended to support pivoting-based paraphrase extraction .
sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of a review .
chen et al , 2012 ) used lexical and parser features for detecting comments from youtube that are offensive .
chen et al , 2012 ) proposed the lexical syntactic feature architecture to detect offensive content and identify the potential offensive users in social media .
in addition , andreevskaia and bergler show that the performance of automatic annotation of subjectivity at the word level can be hurt by the presence of subjectivity-ambiguous words in the training sets they use .
however , kim and hovy and andreevskaia and bergler show that subjectivity recognition might be the harder problem with lower human agreement and automatic performance .
li and li have shown that word translation and bilingual bootstrapping is a good combination for disambiguation .
li and li proposed a bilingual bootstrapping approach for the more specific task of word translation disambiguation as opposed to the more general task of wsd .
we evaluate the performance of different translation models using both bleu and ter metrics .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
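Case-insensitive BLEU-4, as used in these evaluation pairs, can be computed by lowercasing both hypotheses and references before scoring. A minimal sketch with NLTK's corpus_bleu; NLTK is an assumption, since the original sentences do not name a scorer, and the one-sentence corpus is a toy:

```python
from nltk.translate.bleu_score import corpus_bleu

hyps_raw = ["The quick brown fox jumps over the lazy dog"]
refs_raw = [["the quick brown fox jumps over the lazy dog"]]

# Lowercase both sides for a case-insensitive score.
hyps = [h.lower().split() for h in hyps_raw]
refs = [[r.lower().split() for r in rs] for rs in refs_raw]

# Default weights give uniform 1/4 weight to 1..4-gram precision (BLEU-4).
score = corpus_bleu(refs, hyps)
print(f"BLEU-4: {score:.3f}")
```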
barzilay and lee proposed a domain-dependent hmm model to capture topic shift in a text , where topics are represented by hidden states and sentences are observations .
the summarization technique of barzilay and lee captures topic transitions in the text span by a hidden markov model , referred to as a content model .
the rules were extracted using the pos tags generated by the treetagger .
the stts tags are automatically added using treetagger .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .
such that , if a label math-w-3-3-0-39 is inferred as relevant to a document , then all the labels from math-w-3-3-0-54 to the root of the tree are also inferred as relevant to the document .
furthermore , if a document is attached to a topic math-w-5-1-0-138 , we assume that all the ancestor topics of math-w-5-1-0-149 are also relevant for that document .
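The hierarchical constraint in this pair (a relevant topic implies all its ancestor topics are relevant) amounts to walking a parent map up to the root. A minimal sketch with a hypothetical topic hierarchy:

```python
# If a topic is relevant to a document, all its ancestors up to the
# root are too. `parent` is a hypothetical hierarchy; None marks the root.
parent = {"ml": "cs", "nlp": "cs", "cs": None}

def propagate_to_root(labels, parent):
    closed = set()
    for label in labels:
        while label is not None:
            closed.add(label)
            label = parent[label]
    return closed

print(propagate_to_root({"nlp"}, parent))  # {'nlp', 'cs'}
```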
neelakantan et al proposed an extension of the skip-gram model combined with context clustering to estimate the number of senses for each word as well as learn sense embedding vectors .
neelakantan et al proposed the mssg model which extends the skip-gram model to learn multi-prototype word embeddings by clustering the word embeddings of context words around each word .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
sentence compression is the task of producing a summary at the sentence level .
sentence compression is the task of compressing long sentences into short and concise ones by deleting words .
for the sake of comparison , we also built several other classifiers , including multinomial naïve bayes , svms , knn , and decision trees using the weka toolkit .
then , for the sake of comparison , we also built several other classifiers including multinomial naïve bayes , svms , knn , and decision trees using the weka toolkit .
like recent work , we use the lstm variant of recurrent neural networks as the language modeling architecture .
finally , based on recent results in text classification , we also experiment with a neural network approach which uses a long short-term memory network .
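For the LSTM-based classification setups in the pair above, a minimal Keras sketch of an LSTM text classifier; the vocabulary size, dimensions, and random toy data are placeholders, not values from the papers:

```python
import numpy as np
from tensorflow import keras

vocab_size, seq_len = 5000, 40
x = np.random.randint(1, vocab_size, size=(32, seq_len))  # toy token ids
y = np.random.randint(0, 2, size=(32,))                   # toy binary labels

# Embedding -> LSTM -> sigmoid output for binary classification.
model = keras.Sequential([
    keras.layers.Embedding(vocab_size, 64),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, verbose=0)
```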
our model is inspired by recent work in learning distributed representations of words .
our research is inspired by the recent work in learning vector representations of words using deep learning strategy .
waseem et al , 2017 ) proposed a typology of abusive language sub-tasks .
waseem et al proposed a typology for various sub-types of abusive language .
we present a method for unsupervised semantic role induction which we formalize as a graph partitioning problem .
contrary to these previous approaches , we conceptualize role induction in a novel way , as a graph partitioning problem .
lexical chains provide a representation of the lexical cohesion structure of the target document that is to be generated .
lexical chains provide a representation of the lexical cohesion structure of a text .
for data preparation and processing we use scikit-learn .
we use scikit-learn as our machine learning library .
we have shown that interactive methods help to create user-desired personalized summaries .
therefore , we research interactive computer-assisted approaches in order to produce personalized summaries .
to this end , we present a novel sentence fusion method based on dependency structure alignment .
we presented a novel sentence fusion method which formulates the fusion task as an optimization problem .
wmfvec is the first sense similarity measure based on latent semantics of sense definitions .
we construct a sense similarity wmfvec from the latent semantics of sense definitions .
we tokenize and frequent-case the data with the standard scripts from the moses toolkit .
we use the moses statistical mt toolkit to perform the translation .
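The Moses preprocessing mentioned in these pairs uses the toolkit's standard scripts. A minimal sketch of tokenizing and lowercasing; the script paths and file names are placeholders, and lowercasing stands in for whatever casing scheme (e.g. frequent-casing) a given system uses:

```python
import subprocess

# Tokenize with Moses' tokenizer.perl (-l selects the language), then
# lowercase with lowercase.perl. Paths are placeholders.
subprocess.run(
    "perl scripts/tokenizer/tokenizer.perl -l en "
    "< corpus.raw.en > corpus.tok.en",
    shell=True, check=True)
subprocess.run(
    "perl scripts/tokenizer/lowercase.perl "
    "< corpus.tok.en > corpus.lc.en",
    shell=True, check=True)
```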
for english , we used the dan bikel implementation of the collins parser .
we used the well-known bikel parser in its original version and the one used by collins .
semeval is the international workshop on semantic evaluation , formerly senseval .
semeval is a yearly event in which teams compete in natural language processing tasks .
however , tsunakawa and tsujii studied the issue of identifying bilingual synonymous technical terms only within a manually compiled bilingual technical term lexicon , and their approach is thus quite limited in its applicability .
however , tsunakawa and tsujii studied the issue of identifying synonymous bilingual technical terms only within a manually compiled bilingual technical term lexicon , and their approach is thus quite limited in its applicability .
it was trained on the webnlg dataset using the moses toolkit .
the statistical phrase-based systems were trained using the moses toolkit with mert tuning .
with nnlm , however , the increase in context length at the input layer results in only a linear growth in complexity in the worst case .
increasing the context length at the input layer thus only causes a linear growth in complexity in the worst case .
in addition , we obtained the original template similarity lists learned by lin and pantel , and had available three distributional similarity measures learned by szpektor and dagan , over the rcv1 corpus , as detailed in table 2 .
in addition , we obtained similarity lists learned by lin and pantel , and replicated 3 similarity measures learned by szpektor and dagan , over the rcv1 corpus .
hu et al presented a dialog act tagger and link predictor which could be used to extract dap and dlc .
hu et al also annotated emails with speech acts and trained a structured prediction classifier .