id
stringlengths
8
19
document
stringlengths
2.18k
16.2k
challenge
stringlengths
76
208
approach
stringlengths
79
223
outcome
stringlengths
84
209
P06-1112
In this paper , we explore correlation of dependency relation paths to rank candidate answers in answer extraction . Using the correlation measure , we compare dependency relations of a candidate answer and mapped question phrases in sentence with the corresponding relations in question . Different from previous studies , we propose an approximate phrase mapping algorithm and incorporate the mapping score into the correlation measure . The correlations are further incorporated into a Maximum Entropy-based ranking model which estimates path weights from training . Experimental results show that our method significantly outperforms state-ofthe-art syntactic relation-based methods by up to 20 % in MRR . Answer Extraction is one of basic modules in open domain Question Answering ( QA ) . It is to further process relevant sentences extracted with Passage / Sentence Retrieval and pinpoint exact answers using more linguistic-motivated analysis . Since QA turns to find exact answers rather than text snippets in recent years , answer extraction becomes more and more crucial . Typically , answer extraction works in the following steps : • Recognize expected answer type of a question . • Annotate relevant sentences with various types of named entities . • Regard the phrases annotated with the expected answer type as candidate answers . • Rank candidate answers . In the above work flow , answer extraction heavily relies on named entity recognition ( NER ) . On one hand , NER reduces the number of candidate answers and eases answer ranking . On the other hand , the errors from NER directly degrade answer extraction performance . To our knowledge , most top ranked QA systems in TREC are supported by effective NER modules which may identify and classify more than 20 types of named entities ( NE ) , such as abbreviation , music , movie , etc . However , developing such named entity recognizer is not trivial . Up to now , we have n't found any paper relevant to QA-specific NER development . So , it is hard to follow their work . In this paper , we just use a general MUC-based NER , which makes our results reproducible . A general MUC-based NER ca n't annotate a large number of NE classes . In this case , all noun phrases in sentences are regarded as candidate answers , which makes candidate answer sets much larger than those filtered by a well developed NER . The larger candidate answer sets result in the more difficult answer extraction . Previous methods working on surface word level , such as density-based ranking and pattern matching , may not perform well . Deeper linguistic analysis has to be conducted . This paper proposes a statistical method which exploring correlation of dependency relation paths to rank candidate answers . It is motivated by the observation that relations between proper answers and question phrases in candidate sentences are always similar to the corresponding relations in question . For example , the question " What did Alfred Nobel invent ? " and the candidate sentence " ... in the will of Swedish industrialist Alfred Nobel , who invented dynamite . " For each question , firstly , dependency relation paths are defined and extracted from the question and each of its candidate sentences . Secondly , the paths from the question and the candidate sentence are paired according to question phrase mapping score . Thirdly , correlation between two paths of each pair is calculated by employing Dynamic Time Warping algorithm . The input of the calculation is correlations between dependency relations , which are estimated from a set of training path pairs . Lastly , a Maximum Entropy-based ranking model is proposed to incorporate the path correlations and rank candidate answers . Furthermore , sentence supportive measure are presented according to correlations of relation paths among question phrases . It is applied to re-rank the candidate answers extracted from the different candidate sentences . Considering phrases may provide more accurate information than individual words , we extract dependency relations on phrase level instead of word level . The experiment on TREC questions shows that our method significantly outperforms a densitybased method by 50 % in MRR and three stateof-the-art syntactic-based methods by up to 20 % in MRR . Furthermore , we classify questions by judging whether NER is used . We investigate how these methods perform on the two question sets . The results indicate that our method achieves better performance than the other syntactic-based methods on both question sets . Especially for more difficult questions , for which NER may not help , our method improves MRR by up to 31 % . The paper is organized as follows . Section 2 discusses related work and clarifies what is new in this paper . Section 3 presents relation path correlation in detail . Section 4 and 5 discuss how to incorporate the correlations for answer ranking and re-ranking . Section 6 reports experiment and results . In this paper , we propose a relation path correlation-based method to rank candidate answers in answer extraction . We extract and pair relation paths from questions and candidate sentences . Next , we measure the relation path correlation in each pair based on approximate phrase mapping score and relation sequence alignment , which is calculated by DTW algorithm . Lastly , a ME-based ranking model is proposed to incorporate the path correlations and rank candidate answers . The experiment on TREC questions shows that our method significantly outperforms a density-based method by 50 % in MRR and three state-of-the-art syntactic-based methods by up to 20 % in MRR . Furthermore , the method is especially effective for difficult questions , for which NER may not help . Therefore , it may be used to further enhance state-of-the-art QA systems even if they have a good NER . In the future , we are to further evaluate the method based on the overall performance of a QA system and adapt it to sentence retrieval task .
A generally accessible NER system for QA systems produces a larger answer candidate set which would be hard for current surface word-level ranking methods.
They propose a statistical method which takes correlations of dependency relation paths computed by the Dynamic Time Wrapping algorithm into account for ranking candidate answers.
The proposed method outperforms state-of-the-art syntactic relation-based methods by up to 20% and shows it works even better on harder questions where NER performs poorly.
2020.acl-main.528
Recently , many works have tried to augment the performance of Chinese named entity recognition ( NER ) using word lexicons . As a representative , Lattice-LSTM ( Zhang and Yang , 2018 ) has achieved new benchmark results on several public Chinese NER datasets . However , Lattice-LSTM has a complex model architecture . This limits its application in many industrial areas where real-time NER responses are needed . In this work , we propose a simple but effective method for incorporating the word lexicon into the character representations . This method avoids designing a complicated sequence modeling architecture , and for any neural NER model , it requires only subtle adjustment of the character representation layer to introduce the lexicon information . Experimental studies on four benchmark Chinese NER datasets show that our method achieves an inference speed up to 6.15 times faster than those of state-ofthe-art methods , along with a better performance . The experimental results also show that the proposed method can be easily incorporated with pre-trained models like BERT . 1 Named Entity Recognition ( NER ) is concerned with the identification of named entities , such as persons , locations , and organizations , in unstructured text . NER plays an important role in many downstream tasks , including knowledge base construction ( Riedel et al . , 2013 ) , information retrieval ( Chen et al . , 2015 ) , and question answering ( Diefenbach et al . , 2018 ) . In languages where words are naturally separated ( e.g. , English ) , NER has been conventionally formulated as a sequence labeling problem , and the state-of-the-art results have been achieved using neural-network-based models ( Huang et al . , 2015 ; Chiu and Nichols , 2016 ; Liu et al . , 2018 ) . Compared with NER in English , Chinese NER is more difficult since sentences in Chinese are not naturally segmented . Thus , a common practice for Chinese NER is to first perform word segmentation using an existing CWS system and then apply a word-level sequence labeling model to the segmented sentence ( Yang et al . , 2016 ; He and Sun , 2017b ) . However , it is inevitable that the CWS system will incorrectly segment query sentences . This will result in errors in the detection of entity boundary and the prediction of entity category in NER . Therefore , some approaches resort to performing Chinese NER directly at the character level , which has been empirically proven to be effective ( He and Wang , 2008 ; Liu et al . , 2010 ; Li et al . , 2014 ; Liu et al . , 2019 ; Sui et al . , 2019 ; Gui et al . , 2019b ; Ding et al . , 2019 ) . A drawback of the purely character-based NER method is that the word information is not fully exploited . With this consideration , Zhang and Yang , ( 2018 ) proposed Lattice-LSTM for incorporating word lexicons into the character-based NER model . Moreover , rather than heuristically choosing a word for the character when it matches multiple words in the lexicon , the authors proposed to preserve all words that match the character , leaving the subsequent NER model to determine which word to apply . To realize this idea , they introduced an elaborate modification to the sequence modeling layer of the LSTM-CRF model ( Huang et al . , 2015 ) . Experimental studies on four Chinese NER datasets have verified the effectiveness of Lattice-LSTM . However , the model architecture of Lattice-LSTM is quite complicated . In order to introduce lexicon information , Lattice-LSTM adds several additional edges between nonadjacent characters in the input sequence , which significantly slows its training and inference speeds . In addition , it is difficult to transfer the structure of Lattice-LSTM to other neural-network architectures ( e.g. , convolutional neural networks and transformers ) that may be more suitable for some specific tasks . In this work , we propose a simpler method to realize the idea of Lattice-LSTM , i.e. , incorporating all the matched words for each character to a character-based NER model . The first principle of our model design is to achieve a fast inference speed . To this end , we propose to encode lexicon information in the character representations , and we design the encoding scheme to preserve as much of the lexicon matching results as possible . Compared with Lattice-LSTM , our method avoids the need for a complicated model architecture , is easier to implement , and can be quickly adapted to any appropriate neural NER model by adjusting the character representation layer . In addition , ablation studies show the superiority of our method in incorporating more complete and distinct lexicon information , as well as introducing a more effective word-weighting strategy . The contributions of this work can be summarized as follows : • We propose a simple but effective method for incorporating word lexicons into the character representations for Chinese NER . • The proposed method is transferable to different sequence-labeling architectures and can be easily incorporated with pre-trained models like BERT ( Devlin et al . , 2018 ) . We performed experiments on four public Chinese NER datasets . The experimental results show that when implementing the sequence modeling layer with a single-layer Bi-LSTM , our method achieves considerable improvements over the state-of-theart methods in both inference speed and sequence labeling performance . In this work , we addressed the computational efficiency of utilizing word lexicons in Chinese NER . To obtain a high-performing Chinese NER system with a fast inference speed , we proposed a novel method to incorporate the lexicon information into the character representations . Experimental studies on four benchmark Chinese NER datasets reveal that our method can achieve a much faster inference speed and better performance than the compared state-of-the-art methods .
Named entity recognition in Chinese requires word segmentation causes errors or character-level model with lexical features that is complex and expensive.
They propose to encode lexicon features into character representations so it can keep the system simpler and achieve faster inference than previous models.
The proposed efficient character-based LSTM method with lexical features achieves 6.15 times faster inference speed and better performance than previous models.
P19-1352
Word embedding is central to neural machine translation ( NMT ) , which has attracted intensive research interest in recent years . In NMT , the source embedding plays the role of the entrance while the target embedding acts as the terminal . These layers occupy most of the model parameters for representation learning . Furthermore , they indirectly interface via a soft-attention mechanism , which makes them comparatively isolated . In this paper , we propose shared-private bilingual word embeddings , which give a closer relationship between the source and target embeddings , and which also reduce the number of model parameters . For similar source and target words , their embeddings tend to share a part of the features and they cooperatively learn these common representation units . Experiments on 5 language pairs belonging to 6 different language families and written in 5 different alphabets demonstrate that the proposed model provides a significant performance boost over the strong baselines with dramatically fewer model parameters . With the introduction of ever more powerful architectures , neural machine translation ( NMT ) has become the most promising machine translation method ( Kalchbrenner and Blunsom , 2013 ; Sutskever et al . , 2014 ; Bahdanau et al . , 2015 ) . For word representation , different architecturesincluding , but not limited to , recurrence-based ( Chen et al . , 2018 ) , convolution-based ( Gehring et al . , 2017 ) and transformation-based ( Vaswani et al . , 2017 ) NMT models-have been taking advantage of the distributed word embeddings to capture the syntactic and semantic properties of words ( Turian et al . , 2010 ) . Figure 1 : Comparison between ( a ) standard word embeddings and ( b ) shared-private word embeddings . In ( a ) , the English word " Long " and the German word " Lange " , which have similar lexical meanings , are represented by two private d-dimension vectors . While in ( b ) , the two word embeddings are made up of two parts , indicating the shared ( lined nodes ) and the private ( unlined nodes ) features . This enables the two words to make use of common representation units , leading to a closer relationship between them . NMT usually utilizes three matrices to represent source embeddings , target input embeddings , and target output embeddings ( also known as pre-softmax weight ) , respectively . These embeddings occupy most of the model parameters , which constrains the improvements of NMT because the recent methods become increasingly memory-hungry ( Vaswani et al . , 2017 ; Chen et al . , 2018 ) . 1 Even though converting words into subword units ( Sennrich et al . , 2016b ) , nearly 55 % of model parameters are used for word representation in the Transformer model ( Vaswani et al . , 2017 ) . To overcome this difficulty , several methods are proposed to reduce the parameters used for word representation of NMT . Press and Wolf ( 2017 ) propose two weight tying ( WT ) methods , called decoder WT and three-way WT , to substantially reduce the parameters of the word embeddings . Decoder WT ties the target input embedding and target output embedding , which has become the new de facto standard of practical NMT ( Sen- Figure 2 : Shared-private bilingual word embeddings perform between the source and target words or sub-words ( a ) with similar lexical meaning , ( b ) with same word form , and ( c ) without any relationship . Different sharing mechanisms are adapted into different relationship categories . This strikes the right balance between capturing monolingual and bilingual characteristics . The closeness of relationship decides the portion of features to be used for sharing . Words with similar lexical meaning tend to share more features , followed by the words with the same word form , and then the unrelated words , as illustrated by the lined nodes . nrich et al . , 2017 ) . Three-way WT uses only one matrix to represent the three word embeddings , where the source and target words that have the same word form tend to share a word vector . This method can also be adapted to sub-word NMT with a shared source-target sub-word vocabulary and it performs well in language pairs with many of the same characters , such as English-German and English-French ( Vaswani et al . , 2017 ) . Unfortunately , this method is not applicable to languages that are written in different alphabets , such as Chinese-English ( Hassan et al . , 2018 ) . Another challenge facing the source and target word embeddings of NMT is the lack of interactions . This degrades the attention performance , leading to some unaligned translations that hurt the translation quality . Hence , Kuang et al . ( 2018 ) propose to bridge the source and target embeddings , which brings better attention to the related source and target words . Their method is applicable to any language pairs , providing a tight interaction between the source and target word pairs . However , their method requires additional components and model parameters . In this work , we aim to enhance the word representations and the interactions between the source and target words , while using even fewer parameters . To this end , we present a languageindependent method , which is called sharedprivate bilingual word embeddings , to share a part of the embeddings of a pair of source and target words that have some common characteristics ( i.e. similar words should have similar vectors ) . Figure 1 illustrates the difference between the standard word embeddings and shared-private word embeddings of NMT . In the proposed method , each source ( or target ) word is represented by a word embedding that consists of the shared features and the private features . The shared features can also be regarded as the prior alignments connecting the source and target words . The private features allow the words to better learn the monolingual characteristics . Meanwhile , the features shared by the source and target embeddings result in a significant reduction of the number of parameters used for word representations . The experimental results on 6 translation datasets of different scales show that our model with fewer parameters yields consistent improvements over the strong Transformer baselines . In this work , we propose a novel sharing technique to improve the learning of word embeddings for NMT . Each word embedding is composed of shared and private features . The shared features act as a prior alignment guidance for the attention model to improve the quality of attention . Meanwhile , the private features enable the words to better capture the monolingual characteristics , result in an improvement of the overall translation quality . According to the degree of relevance between a parallel word pair , the word pairs are categorized into three different groups and the number of shared features is different . Our experimental results show that the proposed method outperforms the strong Transformer baselines while using fewer model parameters .
Word embeddings occupy a large amount of memory, and weight tying does not mitigate this issue for distant language pairs on translation tasks.
They propose a language independet method where a model shares embeddings between source and target only when words have some common characteristics.
Experiments on machine translation datasets involving multiple language families and scripts show that the proposed model outperforms baseline models while using fewer parameters.
D12-1061
This paper explores log-based query expansion ( QE ) models for Web search . Three lexicon models are proposed to bridge the lexical gap between Web documents and user queries . These models are trained on pairs of user queries and titles of clicked documents . Evaluations on a real world data set show that the lexicon models , integrated into a ranker-based QE system , not only significantly improve the document retrieval performance but also outperform two state-of-the-art log-based QE methods . Term mismatch is a fundamental problem in Web search , where queries and documents are composed using different vocabularies and language styles . Query expansion ( QE ) is an effective strategy to address the problem . It expands a query issued by a user with additional related terms , called expansion terms , so that more relevant documents can be retrieved . In this paper we explore the use of clickthrough data and translation models for QE . We select expansion terms for a query according to how likely it is that the expansion terms occur in the title of a document that is relevant to the query . Assuming that a query is parallel to the titles of documents clicked for that query ( Gao et al . 2010a ) , three lexicon models are trained on query-title pairs extracted from clickthrough data . The first is a word model that learns the translation probability between single words . The second model uses lexi-calized triplets to incorporate word dependencies for translation . The third is a bilingual topic model , which represents a query as a distribution of hidden topics and learns the translation between a query and a title term at the semantic level . We will show that the word model provides a rich set of expansion candidates while the triplet and topic models can effectively select good expansion terms , and that a ranker-based QE system which incorporates all three of these models not only significantly improves Web search result but outperforms other log-based QE methods that are stateof-the-art . There is growing interest in applying user logs to improve QE . A recent survey is due to Baeze-Yates and Ribeiro-Neto ( 2011 ) . Below , we briefly discuss two log-based QE methods that are closest to ours and are re-implemented in this study for comparison . Both systems use the same type of log data that we used to train the lexicon models . The term correlation model of Cui et al . ( 2002 ; 2003 ) is to our knowledge the first to explore querydocument relations for direct extraction of expansion terms for Web search . The method outperforms traditional QE methods that do not use log data e.g. the local analysis model of Xu and Croft ( 1996 ) . In addition , as pointed out by Cui et al . ( 2003 ) there are three important advantages that make log-based QE a promising technology to improve the performance of commercial search engines . First , unlike traditional QE methods that are based on relevance feedback , log-based QE derives expansion terms from search logs , allowing term correlations to be pre-computed offline . Compared to methods that are based on thesauri either compiled manually ( Prager et al . 2001 ) or derived au-tomatically from document collections ( Jing and Croft 1994 ) , the log-based method is superior in that it explicitly captures the correlation between query terms and document terms , and thus can bridge the lexical gap between them more effectively . Second , since search logs retrain querydocument pairs clicked by millions of users , the term correlations reflect the preference of the majority of users . Third , the term correlations evolve along with the accumulation of user logs , thus can reflect updated user interests at a specific time . However , as pointed out by Riezler et al . ( 2008 ) , Cui et al . 's correlation-based method suffers low precision of QE partly because the correlation model does not explicitly capture context information and is susceptible to noise . Riezler et al . developed a QE system by retraining a standard phrase-based statistical machine translation ( SMT ) system using query-snippet pairs extracted from clickthrough data ( Riezler et al . 2008 ; Riezler and Liu 2010 ) . The SMT-based system can produce cleaner , more relevant expansion terms because rich context information useful for filtering noisy expansions is captured by combining language model and phrase translation model in its decoder . Furthermore , in the SMT system all component models are properly smoothed using sophisticated techniques to avoid sparse data problems while the correlation model relies on pure counts of term frequencies . However , the SMT system is used as a black box in their experiments . So the relative contribution of different SMT components is not verified empirically . In this study we break this black box in order to build a better , simpler QE system . We will show that the proposed lexicon models outperform significantly the term correlation model , and that a simpler QE system that incorporates the lexicon models can beat the sophisticated , black-box SMT system . In this paper we extend the previous log-based QE methods in two directions . First , we formulate QE as the problem of translating a source language of queries into a target language of documents , represented as titles . This allows us to adapt the established techniques developed for SMT to QE . Specially , we propose three lexicon models based on terms , lexicalized triplets , and topics , respectively . These models are trained on pairs of user queries and the titles of clicked documents using EM . Second , we present a ranker-based QE system , the heart of which is a MRF-based ranker in which the lexicon models are incorporated as features . We perform experiments on the Web search task using a real world data set . Results show that the proposed system outperforms significantly other stateof-the-art QE systems . This study is part of a bigger , ongoing project , aiming to develop a real-time QE system for Web search , where simplicity is the key to the success . Thus , what we learned from this study is particularly encouraging . We demonstrate that with large amounts of clickthrough data for model training , simple lexicon models can achieve state-of-the-art QE performance , and that the MRF-based ranker provides a simple and flexible framework to incorporate a variety of features capturing different types of term dependencies in such an effective way that the Web search performance can be directly optimized .
Term mismatches between a query and documents hinder retrievals of relevant documents and black box statistical machine translation models are used to expand queries.
They propose to train lexicon query expansion models by using transaction logs that contain pairs of queries and titles of clicked documents.
The proposed query expansion model enables retrieval systems to significantly outperform models with previous expansion models while being more transparent.
N07-1011
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases . In this paper , we propose a machine learning method that enables features over sets of noun phrases , resulting in a first-order probabilistic model for coreference . We outline a set of approximations that make this approach practical , and apply our method to the ACE coreference dataset , achieving a 45 % error reduction over a comparable method that only considers features of pairs of noun phrases . This result demonstrates an example of how a firstorder logic representation can be incorporated into a probabilistic model and scaled efficiently . Noun phrase coreference resolution is the problem of clustering noun phrases into anaphoric sets . A standard machine learning approach is to perform a set of independent binary classifications of the form " Is mention a coreferent with mention b ? " This approach of decomposing the problem into pairwise decisions presents at least two related difficulties . First , it is not clear how best to convert the set of pairwise classifications into a disjoint clustering of noun phrases . The problem stems from the transitivity constraints of coreference : If a and b are coreferent , and b and c are coreferent , then a and c must be coreferent . This problem has recently been addressed by a number of researchers . A simple approach is to perform the transitive closure of the pairwise decisions . However , as shown in recent work ( McCallum and Wellner , 2003 ; Singla and Domingos , 2005 ) , better performance can be obtained by performing relational inference to directly consider the dependence among a set of predictions . For example , McCallum and Wellner ( 2005 ) apply a graph partitioning algorithm on a weighted , undirected graph in which vertices are noun phrases and edges are weighted by the pairwise score between noun phrases . A second and less studied difficulty is that the pairwise decomposition restricts the feature set to evidence about pairs of noun phrases only . This restriction can be detrimental if there exist features of sets of noun phrases that can not be captured by a combination of pairwise features . As a simple example , consider prohibiting coreferent sets that consist only of pronouns . That is , we would like to require that there be at least one antecedent for a set of pronouns . The pairwise decomposition does not make it possible to capture this constraint . In general , we would like to construct arbitrary features over a cluster of noun phrases using the full expressivity of first-order logic . Enabling this sort of flexible representation within a statistical model has been the subject of a long line of research on first-order probabilistic models ( Gaifman , 1964 ; Halpern , 1990 ; Paskin , 2002 ; Poole , 2003 ; Richardson and Domingos , 2006 ) . Conceptually , a first-order probabilistic model can be described quite compactly . A configuration of the world is represented by a set of predi- cates , each of which has an associated real-valued parameter . The likelihood of each configuration of the world is proportional to a combination of these weighted predicates . In practice , however , enumerating all possible configurations , or even all the predicates of one configuration , can result in intractable combinatorial growth ( de Salvo Braz et al . , 2005 ; Culotta and McCallum , 2006 ) . In this paper , we present a practical method to perform training and inference in first-order models of coreference . We empirically validate our approach on the ACE coreference dataset , showing that the first-order features can lead to an 45 % error reduction . We have presented learning and inference procedures for coreference models using first-order features . By relying on sampling methods at training time and approximate inference methods at testing time , this approach can be made scalable . This results in a coreference model that can capture features over sets of noun phrases , rather than simply pairs of noun phrases . This is an example of a model with extremely flexible representational power , but for which exact inference is intractable . The simple approximations we have described here have enabled this more flexible model to outperform a model that is simplified for tractability . A short-term extension would be to consider features over entire clusterings , such as the number of clusters . This could be incorporated in a ranking scheme , as in Ng ( 2005 ) . Future work will extend our approach to a wider variety of tasks . The model we have described here is specific to clustering tasks ; however a similar formulation could be used to approach a number of language processing tasks , such as parsing and relation extraction . These tasks could benefit from first-order features , and the present work can guide the approximations required in those domains . Additionally , we are investigating more sophisticated inference algorithms that will reduce the greediness of the search procedures described here .
Existing approaches treat noun phrase coreference resolution as a set of independent binary classifications limiting the features to be only pairs of noun phrases.
They propose a machine learning method that uses sets of noun phrases as features that are coupled with a sampling method to enable scalability.
Evaluation on the ACE coreference dataset, the proposed method achieves a 45% error reduction over a previous method.
2021.acl-long.67
Bilingual lexicons map words in one language to their translations in another , and are typically induced by learning linear projections to align monolingual word embedding spaces . In this paper , we show it is possible to produce much higher quality lexicons with methods that combine ( 1 ) unsupervised bitext mining and ( 2 ) unsupervised word alignment . Directly applying a pipeline that uses recent algorithms for both subproblems significantly improves induced lexicon quality and further gains are possible by learning to filter the resulting lexical entries , with both unsupervised and semisupervised schemes . Our final model outperforms the state of the art on the BUCC 2020 shared task by 14 F 1 points averaged over 12 language pairs , while also providing a more interpretable approach that allows for rich reasoning of word meaning in context . Further analysis of our output and the standard reference lexicons suggests they are of comparable quality , and new benchmarks may be needed to measure further progress on this task . 1 Bilingual lexicons map words in one language to their translations in another , and can be automatically induced by learning linear projections to align monolingual word embedding spaces ( Artetxe et al . , 2016 ; Smith et al . , 2017 ; Lample et al . , 2018 , inter alia ) . Although very successful in practice , the linear nature of these methods encodes unrealistic simplifying assumptions ( e.g. all translations of a word have similar embeddings ) . In this paper , we show it is possible to produce much higher quality lexicons without these restrictions by introducing new methods that combine ( 1 ) unsupervised bitext mining and ( 2 ) unsupervised word alignment . We show that simply pipelining recent algorithms for unsupervised bitext mining ( Tran et al . , 2020 ) and unsupervised word alignment ( Sabet et al . , 2020 ) significantly improves bilingual lexicon induction ( BLI ) quality , and that further gains are possible by learning to filter the resulting lexical entries . Improving on a recent method for doing BLI via unsupervised machine translation ( Artetxe et al . , 2019 ) , we show that unsupervised mining produces better bitext for lexicon induction than translation , especially for less frequent words . These core contributions are established by systematic experiments in the class of bitext construction and alignment methods ( Figure 1 ) . Our full induction algorithm filters the lexicon found via the initial unsupervised pipeline . The filtering can be either fully unsupervised or weakly-supervised : for the former , we filter using simple heuristics and global statistics ; for the latter , we train a multi-layer perceptron ( MLP ) to predict the probability of a word pair being in the lexicon , where the features are global statistics of word alignments . In addition to BLI , our method can also be directly adapted to improve word alignment and reach competitive or better alignment accuracy than the state of the art on all investigated language pairs . We find that improved alignment in sentence representations ( Tran et al . , 2020 ) leads to better contextual word alignments using local similarity ( Sabet et al . , 2020 ) . Our final BLI approach outperforms the previous state of the art on the BUCC 2020 shared task ( Rapp et al . , 2020 ) by 14 F 1 points averaged over 12 language pairs . Manual analysis shows that most of our false positives are due to the incompleteness of the reference and that our lexicon is comparable to the reference lexicon and the output of a supervised system . Because both of our key building blocks make use of the pretrainined contextual representations from mBART ( Liu et al . , Word Alignment Statistical Feature Extraction cooccurrence(good , guten ) = 2 one-to-one align(good , guten ) = 2 many-to-one align(good , guten ) = 0 cosine_similarity(good , guten ) = 0.8 inner_product(good , guten ) = 1.8 count(good ) = 2 count(guten ) = 2 We present a direct and effective framework for BLI with unsupervised bitext mining and word alignment , which sets a new state of the art on the task . From the perspective of pretrained multilingual models ( Conneau et al . , 2019 ; Liu et al . , 2020 ; Tran et al . , 2020 , inter alia ) , our work shows that they have successfully captured information about word translation that can be extracted using similarity based alignment and refinement . Although BLI is only about word types , it strongly benefits from contextualized reasoning at the token level . umich.edu/ ˜mihalcea / wpt ( en-fr and ro-en ) ; https : //web.eecs.umich.edu/ ˜mihalcea / wpt05 ( enhi )
Existing methods to induce bilingual lexicons use linear projections to align word embeddings that are based on unrealistic simplifying assumptions.
They propose to use both unsupervised bitext mining and unsupervised word alignment methods to produce higher quality lexicons.
The proposed method achieves the state-of-the-art in the bilingual lexical induction task while keeping the interpretability of their pipeline.
D18-1065
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy , accurate , and efficient attention mechanism for sequence to sequence learning . The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention . On five translation and two morphological inflection tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms . In structured input-output models as used in tasks like translation and image captioning , the attention variable decides which part of the input aligns to the current output . Many attention mechanisms have been proposed ( Xu et al . , 2015 ; Bahdanau et al . , 2014 ; Luong et al . , 2015 ; Martins and Astudillo , 2016 ) but the de facto standard is a soft attention mechanism that first assigns attention weights to input encoder states , then computes an attention weighted ' soft ' aligned input state , which finally derives the output distribution . This method is end to end differentiable and easy to implement . Another less popular variant is hard attention that aligns each output to exactly one input state but requires intricate training to teach the network to choose that state . When successfully trained , hard attention is often found to be more accurate ( Xu et al . , 2015 ; Zaremba and Sutskever , 2015 ) . In NLP , a recent success has been in a monotonic hard attention setting in morphological inflection tasks ( Yu et al . , 2016 ; Aharoni and Goldberg , 2017 ) . For general seq2seq learning , methods like Sparse-Max ( Martins and Astudillo , 2016 ) and local attention ( Luong et al . , 2015 ) were proposed to bridge the gap between soft and hard attention . * Both authors contributed equally to this work In this paper we propose a surprisingly simpler alternative based on the original joint distribution between output and attention , of which existing soft and hard attention mechanisms are approximations . The joint model couples input states individually to the output like in hard attention , but it combines the advantage of end-to-end trainability of soft attention . When the number of input states is large , we propose to use a simple approximation of the full joint distribution called Beam-joint . This approximation is also easily trainable and does not suffer from the high variance of Monte-Carlo sampling gradients of hard attention . We evaluated our model on five translation tasks and increased BLEU by 0.8 to 1.7 over soft attention , which in turn was better than hard and the recent Sparsemax ( Martins and Astudillo , 2016 ) attention . More importantly , the training process was as easy as soft attention . For further support , we also evaluate on two morphological inflection tasks and got gains over soft and hard attention . In this paper we showed a simple yet effective approximation of the joint attention-output distribution in sequence to sequence learning . Our joint model consistently provides higher accuracy without significant running time overheads in five translation and two morphological inflection tasks . An interesting direction for future work is to extend beam-joint to multi-head attention architectures as in ( Vaswani et al . , 2017 ; Xu Chen , 2018 ) .
Softmax attention models are popular because of their differentiable and easy to implement nature while hard attention models outperform them when successfully trained.
They propose a method to approximate the joint attention-output distribution which provides sharp attention as hard attention and easy implementation as soft attention.
The proposed approach outperforms soft attention models and recent hard attention and Sparsemax models on five translation tasks and also on morphological inflection tasks.
2022.acl-long.304
Contrastive learning has achieved impressive success in generation tasks to militate the " exposure bias " problem and discriminatively exploit the different quality of references . Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word , while keywords are the gist of the text and dominant the constrained mapping relationships . Hence , in this work , we propose a hierarchical contrastive learning mechanism , which can unify hybrid granularities semantic meaning in the input text . Concretely , we first propose a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations . Then , we construct intra-contrasts within instance-level and keyword-level , where we assume words are sampled nodes from a sentence distribution . Finally , to bridge the gap between independent contrast levels and tackle the common contrast vanishing problem , we propose an inter-contrast mechanism that measures the discrepancy between contrastive keyword nodes respectively to the instance distribution . Experiments demonstrate that our model outperforms competitive baselines on paraphrasing , dialogue generation , and storytelling tasks . Generation tasks such as storytelling , paraphrasing , and dialogue generation aim at learning a certain correlation between text pairs that maps an arbitrary-length input to another arbitrary-length output . Traditional methods are mostly trained with " teacher forcing " and lead to an " exposure bias " problem ( Schmidt , 2019 ) . Incorporating the generation method with contrastive learning achieved impressive performance on tackling such issues , which takes an extra consideration of synthetic negative samples contrastively ( Lee et al . , 2021 Existing contrastive mechanisms are mainly focused on the instance level ( Lee et al . , 2021 ; Cai et al . , 2020 ) . However , word-level information is also of great importance . Take the case shown in the upper part of Figure 1 for example , the keyword covers the gist of the input text and determines the embedding space of the text . The text representation will be significantly affected if adding a slight perturbation on the keyword , i.e. , changing " cosmology " to " astrophysics " . In addition , as shown on the bottom part , under some circumstances , it is too easy for the model to do the classification since the semantic gap between contrastive pairs is huge . Thus , the model fails to distinguish the actual discrepancy , which causes a " contrast vanishing " problem at both instance-level and keyword-level . Based on the above motivation , in this paper , we propose a hierarchical contrastive learning method built on top of the classic CVAE structure . We choose CVAE due to its ability in modeling global properties such as syntactic , semantic , and discourse coherence ( Li et al . , 2015 ; Yu et al . , 2020 ) . We first learn different granularity representations through two independent contrast , i.e. , instancelevel and keyword-level . Specifically , we use the universal and classic TextRank ( Mihalcea and Tarau , 2004 ) method to extract keywords from each text , which contain the most important information and need to be highlighted . On the instancelevel , we treat the keyword in the input text as an additional condition for a better prior semantic distribution . Then , we utilize Kullback-Leibler divergence ( Kullback and Leibler , 1951 ) to reduce the distance between prior distribution and positive posterior distribution , and increase the distance with the negative posterior distribution . While on the keyword-level , we propose a keyword graph via contrastive correlations of positive-negative pairs to learn informative and accurate keyword representations . By treating the keyword in the output text as an anchor , the imposter keyword is produced by neighboring nodes of the anchor keyword and forms the keyword-level contrast , where the similarity between the imposter keyword and the anchor keyword is poorer than the positive keyword . To unify individual intra-contrasts and tackle the " contrast vanishing " problem in independent contrastive granularities , we leverage an inter-contrast , the Mahalanobis contrast , to investigate the contrastive enhancement based on the Mahalanobis distance ( De Maesschalck et al . , 2000 ) , a measure of the distance between a point and a distribution , between the instance distribution and the keyword representation . Concretely , we ensure the distance from the anchor instance distribution to the groundtruth keyword vector is closer than to the imposter keyword vector . The Mahalanobis contrast plays an intermediate role that joins the different granularities contrast via incorporating the distribution of instance with the representation of its crucial part , and makes up a more comprehensive keyworddriven hierarchical contrastive mechanism , so as to ameliorate the generated results . We empirically show that our model outperforms CVAE and other baselines significantly on three generation tasks : paraphrasing , dialogue genera-tion , and storytelling . Our contributions can be summarized as follows : • To our best knowledge , we are the first to propose an inter-level contrastive learning method , which unifies instance-level and keyword-level contrasts in the CVAE framework . • We propose three contrastive learning measurements : KL divergence for semantic distribution , cosine distance for points , and Mahalanobis distance for points with distribution . • We introduce a global keyword graph to obtain polished keyword representations and construct imposter keywords for contrastive learning . In this paper , we propose a hierarchical contrastive learning mechanism , which consists of intra-contrasts within instance-level and keywordlevel and inter-contrast with Mahalanobis contrast . The experimental results yield significant out-performance over baselines when applied in the CVAE framework . In the future , we aim to extend the contrastive learning mechanism to different basic models , and will explore contrastive learning methods based on external knowledge .
Existing works on contrastive learning for text generation focus only on instance-level while word-level information such as keywords is also of great importance.
They propose a CVAE-based hierarchical contrastive learning within instance and keyword-level using a keyword graph which iteratively polishes the keyword representations.
The proposed model outperforms CVAE and baselines on storytelling, paraphrasing, and dialogue generation tasks.
2020.emnlp-main.384
Word embedding models are typically able to capture the semantics of words via the distributional hypothesis , but fail to capture the numerical properties of numbers that appear in a text . This leads to problems with numerical reasoning involving tasks such as question answering . We propose a new methodology to assign and learn embeddings for numbers . Our approach creates Deterministic , Independentof-Corpus Embeddings ( referred to as DICE ) for numbers , such that their cosine similarity reflects the actual distance on the number line . DICE outperforms a wide range of pre-trained word embedding models across multiple examples of two tasks : ( i ) evaluating the ability to capture numeration and magnitude ; and ( ii ) to perform list maximum , decoding , and addition . We further explore the utility of these embeddings in downstream applications by initializing numbers with our approach for the task of magnitude prediction . We also introduce a regularization approach to learn model-based embeddings of numbers in a contextual setting . Word embeddings capture semantic relationships between words by operationalizing the distributional hypothesis ( Harris , 1954 ; Firth , 1957 ) . They can be learned either non-contextually ( Mikolov et al . , 2013b ; Pennington et al . , 2014 ; Bojanowski et al . , 2017 ) or contextually ( Devlin et al . , 2018 ; Peters et al . , 2018 ) . Non-contextual embeddings have worked well on various language understanding and semantic tasks ( Rumelhart et al . , 1988 ; Mikolov et al . , 2013a , b ) . More recently , they have also been used as pre-trained word embeddings to aid more sophisticated contextual models for solving rigorous natural language processing ( NLP ) problems , including translation , paraphrasing , and sentence-similarity tasks ( Kiros et al . , 2015 ; Wieting et al . , 2015 ) . While word embeddings effectively capture semantic relationships between words , they are less effective at capturing numeric properties associated with numbers . Though numbers represent a significant percentage of tokens in a corpus , they are often overlooked . In non-contextual word embedding models , they are treated like any other word , which leads to misinterpretation . For instance , they exhibit unintuitive similarities with other words and do not contain strong prior information about the magnitude of the number they encode . In sentence similarity and reasoning tasks , failure to handle numbers causes as much as 29 % of contradictions ( De Marneffe et al . , 2008 ) . In other data-intensive tasks where numbers are abundant , like neural machine translation , they are masked to hide the translation models inefficiency in dealing with them ( Mitchell and Lapata , 2009 ) . There are a variety of tests proposed to measure the efficiency of number embeddings . For instance , Naik et al . ( 2019 ) shows that GloVe ( Pennington et al . , 2014 ) , word2vec ( Mikolov et al . , 2013b ) , and fastText ( Joulin et al . , 2016 ; Bojanowski et al . , 2017 ) fail to capture numeration and magnitude properties of a number . Numeration is the property of associating numbers with their corresponding word representations ( " 3 " and " three " ) while magnitude represents a number 's actual value ( 3 < 4 ) . Further , Wallace et al . ( 2019 ) proposes several tests for analyzing numerical reasoning of number embeddings that include list maximum , decoding , and addition . In this paper , we experimentally demonstrate that if the cosine similarity between word embeddings of two numbers reflects their actual distance on the number line , the resultant word embeddings are useful in downstream tasks . We first demonstrate how Deterministic , Independent-of-Corpus Embeddings ( DICE ) can be constructed such that they almost perfectly capture properties of numera-tion and magnitude . These non-contextual embeddings also perform well on related tests for numeracy ( Wallace et al . , 2019 ) . To demonstrate the efficacy of DICE for downstream tasks , we explore its utility in two experiments . First , we design a DICE embedding initialized Bi-LSTM network to classify the magnitude of masked numbers in the 600 K dataset ( Chen et al . , 2019 ) . Second , given the popularity of modern contextual model-based embeddings , we devise a regularization procedure that emulates the hypothesis proposed by DICE and can be employed in any task-based fine-tuning process . We demonstrate that adding such regularization helps the model internalize notions of numeracy while learning task-based contextual embeddings for the numbers present in the text . We find promising results in a numerical reasoning task that involves numerical question answering based on a sub-split of the popular SQuAD dataset ( Rajpurkar et al . , 2016 ) . Our contribution can be summarized as follows : • We propose a deterministic technique to learn numerical embeddings . DICE embeddings are learned independently of corpus and effectively capture properties of numeracy . • We prove experimentally that the resultant embeddings learned using the above methods improve a model 's ability to reason about numbers in a variety of tasks , including numeration , magnitude , list maximum , decoding , and addition . • We also demonstrate that properties of DICE can be adapted to contextual models , like BERT ( Devlin et al . , 2018 ) , through a novel regularization technique for solving tasks involving numerical reasoning . In this work , we methodologically assign and learn embeddings for numbers to reflect their numerical properties . We validate our proposed approach with several experiments that test number embeddings . The tests that evaluate the numeral embeddings are fundamentally applicable to all real numbers . Finally , we introduced an approach to jointly learn embeddings of numbers and words that preserve numerical properties and evaluated them on a contextual word embedding based model . In our future work , we would like to extend this idea to unseen numbers in vocabulary as a function of seen ones .
Existing word embeddings treat numbers like words failing to capture numeration and magnitude properties of numbers which is problematic for tasks such as question answering.
They propose a deterministic technique to learn numerical embeddings where cosine similarity reflects the actual distance and a regularization approach for a contextual setting.
A Bi-LSTM network initialized with the proposed embedding shows the ability to capture numeration and magnitude and to perform list maximum, decoding, and addition.
P12-1103
We propose a novel approach to improve SMT via paraphrase rules which are automatically extracted from the bilingual training data . Without using extra paraphrase resources , we acquire the rules by comparing the source side of the parallel corpus with the target-to-source translations of the target side . Besides the word and phrase paraphrases , the acquired paraphrase rules mainly cover the structured paraphrases on the sentence level . These rules are employed to enrich the SMT inputs for translation quality improvement . The experimental results show that our proposed approach achieves significant improvements of 1.6~3.6 points of BLEU in the oral domain and 0.5~1 points in the news domain . This work was done when the first author was visiting Baidu . The translation quality of the SMT system is highly related to the coverage of translation models . However , no matter how much data is used for training , it is still impossible to completely cover the unlimited input sentences . This problem is more serious for online SMT systems in real-world applications . Naturally , a solution to the coverage problem is to bridge the gaps between the input sentences and the translation models , either from the input side , which targets on rewriting the input sentences to the MT-favored expressions , or from the side of translation models , which tries to enrich the translation models to cover more expressions . In recent years , paraphrasing has been proven useful for improving SMT quality . The proposed methods can be classified into two categories according to the paraphrase targets : ( 1 ) enrich translation models to cover more bilingual expressions ; ( 2 ) paraphrase the input sentences to reduce OOVs or generate multiple inputs . In the first category , He et al . ( 2011 ) , Bond et al . ( 2008 ) and Nakov ( 2008 ) enriched the SMT models via paraphrasing the training corpora . Kuhn et al . ( 2010 ) and Max ( 2010 ) used paraphrases to smooth translation models . For the second category , previous studies mainly focus on finding translations for unknown terms using phrasal paraphrases . Callison-Burch et al . ( 2006 ) and Marton et al . ( 2009 ) paraphrase unknown terms in the input sentences using phrasal paraphrases extracted from bilingual and monolingual corpora . Mirkin et al . ( 2009 ) rewrite OOVs with entailments and paraphrases acquired from WordNet . Onishi et al . ( 2010 ) and Du et al . ( 2010 ) use phrasal paraphrases to build a word lattice to get multiple input candidates . In the above methods , only word or phrasal paraphrases are used for input sentence rewriting . No structured paraphrases on the sentence level have been investigated . However , the information in the sentence level is very important for disambiguation . For example , we can only substitute play with drama in a context related to stage or theatre . Phrasal paraphrase substitutions can hardly solve such kind of problems . In this paper , we propose a method that rewrites the input sentences of the SMT system using automatically extracted paraphrase rules which can capture structures on sentence level in addition to paraphrases on the word or phrase level . Without extra paraphrase resources , a novel approach is proposed to acquire paraphrase rules from the bilingual training corpus based on the results of Forward-Translation and Back-Translation . The rules target on rewriting the input sentences to an MT-favored expression to ensure a better translation . The paraphrase rules cover all kinds of paraphrases on the word , phrase and sentence levels , enabling structure reordering , word or phrase insertion , deletion and substitution . The experimental results show that our proposed approach achieves significant improvements of 1.6~3.6 points of BLEU in the oral domain and 0.5~1 points in the news domain . The remainder of the paper is organized as follows : Section 2 makes a comparison between the Forward-Translation and Back-Translation . Section 3 introduces our methods that extract paraphrase rules from the bilingual corpus of SMT . Section 4 describes the strategies for constructing word lattice with paraphrase rules . The experimental results and some discussions are presented in Section 5 and Section 6 . Section 7 compares our work to the previous researches . Finally , Section 8 concludes the paper and suggests directions for future work . In this paper , we propose a novel method for extracting paraphrase rules by comparing the source side of bilingual corpus to the target-tosource translation of the target side . The acquired paraphrase rules are employed to enrich the SMT inputs , which target on rewriting the input sentences to an MT-favored form . The paraphrase rules cover all kinds of paraphrases on the word , phrase and sentence levels , enabling structure reordering , word or phrase insertion , deletion and substitution . Experimental results show that the paraphrase rules can improve SMT quality in both the oral and news domains . The manual investigation on oral translation results indicate that the paraphrase rules capture four kinds of MTfavored transformation to ensure translation quality improvement .
Incorporating paraphrases improves statistical machine translation however no works investigate sentence level paraphrases.
They propose to use bilingual training data to obtain paraphrase rules on word, phrase and sentence levels to rewrite inputs to be MT-favored.
The acquired paraphrase rules improve translation qualities in oral and news domains.
N09-1072
Automatically extracting social meaning and intention from spoken dialogue is an important task for dialogue systems and social computing . We describe a system for detecting elements of interactional style : whether a speaker is awkward , friendly , or flirtatious . We create and use a new spoken corpus of 991 4-minute speed-dates . Participants rated their interlocutors for these elements of style . Using rich dialogue , lexical , and prosodic features , we are able to detect flirtatious , awkward , and friendly styles in noisy natural conversational data with up to 75 % accuracy , compared to a 50 % baseline . We describe simple ways to extract relatively rich dialogue features , and analyze which features performed similarly for men and women and which were gender-specific . How can we extract social meaning from speech , deciding if a speaker is particularly engaged in the conversation , is uncomfortable or awkward , or is particularly friendly and flirtatious ? Understanding these meanings and how they are signaled in language is an important sociolinguistic task in itself . Extracting them automatically from dialogue speech and text is crucial for developing socially aware computing systems for tasks such as detection of interactional problems or matching conversational style , and will play an important role in creating more natural dialogue agents ( Pentland , 2005 ; Nass and Brave , 2005 ; Brave et al . , 2005 ) . Cues for social meaning permeate speech at every level of linguistic structure . Acoustic cues such as low and high F0 or energy and spectral tilt are important in detecting emotions such as annoyance , anger , sadness , or boredom ( Ang et al . , 2002 ; Lee and Narayanan , 2002 ; Liscombe et al . , 2003 ) , speaker characteristics such as charisma ( Rosenberg and Hirschberg , 2005 ) , or personality features like extroversion ( Mairesse et al . , 2007 ; Mairesse and Walker , 2008 ) . Lexical cues to social meaning abound . Speakers with links to depression or speakers who are under stress use more first person singular pronouns ( Rude et al . , 2004 ; Pennebaker and Lay , 2002 ; Cohn et al . , 2004 ) , positive emotion words are cues to agreeableness ( Mairesse et al . , 2007 ) , and negative emotion words are useful cues to deceptive speech ( Newman et al . , 2003 ) . The number of words in a sentence can be a useful feature for extroverted personality ( Mairesse et al . , 2007 ) . Finally , dialog features such as the presence of disfluencies can inform listeners about speakers ' problems in utterance planning or about confidence ( Brennan and Williams , 1995 ; Brennan and Schober , 2001 ) . Our goal is to see whether cues of this sort are useful in detecting particular elements of conversational style and social intention ; whether a speaker in a speed-dating conversation is judged by the interlocutor as friendly , awkward , or flirtatious . The results presented here should be regarded with some caution . The sample is not a random sample of English speakers or American adults , and speed dating is not a natural context for expressing every conversational style . Therefore , a wider array of studies across populations and genres would be required before a more general theory of conversational styles is established . On the other hand , the presented results may under-reflect the relations being captured . The quality of recordings and coarse granularity ( 1 second ) of the time-stamps likely cloud the relations , and as the data is cleaned and improved , we expect the associations to only grow stronger . Caveats aside , we believe the evidence indicates that the perception of several types of conversational style have relatively clear signals across genders , but with some additional gender contextualization . Both genders convey flirtation by laughing more , speaking faster , and using higher and more variable pitch . Both genders convey friendliness by laughing more , and using collaborative completions . However , we do find gender differences ; men asl more questions when ( labeled as ) flirting , women ask fewer . Men labeled as flirting are softer , but women labeled as flirting are louder . Women flirt-ing swear more , while men are more likely to use sexual vocabulary . Gender differences exist as well for the other variables . Men labeled as friendly use you while women labeled as friendly use I. Friendly women are very disfluent ; friendly men are not . While the features for friendly and flirtatious speech overlap , there are clear differences . Men speaker faster and with higher f0 ( min ) in flirtatious speech , but not faster and with lower f0 ( min ) in friendly speech . For men , flirtatious speech involves more questions and repair questions , while friendly speech does not . For women , friendly speech is more disfluent than flirtatious speech , and has more collaborative style ( completions , repair questions , appreciations ) . We also seem to see a model of collaborative conversational style ( probably related to the collaborative floor of Edelsky ( 1981 ) and Coates ( 1996 ) ) , cued by the use of more collaborative completions , repair questions and other questions , you , and laughter . These collaborative techniques were used by both women and men who were labeled as friendly , and occurred less with men labeled as awkward . Women themselves displayed more of this collaborative conversational style when they labeled the men as friendly . For women only , collaborative style included appreciations ; while for men only , collaborative style included overlaps . In addition to these implications for social science , our work has implications for the extraction of meaning in general . A key focus of our work was on ways to extract useful dialog act and disfluency features ( repair questions , backchannels , appreciations , restarts , dispreferreds ) with very shallow methods . These features were indeed extractable and proved to be useful features in classification . We are currently extending these results to predict date outcomes including ' liking ' , extending work such as Madan and Pentland ( 2006 ) .
Methods to extract social meanings such as engagement from speech remain unknown while it is important in sociolinguistics and to develop socially aware computing systems.
They create a spoken corpus from conversations in speed-dating and perform analysis using extracted dialogue features with a focus on genders.
They found several gender dependent and independent phenomena in conversations related to the speed of speaking, laughing or asking questions.
P18-1256
We introduce the task of predicting adverbial presupposition triggers such as also and again . Solving such a task requires detecting recurring or similar events in the discourse context , and has applications in natural language generation tasks such as summarization and dialogue systems . We create two new datasets for the task , derived from the Penn Treebank and the Annotated English Gigaword corpora , as well as a novel attention mechanism tailored to this task . Our attention mechanism augments a baseline recurrent neural network without the need for additional trainable parameters , minimizing the added computational cost of our mechanism . We demonstrate that our model statistically outperforms a number of baselines , including an LSTM-based language model . In pragmatics , presuppositions are assumptions or beliefs in the common ground between discourse participants when an utterance is made ( Frege , 1892 ; Strawson , 1950 ; Stalnaker , 1973 Stalnaker , , 1998 ) ) , and are ubiquitous in naturally occurring discourses ( Beaver and Geurts , 2014 ) . Presuppositions underly spoken statements and written sentences and understanding them facilitates smooth communication . We refer to expressions that indicate the presence of presuppositions as presupposition triggers . These include definite descriptions , factive verbs and certain adverbs , among others . For example , consider the following statements : ( 1 ) John is going to the restaurant again . * Authors ( listed in alphabetical order ) contributed equally . ( 2 ) John has been to the restaurant . ( 1 ) is only appropriate in the context where ( 2 ) is held to be true because of the presence of the presupposition trigger again . One distinguishing characteristic of presupposition is that it is unaffected by negation of the presupposing context , unlike other semantic phenomena such as entailment and implicature . The negation of ( 1 ) , John is not going to the restaurant again . , also presupposes ( 2 ) . Our focus in this paper is on adverbial presupposition triggers such as again , also and still . Adverbial presupposition triggers indicate the recurrence , continuation , or termination of an event in the discourse context , or the presence of a similar event . In one study of presuppositional triggers in English journalistic texts ( Khaleel , 2010 ) , adverbial triggers were found to be the most commonly occurring presupposition triggers after existential triggers . 1 Despite their frequency , there has been little work on these triggers in the computational literature from a statistical , corpus-driven perspective . As a first step towards language technology systems capable of understanding and using presuppositions , we propose to investigate the detection of contexts in which these triggers can be used . This task constitutes an interesting testing ground for pragmatic reasoning , because the cues that are indicative of contexts containing recurring or similar events are complex and often span more than one sentence , as illustrated in Sentences ( 1 ) and ( 2 ) . Moreover , such a task has immediate practical consequences . For example , in language generation applications such as summarization and dialogue systems , adding presuppositional triggers in contextually appropriate loca-1 Presupposition of existence are triggered by possessive constructions , names or definite noun phrases . tions can improve the readability and coherence of the generated output . We create two datasets based on the Penn Treebank corpus ( Marcus et al . , 1993 ) and the English Gigaword corpus ( Graff et al . , 2007 ) , extracting contexts that include presupposition triggers as well as other similar contexts that do not , in order to form a binary classification task . In creating our datasets , we consider a set of five target adverbs : too , again , also , still , and yet . We focus on these adverbs in our investigation because these triggers are well known in the existing linguistic literature and commonly triggering presuppositions . We control for a number of potential confounding factors , such as class balance , and the syntactic governor of the triggering adverb , so that models can not exploit these correlating factors without any actual understanding of the presuppositional properties of the context . We test a number of standard baseline classifiers on these datasets , including a logistic regression model and deep learning methods based on recurrent neural networks ( RNN ) and convolutional neural networks ( CNN ) . In addition , we investigate the potential of attention-based deep learning models for detecting adverbial triggers . Attention is a promising approach to this task because it allows a model to weigh information from multiple points in the previous context and infer long-range dependencies in the data ( Bahdanau et al . , 2015 ) . For example , the model could learn to detect multiple instances involving John and restaurants , which would be a good indication that again is appropriate in that context . Also , an attention-based RNN has achieved success in predicting article definiteness , which involves another class of presupposition triggers ( Kabbara et al . , 2016 ) . As another contribution , we introduce a new weighted pooling attention mechanism designed for predicting adverbial presupposition triggers . Our attention mechanism allows for a weighted averaging of our RNN hidden states where the weights are informed by the inputs , as opposed to a simple unweighted averaging . Our model uses a form of self-attention ( Paulus et al . , 2018 ; Vaswani et al . , 2017 ) , where the input sequence acts as both the attention mechanism 's query and key / value . Unlike other attention models , instead of simply averaging the scores to be weighted , our approach aggregates ( learned ) attention scores by learning a reweighting scheme of those scores through another level ( dimension ) of attention . Additionally , our mechanism does not introduce any new parameters when compared to our LSTM baseline , reducing its computational impact . We compare our model using the novel attention mechanism against the baseline classifiers in terms of prediction accuracy . Our model outperforms these baselines for most of the triggers on the two datasets , achieving 82.42 % accuracy on predicting the adverb " also " on the Gigaword dataset . The contributions of this work are as follows : 1 . We introduce the task of predicting adverbial presupposition triggers . 2 . We present new datasets for the task of detecting adverbial presupposition triggers , with a data extraction method that can be applied to other similar pre-processing tasks . 3 . We develop a new attention mechanism in an RNN architecture that is appropriate for the prediction of adverbial presupposition triggers , and show that its use results in better prediction performance over a number of baselines without introducing additional parameters . 2 Related Work In this work , we have investigated the task of predicting adverbial presupposition triggers and introduced several datasets for the task . Additionally , we have presented a novel weighted-pooling attention mechanism which is incorporated into a recurrent neural network model for predicting the presence of an adverbial presuppositional trigger . Our results show that the model outperforms the CNN and LSTM , and does not add any additional parameters over the standard LSTM model . This shows its promise in classification tasks involving capturing and combining relevant information from multiple points in the previous context . In future work , we would like to focus more on designing models that can deal with and be optimized for scenarios with severe data imbalance . We would like to also explore various applications of presupposition trigger prediction in language generation applications , as well as additional attention-based neural network architectures .
Adverbaial triggers indicate the event recurrence, continuation, or termination in the discourse context and are frequently found in English but there are few related works.
They introduce an adverbial presupposition trigger prediction task and datasets and propose an attention mechanism that augments a recurrent neural network without additional trainable parameters.
The proposed model outperforms baselines including an LSTM-based language model on most of the triggers on the two datasets.
P08-1116
This paper proposes a novel method that exploits multiple resources to improve statistical machine translation ( SMT ) based paraphrasing . In detail , a phrasal paraphrase table and a feature function are derived from each resource , which are then combined in a log-linear SMT model for sentence-level paraphrase generation . Experimental results show that the SMT-based paraphrasing model can be enhanced using multiple resources . The phrase-level and sentence-level precision of the generated paraphrases are above 60 % and 55 % , respectively . In addition , the contribution of each resource is evaluated , which indicates that all the exploited resources are useful for generating paraphrases of high quality . Paraphrases are alternative ways of conveying the same meaning . Paraphrases are important in many natural language processing ( NLP ) applications , such as machine translation ( MT ) , question answering ( QA ) , information extraction ( IE ) , multidocument summarization ( MDS ) , and natural language generation ( NLG ) . This paper addresses the problem of sentencelevel paraphrase generation , which aims at generating paraphrases for input sentences . An example of sentence-level paraphrases can be seen below : S1 : The table was set up in the carriage shed . S2 : The table was laid under the cart-shed . Paraphrase generation can be viewed as monolingual machine translation ( Quirk et al . , 2004 ) , which typically includes a translation model and a language model . The translation model can be trained using monolingual parallel corpora . However , acquiring such corpora is not easy . Hence , data sparseness is a key problem for the SMT-based paraphrasing . On the other hand , various methods have been presented to extract phrasal paraphrases from different resources , which include thesauri , monolingual corpora , bilingual corpora , and the web . However , little work has been focused on using the extracted phrasal paraphrases in sentence-level paraphrase generation . In this paper , we exploit multiple resources to improve the SMT-based paraphrase generation . In detail , six kinds of resources are utilized , including : ( 1 ) an automatically constructed thesaurus , ( 2 ) a monolingual parallel corpus from novels , ( 3 ) a monolingual comparable corpus from news articles , ( 4 ) a bilingual phrase table , ( 5 ) word definitions from Encarta dictionary , and ( 6 ) a corpus of similar user queries . Among the resources , ( 1 ) , ( 2 ) , ( 3 ) , and ( 4 ) have been investigated by other researchers , while ( 5 ) and ( 6 ) are first used in this paper . From those resources , six phrasal paraphrase tables are extracted , which are then used in a log-linear SMTbased paraphrasing model . Both phrase-level and sentence-level evaluations were carried out in the experiments . In the former one , phrase substitutes occurring in the paraphrase sentences were evaluated . While in the latter one , the acceptability of the paraphrase sentences was evaluated . Experimental results show that : ( 1 ) The SMT-based paraphrasing is enhanced using multiple resources . The phrase-level and sentence-level precision of the generated paraphrases exceed 60 % and 55 % , respectively . ( 2 ) Although the contributions of the resources differ a lot , all the resources are useful . ( 3 ) The performance of the method varies greatly on different test sets and it performs best on the test set of news sentences , which are from the same source as most of the training data . The rest of the paper is organized as follows : Section 2 reviews related work . Section 3 introduces the log-linear model for paraphrase generation . Section 4 describes the phrasal paraphrase extraction from different resources . Section 5 presents the parameter estimation method . Section 6 shows the experiments and results . Section 7 draws the conclusion . This paper proposes a method that improves the SMT-based sentence-level paraphrase generation using phrasal paraphrases automatically extracted from different resources . Our contribution is that we combine multiple resources in the framework of SMT for paraphrase generation , in which the dic-tionary definitions and similar user queries are first used as phrasal paraphrases . In addition , we analyze and compare the contributions of different resources . Experimental results indicate that although the contributions of the exploited resources differ a lot , they are all useful to sentence-level paraphrase generation . Especially , the dictionary definitions and similar user queries are effective for paraphrasing some certain types of phrases . In the future work , we will try to use syntactic and context constraints in paraphrase generation to enhance the acceptability of the paraphrases . In addition , we will extract paraphrase patterns that contain more structural variation and try to combine the SMT-based and pattern-based systems for sentencelevel paraphrase generation .
Paraphrase generation requires monolingual parallel corpora which is not easily obtainable, and few works focus on using the extracted phrasal paraphrases in sentence-level paraphrase generation.
They propose to exploit six paraphrase resources to extract phrasal paraphrase tables that are further used to build a log-linear statistical machine translation-based paraphrasing model.
They show that using multiple resources enhances paraphrase generation quality in precision on phrase and sentence level especially when they are similar to user queries.
P08-1027
There are many possible different semantic relationships between nominals . Classification of such relationships is an important and difficult task ( for example , the well known noun compound classification task is a special case of this problem ) . We propose a novel pattern clusters method for nominal relationship ( NR ) classification . Pattern clusters are discovered in a large corpus independently of any particular training set , in an unsupervised manner . Each of the extracted clusters corresponds to some unspecified semantic relationship . The pattern clusters are then used to construct features for training and classification of specific inter-nominal relationships . Our NR classification evaluation strictly follows the ACL SemEval-07 Task 4 datasets and protocol , obtaining an f-score of 70.6 , as opposed to 64.8 of the best previous work that did not use the manually provided WordNet sense disambiguation tags . Automatic extraction and classification of semantic relationships is a major field of activity , of both practical and theoretical interest . A prominent type of semantic relationships is that holding between nominals1 . For example , in noun compounds many different semantic relationships are encoded by the same simple form ( Girju et al . , 2005 ) : ' dog food ' denotes food consumed by dogs , while ' summer morn-ing ' denotes a morning that happens in the summer . These two relationships are completely different semantically but are similar syntactically , and distinguishing between them could be essential for NLP applications such as question answering and machine translation . Relation classification usually relies on a training set in the form of tagged data . To improve results , some systems utilize additional manually constructed semantic resources such as WordNet ( WN ) ( Beamer et al . , 2007 ) . However , in many domains and languages such resources are not available . Furthermore , usage of such resources frequently requires disambiguation and connection of the data to the resource ( word sense disambiguation in the case of WordNet ) . Manual disambiguation is unfeasible in many practical tasks , and an automatic one may introduce errors and greatly degrade performance . It thus makes sense to try to minimize the usage of such resources , and utilize only corpus contexts in which the relevant words appear . A leading method for utilizing context information for classification and extraction of relationships is that of patterns ( Hearst , 1992 ; Pantel and Pennacchiotti , 2006 ) . The standard classification process is to find in an auxiliary corpus a set of patterns in which a given training word pair co-appears , and use pattern-word pair co-appearance statistics as features for machine learning algorithms . In this paper we introduce a novel approach , based on utilizing pattern clusters that are prepared separately and independently of the training set . We do not utilize any manually constructed resource or any manual tagging of training data beyond the cor-rect classification , thus making our method applicable to fully automated tasks and less domain and language dependent . Moreover , our pattern clustering algorithm is fully unsupervised . Our method is based on the observation that while each lexical pattern can be highly ambiguous , several patterns in conjunction can reliably define and represent a lexical relationship . Accordingly , we construct pattern clusters from a large generic corpus , each such cluster potentially representing some important generic relationship . This step is done without accessing any training data , anticipating that most meaningful relationships , including those in a given classification problem , will be represented by some of the discovered clusters . We then use the training set to label some of the clusters , and the labeled clusters to assign classes to tested items . One of the advantages of our method is that it can be used not only for classification , but also for further analysis and retrieval of the observed relationships2 . The semantic relationships between the components of noun compounds and between nominals in general are not easy to categorize rigorously . Several different relationship hierarchies have been proposed ( Nastase and Szpakowicz , 2003 ; Moldovan et al . , 2004 ) . Some classes , like Container-Contained , Time-Event and Product-Producer , appear in several classification schemes , while classes like Tool-Object are more vaguely defined and are subdivided differently . Recently , SemEval-07 Task 4 ( Girju et al . , 2007 ) proposed a benchmark dataset that includes a subset of 7 widely accepted nominal relationship ( NR ) classes , allowing consistent evaluation of different NR classification algorithms . In the SemEval event , 14 research teams evaluated their algorithms using this benchmark . Some of the teams have used the manually annotated WN labels provided with the dataset , and some have not . We evaluated our algorithm on SemEval-07 Task 4 data , showing superior results over participating algorithms that did not utilize WordNet disambiguation tags . We also show how pattern clusters can be used for a completely unsupervised classification of the test set . Since in this case no training data is used , this allows the automated discovery of a potentially unbiased classification scheme . Section 2 discusses related work , Section 3 outlines the pattern clustering algorithm , Section 4 details three classification methods , and Sections 5 and 6 describe the evaluation protocol and results . Relationship classification is known to improve many practical tasks , e.g. , textual entailment ( Tatu and Moldovan , 2005 ) . We have presented a novel framework for relationship classification , based on pattern clusters prepared as a standalone resource independently of the training set . Our method outperforms current state-of-the-art algorithms that do not utilize WordNet tags on Task 4 of SemEval-07 . In practical situations , it would not be feasible to provide a large amount of such sense disambiguation tags manually . Our method also shows competitive performance compared to the majority of task participants that do utilize WN tags . Our method can produce labeled pattern clusters , which can be potentially useful for automatic discovery of additional instances for a given relationship . We intend to pursue this promising direction in future work .
Using annotated data or semantic resources such as WordNet for relation classification introduces errors and such data is not available in many domains and languages.
They propose an unsupervised pattern clustering method for nominal relation classification using a large generic corpus enabling scale in domain and language.
Experiments on the ACL SemEval-07 dataset show the proposed method performs better than existing methods that do not use disambiguation tags.
2021.emnlp-main.185
Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability . Conventional approaches employ the siamese-network for this task , which obtains the sentence embeddings through modeling the context-response semantic relevance by applying a feed-forward network on top of the sentence encoders . However , as the semantic textual similarity is commonly measured through the element-wise distance metrics ( e.g. cosine and L2 distance ) , such architecture yields a large gap between training and evaluating . In this paper , we propose DialogueCSE , a dialogue-based contrastive learning approach to tackle this issue . DialogueCSE first introduces a novel matching-guided embedding ( MGE ) mechanism , which generates a contextaware embedding for each candidate response embedding ( i.e. the context-free embedding ) according to the guidance of the multi-turn context-response matching matrices . Then it pairs each context-aware embedding with its corresponding context-free embedding and finally minimizes the contrastive loss across all pairs . We evaluate our model on three multi-turn dialogue datasets : the Microsoft Dialogue Corpus , the Jing Dong Dialogue Corpus , and the E-commerce Dialogue Corpus . Evaluation results show that our approach significantly outperforms the baselines across all three datasets in terms of MAP and Spearman 's correlation measures , demonstrating its effectiveness . Further quantitative experiments show that our approach achieves better performance when leveraging more dialogue context and remains robust when less training data is provided . Sentence embeddings are used with success for a variety of NLP applications ( Cer et al . , 2018 ) and many prior methods have been proposed with different learning schemes . Kiros et al . ( 2015 ) ; Logeswaran and Lee ( 2018 ) ; Hill et al . ( 2016 ) train sentence encoders in a self-supervised manner with web pages and books . Conneau et al . ( 2017 ) ; Cer et al . ( 2018 ) ; Reimers and Gurevych ( 2019 ) propose to learn sentence embeddings on the supervised datasets such as SNLI ( Bowman et al . , 2015 ) and MNLI ( Williams et al . , 2018 ) . Although the supervised-learning approaches achieve better performance , they suffer from high cost of annotation in building the training dataset , which makes them hard to adapt to other domains or languages . Recently , learning sentence embeddings from dialogues has begun to attract increasing attention . Dialogues provide strong semantic relationships among conversational utterances and are usually easy to collect in large amounts . Such advantages make the dialogue-based self-supervised learning methods promising to achieve competitive or even superior performance against the supervised-learning methods , especially under the low-resource conditions . While promising , the issue of how to effectively exploit the dialogues for this task has not been sufficiently explored . Yang et al . ( 2018 ) propose to train an input-response prediction model on Reddit dataset ( Al-Rfou et al . , 2016 ) . Since they build their architecture based on the single-turn dialogue , the multi-turn dialogue history is not fully exploited . Henderson et al . ( 2020 ) demonstrate that introducing the multi-turn dialogue context can improve the sentence embedding performance . However , they concatenate the multi-turn dialogue context into a long token sequence , failing to model intersentence semantic relationships among the utterances . Recently , more advanced methods such as ( Reimers and Gurevych , 2019 ) achieve better performance by employing BERT ( Devlin et al . , 2019 ) as the sentence encoder . These works have in common that they employ a feed-forward network with a non-linear activation on top of the sentence en-coders to model the context-response semantic relevance , thereby learning the sentence embeddings . However , such architecture presents two limitations : ( 1 ) It yields a large gap between training and evaluating , since the semantic textual similarity is commonly measured by the element-wise distance metrics such as cosine and L2 distance . ( 2 ) Concatenating all the utterances in the dialogue context inevitably introduces the noise as well as the redundant information , resulting in a poor result . In this paper , we propose DialogueCSE , a dialogue-based contrastive learning approach to tackle these issues . We hold that the semantic matching relationships between the context and the response can be implicitly modeled through contrastive learning , thus making it possible to eliminate the gap between training and evaluating . To this end , we introduce a novel matching-guided embedding ( MGE ) mechanism . Specifically , MGE first pairs each utterance in the context with the response and performs a token-level dot-product operation across all the utterance-response pairs to obtain the multi-turn matching matrices . Then the multi-turn matching matrices are used as guidance to generate a context-aware embedding for the response embedding ( i.e. the context-free embedding ) . Finally , the context-aware embedding and the context-free embedding are paired as a training sample , whose label is determined by whether the context and the response are originally from the same dialogue . Our motivation is that once the context semantically matches the response , it has the ability to distill the context-aware information from the context-free embedding , which is exactly the learning objective of the sentence encoder that aims to produce context-aware sentence embeddings . We train our model on three multi-turn dialogue datasets : the Microsoft Dialogue Corpus ( MDC ) ( Li et al . , 2018 ) , the Jing Dong Dialogue Corpus ( JDDC ) ( Chen et al . , 2020 ) , and the E-commerce Dialogue Corpus ( ECD ) ( Zhang et al . , 2018 ) . To evaluate our model , we introduce two types of tasks : the semantic retrieval ( SR ) task and the dialogue-based semantic textual similarity ( D-STS ) task . Here we do not adopt the standard semantic textual similarity ( STS ) task ( Cer et al . , 2017 ) for two reasons : ( 1 ) As revealed in ( Zhang et al . , 2020 ) , the sentence embedding performance varies greatly as the domain of the training data changes . As a dialogue dataset is always about several certain domains , evaluating on the STS benchmark may mis-lead the evaluation of the model . ( 2 ) The dialoguebased sentence embeddings focus on context-aware rather than context-free semantic meanings , which may not be suitable to be evaluated through the context-free benchmarks . Since previous dialoguebased works have not set up a uniform benchmark , we construct two evaluation datasets for each dialogue corpus . A total of 18,964 retrieval samples and 4,000 sentence pairs are annotated by seven native speakers through the crowd-sourcing platform1 . The evaluation results indicate that DialogueCSE significantly outperforms the baselines on the three datasets in terms of both MAP and Spearman 's correlation metrics , demonstrating its effectiveness . Further quantitative experiments show that Dia-logueCSE achieves better performance when leveraging more dialogue context and remains robust when less training data is provided . To sum up , our contributions are threefold : • We propose DialogueCSE , a dialogue-based contrastive learning approach with MGE mechanism for learning sentence embeddings from dialogues . As far as we know , this is the first attempt to apply contrastive learning in this area . • We construct the dialogue-based sentence embedding evaluation benchmarks for three dialogue corpus . All of the datasets will be released to facilitate the follow-up researches . • Extensive experiments show that Dia-logueCSE significantly outperforms the baselines , establishing the state-of-the-art results . 2 Related Work In this work , we propose DialogueCSE , a dialoguebased contrastive learning approach to learn sentence embeddings from dialogues . We also propose uniform evaluation benchmarks for evaluating the quality of the dialogue-based sentence embeddings . Evaluation results show that DialogueCSE achieves the best result over the baselines while adding no additional parameters . In the next step , we will study how to introduce more interaction information to learn the sentence embeddings and try to incorporate the contrast learning method into the pre-training stage .
Existing methods to learn representations from dialogues have a similarity-measurement gap between training and evaluation time and do not exploit the multi-turn structure of data.
They propose a dialogue-based contrastive learning approach to learn sentence embeddings from dialogues by modelling semantic matching relationships between the context and response implicitly.
The proposed approach outperforms baseline methods on two newly introduced tasks coupled with three multi-turn dialogue datasets in terms of MAP and Spearman's correlation measures.
P02-1051
Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere , and because many are domain specific , not to be found in bilingual dictionaries . We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources . We report on the application and evaluation of this algorithm in translating Arabic named entities to English . We also compare our results with the results obtained from human translations and a commercial system for the same task . Named entity phrases are being introduced in news stories on a daily basis in the form of personal names , organizations , locations , temporal phrases , and monetary expressions . While the identification of named entities in text has received significant attention ( e.g. , Mikheev et al . ( 1999 ) and Bikel et al . ( 1999 ) ) , translation of named entities has not . This translation problem is especially challenging because new phrases can appear from nowhere , and because many named-entities are domain specific , not to be found in bilingual dictionaries . A system that specializes in translating named entities such as the one we describe here would be an important tool for many NLP applications . Statisti-cal machine translation systems can use such a system as a component to handle phrase translation in order to improve overall translation quality . Cross-Lingual Information Retrieval ( CLIR ) systems could identify relevant documents based on translations of named entity phrases provided by such a system . Question Answering ( QA ) systems could benefit substantially from such a tool since the answer to many factoid questions involve named entities ( e.g. , answers to who questions usually involve Persons / Organizations , where questions involve Locations , and when questions involve Temporal Expressions ) . In this paper , we describe a system for Arabic-English named entity translation , though the technique is applicable to any language pair and does not require especially difficult-to-obtain resources . The rest of this paper is organized as follows . In Section 2 , we give an overview of our approach . In Section 3 , we describe how translation candidates are generated . In Section 4 , we show how monolingual clues are used to help re-rank the translation candidates list . In Section 5 , we describe how the candidates list can be extended using contextual information . We conclude this paper with the evaluation results of our translation algorithm on a test set . We also compare our system with human translators and a commercial system . We have presented a named entity translation algorithm that performs at near human translation accuracy when translating Arabic named entities to English . The algorithm uses very limited amount of hard-to-obtain bilingual resources and should be easily adaptable to other languages . We would like to apply to other languages such as Chinese and Japanese and to investigate whether the current algorithm would perform as well or whether new algorithms might be needed . Currently , our translation algorithm does not use any dictionary of named entities and they are translated on the fly . Translating a common name incorrectly has a significant effect on the translation accuracy . We would like to experiment with adding a small named entity translation dictionary for common names and see if this might improve the overall translation accuracy .
Translating named entities is challenging since they can appear from nowhere, and cannot be found in bilingual dictionaries because they are domain specific.
They propose an algorithm for Arabic-English named entity translation which uses easily obtainable monolingual and bilingual resources and a limited amount of hard-to-obtain bilingual resources.
The proposed algorithm is compared with human translators and a commercial system and it performs at near human translation.
E06-1014
Probabilistic Latent Semantic Analysis ( PLSA ) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis ( LSA ) . However , the parameters of a PLSA model are trained using the Expectation Maximization ( EM ) algorithm , and as a result , the trained model is dependent on the initialization values so that performance can be highly variable . In this paper we present a method for using LSA analysis to initialize a PLSA model . We also investigated the performance of our method for the tasks of text segmentation and retrieval on personal-size corpora , and present results demonstrating the efficacy of our proposed approach . In modeling a collection of documents for information access applications , the documents are often represented as a " bag of words " , i.e. , as term vectors composed of the terms and corresponding counts for each document . The term vectors for a document collection can be organized into a term by document co-occurrence matrix . When directly using these representations , synonyms and polysemous terms , that is , terms with multiple senses or meanings , are not handled well . Methods for smoothing the term distributions through the use of latent classes have been shown to improve the performance of a number of information access tasks , including retrieval over smaller collections ( Deerwester et al . , 1990 ) , text segmentation ( Brants et al . , 2002 ) , and text classification ( Wu and Gunopulos , 2002 ) . The Probabilistic Latent Semantic Analysis model ( PLSA ) ( Hofmann , 1999 ) provides a probabilistic framework that attempts to capture polysemy and synonymy in text for applications such as retrieval and segmentation . It uses a mixture decomposition to model the co-occurrence data , and the probabilities of words and documents are obtained by a convex combination of the aspects . The mixture approximation has a well defined probability distribution and the factors have a clear probabilistic meaning in terms of the mixture component distributions . The PLSA model computes the relevant probability distributions by selecting the model parameter values that maximize the probability of the observed data , i.e. , the likelihood function . The standard method for maximum likelihood estimation is the Expectation Maximization ( EM ) algorithm . For a given initialization , the likelihood function increases with EM iterations until a local maximum is reached , rather than a global maximum , so that the quality of the solution depends on the initialization of the model . Additionally , the likelihood values across different initializations are not comparable , as we will show . Thus , the likelihood function computed over the training data can not be used as a predictor of model performance across different models . Rather than trying to predict the best performing model from a set of models , in this paper we focus on finding a good way to initialize the PLSA model . We will present a framework for using Latent Semantic Analysis ( LSA ) ( Deerwester et al . , 1990 ) to better initialize the parameters of a corresponding PLSA model . The EM algorithm is then used to further refine the initial estimate . This combination of LSA and PLSA leverages the advantages of both . This paper is organized as follows : in section 2 , we review related work in the area . In section 3 , we summarize related work on LSA and its probabilistic interpretation . In section 4 we review the PLSA model and in section 5 we present our method for initializing a PLSA model using LSA model parameters . In section 6 , we evaluate the performance of our framework on a text segmentation task and several smaller information retrieval tasks . And in section 7 , we summarize our results and give directions for future work . We have presented LSA-PLSA , an approach for improving the performance of PLSA by leveraging the best features of PLSA and LSA . Our approach uses LSA to initialize a PLSA model , allowing for arbitrary weighting schemes to be incorporated into a PLSA model while leveraging the optimization used to improve the estimate of the PLSA parameters . We have evaluated the proposed framework on two tasks : personalsize information retrieval and text segmentation . The LSA-PLSA model outperformed PLSA on all tasks . And in all cases , combining PLSA-based models outperformed a single model . The best performance was obtained with combined models when one of the models was the LSA-PLSA model . When combining multiple PLSA models , the use of LSA-PLSA in combination with either two PLSA models or one PLSA and one LSA model improved performance while reducing the running time over the combination of four or more PLSA models as used by others . Future areas of investigation include quantifying the expected performance of the LSAinitialized PLSA model by comparing performance to that of the empirically best performing model and examining whether tempered EM could further improve performance .
EM algorithm-baed Probabilistic latent semantic analysis models provide high variance in performance and models with different initializations are not comparable.
They propose to use Latent Semantic Analysis to initialize probabilistic latent semantic analysis models, EM algorithm is further used to refine the initial estimate.
They show that the model initialized in the proposed method always outperforms existing methods.
2021.naacl-main.34
We rely on arguments in our daily lives to deliver our opinions and base them on evidence , making them more convincing in turn . However , finding and formulating arguments can be challenging . In this work , we present the Arg-CTRL-a language model for argument generation that can be controlled to generate sentence-level arguments for a given topic , stance , and aspect . We define argument aspect detection as a necessary method to allow this fine-granular control and crowdsource a dataset with 5,032 arguments annotated with aspects . Our evaluation shows that the Arg-CTRL is able to generate high-quality , aspectspecific arguments , applicable to automatic counter-argument generation . We publish the model weights and all datasets and code to train the Arg-CTRL . 1 Nuclear reactors produce radioactive waste ... Language models ( Bengio et al . , 2003 ) allow to generate text through learned distributions of a language and have been applied to a variety of areas like machine translation ( Bahdanau et al . , 2015 ) , summarization ( Paulus et al . , 2018 ) , or dialogue systems ( Wen et al . , 2017 ) . A rather new field for these models is the task of producing text with argumentative content ( Wang and Ling , 2016 ) . We believe this technology can support humans in the challenging task of finding and formulating arguments . A politician might use this to prepare for a debate with a political opponent or for a press conference . It may be used to support students in writing argumentative essays or to enrich one-sided discussions with counter-arguments . In contrast to retrieval methods , generation allows to combine and stylistically adapt text ( e.g. arguments ) based on a given input ( usually the beginning of a sentence ) . Current argument generation models , however , produce lengthy texts and allow the user little control over the aspect the argument should address ( Hua et al . , 2019 ; Hua and Wang , 2018 ) . We show that argument generation can be enhanced by allowing for a fine-grained control and limiting the argument to a single but concise sentence . Controllable language models like the CTRL ( Keskar et al . , 2019 ) allow to condition the model at training time to certain control codes . At inference , these can be used to direct the model 's output with regard to content or style . We build upon this architecture to control argument generation based solely on a given topic , stance , and argument aspect . For instance , to enforce focus on the aspect of cancer for the topic of nuclear energy , we input a control code " Nuclear Energy CON cancer " that creates a contra argument discussing this aspect , for instance : " Studies show that people living next to nuclear power plants have a higher risk of developing cancer . " . To obtain control codes from training data , we pre-define a set of topics to retrieve documents for and rely on an existing stance detection model to classify whether a sentence argues in favor ( pro ) or against ( con ) the given topic ( Stab et al . , 2018a ) . Regarding argument aspect detection , however , past work has two drawbacks : it either uses simple rule-based extraction of verb-and noun-phrases ( Fujii and Ishikawa , 2006 ) or the definition of aspects is based on target-concepts located within the same sentence ( Gemechu and Reed , 2019 ) . Aspects as we require and define them are not bound to any part-of-speech tag and ( 1 ) hold the core reason upon which the conclusion / evidence is built and ( 2 ) encode the stance towards a general but not necessarily explicitly mentioned topic the argument discusses . For instance : Topic : Nuclear Energy Argument : Running nuclear reactors is costly as it involves long-time disposal of radioactive waste . The evidence of this argument is based upon the two underlined aspects . While these aspects encode a negative stance towards the topic of " Nuclear Energy " , the topic itself is not mentioned explicitly in the argument . Our final controlled argument generation pipeline ( see Figure 1 ) works as follows : ( 1 ) We gather several million documents for eight different topics from two large data sources . All sentences are classified into pro- , con- , and non-arguments . We detect aspects of all arguments with a model trained on a novel dataset and concatenate arguments with the same topic , stance , and aspect into training documents . ( 2 ) We use the collected classified data to condition the Arg-CTRL on the topics , stances , and aspects of all gathered arguments . ( 3 ) At inference , passing the control code [ Topic ] [ Stance ] [ Aspect ] to the model will generate an argument that follows these commands . Our evaluation shows that the Arg-CTRL is able to produce aspect-specific , high-quality arguments , applicable to automatic counter-argument generation . The contributions are as follows : ( i ) We adapt and fine-tune the CTRL for aspect-controlled neural argument generation . ( ii ) We show that detecting argument aspects and conditioning the generation model on them are necessary steps to control the model 's training process and its perspective while generating . ( iii ) We propose several methods to analyze and evaluate the quality of ( controllable ) argument generation models . ( iv ) We develop a new scheme to annotate argument aspects and release a dataset with 5,032 samples . We apply the concept of controlled neural text generation to the domain of argument generation . Our Arg-CTRL is conditioned on topics , stances , and aspects and can reliably create arguments using these control codes . We show that arguments generated with our approach are genuine and of high argumentative and grammatical quality in general . Moreover , we show that our approach can be used to generate counter-arguments in a transparent and interpretable way . We fine-tune the Arg-CTRL on two different data sources and find that using mixed data from Common-Crawl results in a higher quality of generated arguments than using user discussions from Reddit-Comments . Further , we define argument aspect detection for controlled argument generation and introduce a novel annotation scheme to crowdsource argument aspect annotations , resulting in a high-quality dataset . We publish the model weights , data , and all code necessary to train the Arg-CTRL .
Argumentative content generation can support humans but current models produce lengthy texts and offer a little controllability on aspects of the argument for users.
They train a controllable language model on a corpus annotated with control codes provided by a stance detection model and introduce a dataset for evaluation.
The proposed model can generate arguments that are genuine and argumentative and grammatically correct and also counter-arguments in a transparent and interpretable way.
N16-1181
We describe a question answering model that applies to both images and structured knowledge bases . The model uses natural language strings to automatically assemble neural networks from a collection of composable modules . Parameters for these modules are learned jointly with network-assembly parameters via reinforcement learning , with only ( world , question , answer ) triples as supervision . Our approach , which we term a dynamic neural module network , achieves state-of-theart results on benchmark datasets in both visual and structured domains . This paper presents a compositional , attentional model for answering questions about a variety of world representations , including images and structured knowledge bases . The model translates from questions to dynamically assembled neural networks , then applies these networks to world representations ( images or knowledge bases ) to produce answers . We take advantage of two largely independent lines of work : on one hand , an extensive literature on answering questions by mapping from strings to logical representations of meaning ; on the other , a series of recent successes in deep neural models for image recognition and captioning . By constructing neural networks instead of logical forms , our model leverages the best aspects of both linguistic compositionality and continuous representations . Our model has two components , trained jointly : first , a collection of neural " modules " that can be freely composed ( Figure 1b ) ; second , a network layout predictor that assembles modules into complete deep networks tailored to each question ( Figure 1a ) . Previous work has used manually-specified modular structures for visual learning ( Andreas et al . , 2016 ) . Here we : • learn a network structure predictor jointly with module parameters themselves • extend visual primitives from previous work to reason over structured world representations Training data consists of ( world , question , answer ) triples : our approach requires no supervision of network layouts . We achieve state-of-the-art performance on two markedly different question answering tasks : one with questions about natural images , and another with more compositional questions about United States geography.1 We have introduced a new model , the dynamic neural module network , for answering queries about both structured and unstructured sources of information . Given only ( question , world , answer ) triples as training data , the model learns to assemble neural networks on the fly from an inventory of neural models , and simultaneously learns weights for these modules so that they can be composed into novel structures . Our approach achieves state-of-the-art results on two tasks . We believe that the success of this work derives from two factors : Continuous representations improve the expressiveness and learnability of semantic parsers : by replacing discrete predicates with differentiable neural network fragments , we bypass the challenging combinatorial optimization problem associated with induction of a semantic lexicon . In structured world representations , neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema . Perhaps more importantly , we can extend compositional questionanswering machinery to complex , continuous world representations like images . Semantic structure prediction improves generalization in deep networks : by replacing a fixed network topology with a dynamic one , we can tailor the computation performed to each problem instance , using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters . In practice , this results in considerable gains in speed and sample efficiency , even with very little training data . These observations are not limited to the question answering domain , and we expect that they can be applied similarly to tasks like instruction following , game playing , and language generation .
Existing works on visual learning use manually-specified modular structures.
They propose a question-answering model trained jointly to translate from questions to dynamically assembled neural networks then produce answers with using images or knowledge bases.
The proposed model achieves state-of-the-arts on visual and structured domain datasets showing that coutinous representations improve the expressiveness and learnability of semantic parsers.
2020.aacl-main.88
Large pre-trained language models reach stateof-the-art results on many different NLP tasks when fine-tuned individually ; They also come with a significant memory and computational requirements , calling for methods to reduce model sizes ( green AI ) . We propose a twostage model-compression method to reduce a model 's inference time cost . We first decompose the matrices in the model into smaller matrices and then perform feature distillation on the internal representation to recover from the decomposition . This approach has the benefit of reducing the number of parameters while preserving much of the information within the model . We experimented on BERTbase model with the GLUE benchmark dataset and show that we can reduce the number of parameters by a factor of 0.4x , and increase inference speed by a factor of 1.45x , while maintaining a minimal loss in metric performance . Deep learning models have been demonstrated to achieve state-of-the-art results , but require large parameter storage and computation . It 's estimated that training a Transformer model with a neural architecture search has a CO 2 emissions equivalent to nearly five times the lifetime emissions of the average U.S. car , including its manufacturing ( Strubell et al . , 2019 ) . Alongside the increase in deep learning models complexity , in the NLP domain , there has been a shift in the NLP modeling paradigm from training a randomly initialized model to fine-tuning a large and computational heavy pre-trained language model ( Howard and Ruder , 2018 ; Peters et al . , 2018 ; Devlin et al . , 2018 ; Radford , 2018 ; Radford et al . , 2019 ; Dai et al . , 2019 ; Yang et al . , 2019 ; Lample and Conneau , 2019 ; Liu et al . , 2019b ; Raffel et al . , 2019 ; Lan et al . , 2019 ; Lewis et al . , 2019 ) . While re-using pre-trained models offsets the training costs , inference time costs of the finetuned models remain significant , and are showstoppers in many applications . The main challenge with pre-trained models is how can we reduce their size while saving the information contained within them . Recent work , approached this by keeping some of the layers while removing others ( Sanh et al . , 2019 ; Sun et al . , 2019 ; Xu et al . , 2020 ) . A main drawback of such approach is in its coarse-grained nature : removing entire layers might discard important information contained within the model , and working at the granularity of layers makes the trade-off between compression and accuracy of a model hard to control . Motivated by this , in this work we suggest a more finegrained approach which decomposes each matrix to two smaller matrices and then perform feature distillation on the internal representation to recover from the decomposition . This approach has the benefit of preserving much of the information while reducing the number of parameters . Alongside the advantage of preserving the information within each layer , there is also a memory flexibility advantage compared to removing entire layers ; As a result of decomposing each matrix to two smaller matrices , we can store each of the two matrices in two different memory blocks . This has the benefit of distributing the model matrices in many small memory blocks , which is useful when working in shared CPU-based environments . We evaluated our approach on the General Language Understanding Evaluation ( GLUE ) benchmark dataset ( Wang et al . , 2018 ) and show that our approach is superior or competitive in the different GLUE tasks to previous approaches which remove entire layers . Furthermore , we study the effects of different base models to decompose and show the superiority of decomposing a fine-tuned model compared to a pre-trained model or a ran-domly initialized model . Finally , we demonstrate the trade-off between compression and accuracy of a model . We presented a way to compress pre-trained large language models fine-tuned for specific tasks , while preserving much of the information contained within them , by using matrix decomposition to two small matrices . For future work it might be interesting to combine this approach with another approach such as pruning or quantization to achieve smaller models .
Existing coarse-grained approaches for reducing the inference time of pretraining models remove layers, posing a trade-off between compression and the accuracy of a model.
They propose a model-compression method which decompresses the matrix and performs feature distillation on the internal representations to recover from the decomposition.
The proposed method reduces the model size by 0.4x and increases inference speed by 1.45x while keeping the performance degradation minimum on the GLUE benchmark.
D16-1205
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words . Current DSMs , however , represent context words as separate features , thereby loosing important information for word expectations , such as word interrelations . In this paper , we present a DSM that addresses this issue by defining verb contexts as joint syntactic dependencies . We test our representation in a verb similarity task on two datasets , showing that joint contexts achieve performances comparable to single dependencies or even better . Moreover , they are able to overcome the data sparsity problem of joint feature spaces , in spite of the limited size of our training corpus . Distributional Semantic Models ( DSMs ) rely on the Distributional Hypothesis ( Harris , 1954 ; Sahlgren , 2008 ) , stating that words occurring in similar contexts have similar meanings . On such theoretical grounds , word co-occurrences extracted from corpora are used to build semantic representations in the form of vectors , which have become very popular in the NLP community . Proximity between word vectors is taken as an index of meaning similarity , and vector cosine is generally adopted to measure such proximity , even though other measures have been proposed ( Weeds et al . , 2004 ; Santus et al . , 2016 ) . Most of DSMs adopt a bag-of-words approach , that is they turn a text span ( i.e. , a word window or a parsed sentence ) into a set of words and they register separately the co-occurrence of each word with a given target . The problem with this approach is that valuable information concerning word interrelations in a context gets lost , because words co-occurring with a target are treated as independent features . This is why works like Ruiz-Casado et al . ( 2005 ) , Agirre et al . ( 2009 ) and Melamud et al . ( 2014 ) proposed to introduce richer contexts in distributional spaces , by using entire word windows as features . These richer contexts proved to be helpful to semantically represent verbs , which are characterized by highly context-sensitive meanings , and complex argument structures . In fact , two verbs may share independent words as features despite being very dissimilar from the semantic point of view . For instance kill and heal share the same object nouns in The doctor healed the patient and the The poison killed the patient , but are highly different if we consider their joint dependencies as a single context . Nonetheless , richer contexts like these suffer from data sparsity , therefore requiring either larger corpora or complex smoothing processes . In this paper , we propose a syntactically savvy notion of joint contexts . To test our representation , we implement several DSMs and we evaluate them in a verb similarity task on two datasets . The results show that , even using a relatively small corpus , our syntactic joint contexts are robust with respect to data sparseness and perform similarly or better than single dependencies in a wider range of parameter settings . The paper is organized as follows . In Section 2 , we provide psycholinguistic and computational background for this research , describing recent models based on word windows . In Section 3 , we describe our reinterpretation of joint contexts with syntactic dependencies . Evaluation settings and results are presented in Section 4 . In this paper , we have presented our proposal for a new type of vector representation based on joint features , which should emulate more closely the general knowledge about event participants that seems to be the organizing principle of our mental lexicon . A core issue of previous studies was the data sparseness challenge , and we coped with it by means of a more abstract , syntactic notion of joint context . The models using joint dependencies were able at least to perform comparably to traditional , dependency-based DSMs . In our experiments , they even achieved the best correlation scores across several parameter settings , especially after the application of SVD . We want to emphasize that previous works such as Agirre et al . ( 2009 ) already showed that large word windows can have a higher discriminative power than indipendent features , but they did it by using a huge training corpus . In our study , joint context-based representations derived from a small corpus such as RCV1 are already showing competitive performances . This result strengthens our belief that dependencies are a possible solution for the data sparsity problem of joint feature spaces . We also believe that verb similarity might not be the best task to show the usefulness of joint contexts for semantic representation . The main goal of the present paper was to show that joint contexts are a viable option to exploit the full potential of distributional information . Our successful tests on verb similarity prove that syntactic joint contexts do not suffer of data sparsity and are also able to beat other types of representations based on independent word features . Moreover , syntactic joint contexts are much simpler and more competitive with respect to window-based ones . The good performance in the verb similarity task motivates us to further test syntactic joint contexts on a larger range of tasks , such as word sense disambiguation , textual entailment and classification of semantic relations , so that they can unleash their full potential . Moreover , our proposal opens interesting perspectives for computational psycholinguistics , especially for modeling those semantic phenomena that are inherently related to the activation of event knowledge ( e.g. thematic fit ) .
Providing richer contexts to Distributional Semantic Models improves by taking word interrelations into account but it would suffer from data sparsity.
They propose a Distributional Semantic Model that incorporates verb contexts as joint syntactic dependencies so that it emulates knowledge about event participants.
They show that representations obtained by the proposed model outperform more complex models on two verb similarity datasets with a limited training corpus.
2021.acl-long.57
In this paper , we propose Inverse Adversarial Training ( IAT ) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better . In contrast to standard adversarial training algorithms , IAT encourages the model to be sensitive to the perturbation in the dialogue history and therefore learning from perturbations . By giving higher rewards for responses whose output probability reduces more significantly when dialogue history is perturbed , the model is encouraged to generate more diverse and consistent responses . By penalizing the model when generating the same response given perturbed dialogue history , the model is forced to better capture dialogue history and generate more informative responses . Experimental results on two benchmark datasets show that our approach can better model dialogue history and generate more diverse and consistent responses . In addition , we point out a problem of the widely used maximum mutual information ( MMI ) based methods for improving the diversity of dialogue response generation models and demonstrate it empirically . In recent years , neural end-to-end dialogue response generation models ( Sordoni et al . , 2015 ; Serban et al . , 2016 ; Bordes et al . , 2016 ) has gained increasing popularity with the recent advancements of neural sequence-to-sequence ( seq2seq ) learning models ( Sutskever et al . , 2014 ; Vaswani et al . , 2017 ) . While neural dialogue models can generate seemingly fluent responses , due to the over-simplified maximum likelihood estimation ( MLE ) training objective and the high frequency of generic responses in training corpora , they tend to produce dull and generic responses such as " I do n't know " much more often than that humans generally do ( Li et al . , 2015 ) , which makes dialogue agents less engaging and ineffective . In addition , recent research on whether neural dialogue systems use dialogue history effectively ( Sankar et al . , 2019 ) shows that most neural dialogue agents fail to take the dialogue history into account when generating responses . This problem makes neural dialogue systems tend to generate responses irrelevant to the current topic of the conversation and are not consistent with the dialogue history . This problem may also intensify the generic response problem , as dull responses are generally off-topic and irrelevant to the dialogue history . To address the above issues , in this paper , we propose Inverse Adversarial Training ( IAT ) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better , thus generating diverse and informative responses . Conventional adversarial training methods generally generate label-preserving adversarial inputs with carefully designed methods and train the model to generate the same output to enhance the model 's robustness . In contrast , our approach perturbs in input dialogue history such that a good dialogue model should not generate the same output if the output is non-generic and relevant to the dialogue history . We name our proposed method as inverse adversarial training because it is related to conventional adversarial training methods which aim to improve the model 's adversarial robustness but our proposed objective is motivated in the opposite direction . Note that our work is not directly related to TextGANs as well as their applications on dialogue response generation . Specifically , the proposed inverse adversarial training assigns higher rewards to generated responses or ground-truth responses if their likeli-hood decreases more when the dialogue history is perturbed , and penalize the model when it generates responses whose likelihood is almost unchanged given either original or perturbed dialogue history as input . This encourages the model to generate more relevant and informative responses and capture dialogue history better . The proposed IAT algorithm can be used in both supervised and self-supervised fashion ( with / without reference response ) , which can be viewed as a form of reward-augmented maximum likelihood ( RAML ) method ( Norouzi et al . , 2016 ) that improves the original MLE objective or a rewarding scheme for RL-based text generation algorithms . The inverse adversarial learning framework is also conceptually related to self-adversarial learning ( Zhou et al . , 2020 ) where the the comparison is made between different checkpoints of the same model to provide reward for RL training of the NLG model . In addition , we identify a limitation of the widely-used maximum mutual information ( MMI ) based methods for improving the diversity of dialogue response generation models . This will be discussed in detail in section 2.1 and empirically demonstrated in section 4.2 . We conduct experiments on two dialogue datasets , OpenSubtitiles and DailyDialog , to demonstrate the effectiveness of the proposed approach . Experimental results show IAT helps neural dialogue systems model dialogue history better and generate more diverse and informative responses . In this work , we introduce inverse adversarial training ( IAT ) algorithm that is able to simultaneously reduce the dull response problem and help neural dialogue systems model dialogue history better . IAT measures the relevance and consistency of responses by the difference of their likelihood conditioning on either original and perturbed dialogue history . In this way , it is able to prevent the dialogue system from preferring generic responses , even they are often of high frequency in the training corpora . Our method also encourages the dialogue agent to model dialogue history better by penalizing the model when generating the same responses given perturbed dialogue history . Experimental results on two benchmark datasets show that the proposed inverse adversarial training algorithm helps dialogue models capture dialogue history better and generate more diverse and consistent responses . We also identify a limitation of the widely-used MMI based methods for improving the diversity of dialogue response generation models and empirically demonstrate the existence of this problem through our experimetns .
Neural end-to-end dialogue models generate fluent yet dull and generic responses without taking dialogue histories into account due to the over-simplified maximum likelihood estimation objective.
They propose an algorithm which encourages to be sensitive to perturbations in dialogue histories and generates more diverse and consistent responses by applying penalization.
The proposed approach can model dialogue history better and generate more diverse and consistent responses on OpenSubtitles and DailyDialog.
D09-1065
demonstrated that corpus-extracted models of semantic knowledge can predict neural activation patterns recorded using fMRI . This could be a very powerful technique for evaluating conceptual models extracted from corpora ; however , fMRI is expensive and imposes strong constraints on data collection . Following on experiments that demonstrated that EEG activation patterns encode enough information to discriminate broad conceptual categories , we show that corpus-based semantic representations can predict EEG activation patterns with significant accuracy , and we evaluate the relative performance of different corpus-models on this task . Models of semantic relatedness induced from corpus data have proven effective in a number of empirical tasks ( Sahlgren , 2006 ) and there is increasing interest in whether distributional information extracted from corpora correlates with aspects of speakers ' semantic knowledge : see Lund and Burgess ( 1996 ) , Landauer and Dumais ( 1997 ) , Almuhareb ( 2006 ) , Padó and Lapata ( 2007 ) , Schulte i m Walde ( 2008 ) , among many others . For this purpose , corpus models have been tested on datasets that are based on semantic judgements ( metalinguistic or meta-cognitive intuitions about synonymy , semantic distance , category-membership ) or behavioural experiments ( semantic priming , property generation , free association ) . While all these data are valuable , they are indirect reflections of semantic knowledge , and when the predictions they make diverge from those of corpora , interpretation is problematic : is the corpus model missing essential aspects of semantics , or are non-semantic factors biasing the data elicited from informants ? Reading semantic processes and representations directly from the brain would be an ideal way to get around these limitations . Until recently , analysis of linguistic quantities using neural data collected with EEG ( measurement at the scalp of voltages induced by neuronal firing ) or fMRI ( measurement of changes of oxygen concentrations in the brain tied to cognitive processes ) had neither the advantages of corpora ( scale ) nor of informants ( finer grained judgements ) . However , some clear patterns of differential activity have been found for broad semantic classes . Viewing images of natural ( typically animals and plants ) and non-natural ( typically artefacts like tools or vehicles ) objects elicits different loci of activity in fMRI ( Martin and Chao , 2001 ) and EEG ( Kiefer , 2001 ) , that persist across participants . Differences have also been found in response to auditorily or visually presented words of different lexical classes , such as abstract / concrete , and verb / noun ( Pulvermüller , 2002 ) . But interpretation of such group results remains somewhat difficult , as they may be consistent with more than one distinction : the natural / artefactual division just mentioned , may rather be between living / nonliving entities , dynamic / static entities , or be based on embodied experience ( e.g. manipulable or not ) . More recently , however , machine learning and other numerical techniques have been successfully applied to extract semantic information from neural data in a more discriminative fashion , down to the level of individual concepts . The work presented here builds on two strands of previous work : Murphy et al . ( 2008 ) use EEG data to perform semantic categorisation on single stimuli ; and Mitchell et al . ( 2008 ) introduce an fMRIbased method that detects word level distinctions by learning associations between features of neural activity and semantic features derived from a corpus . We combine these innovations by introducing a method that extracts featural representations from the EEG signal , and uses corpusbased models to predict word level distinctions in patterns of EEG activity . The proposed method achieves a performance level significantly above chance ( also when distinguishing between concepts from the same semantic category , e.g. , dog and cat ) , and approaching that achieved with fMRI . The paper proceeds as follows . The next section describes a simple behavioural experiment where Italian-speaking participants had to name photographic images of mammals and tools while their EEG activity was being recorded , and continues to detail how the rich and multidimensional signals collected were reduced to a small set of optimally informative features using a new method . Section 3 describes a series of corpus-based semantic models derived from both a raw-text web corpus , and from various parsings of a conventional corpus . In Section 4 we describe the training of a series of linear models , that each learn the associations between a set of corpus semantic features and an individual EEG activity feature . By combining these models it is possible to predict the EEG activity pattern for a single unseen word , and compare this to the observed pattern for the corresponding concept . Results ( Section 5 ) show that these predictions succeed at a level significantly above chance , both for coarser distinctions between words in different superordinate categories ( e.g. , differentiating between drill and gorilla ) , and , at least for the model based on the larger web corpus , for those within the same category ( e.g. , drill vs spanner , koala vs gorilla ) . Our results show that corpus-extracted conceptual models can be used to distinguish between the EEG activation levels associated with conceptual categories to a degree that is significantly above chance . Though category specific patterns are detectable in the EEG signal alone ( as illustrated by the PCA analysis in figure 3 ) , on that basis we can not be sure that semantics is being detected . Some other property of the stimuli that co-varies with the semantic classes of interest could be responsible , such as visual complexity , conceptual familiarity , lexical frequency , or phonological form . Only by cross-training with individual corpus features and showing that these hold a predictive relationship to neural activity have we been able to establish that EEG patterns encode semantics . Present evidence indicates that fMRI may provide richer data for training such models than EEG ( Mitchell and colleagues obtain an average accuracy of 77 % , and 65 % for the within category setting ) . However , fMRI has several clear disadvantages as a tool for language researchers . First of all , the fine spatial resolution it provides ( down to 2 - 3 mm ) , while of great interest to neuroscientists , is not in itself linguistically informative . Its coarse temporal resolution ( of the order of several seconds ) , makes it ill-suited to analysing on-line linguistic processes . EEG on the other hand , despite its low spatial resolution ( several centimetres ) , gives millisecond-level temporal resolution , Finally , there is no prospect of fMRI being miniaturised , while wearable EEG systems are already becoming commercially available , making experimentation in more ecological settings a possibility ( e.g. , playing with a child , meeting at a desk , walking around ) . In short , while EEG can be used to carry out systematic investigations of categorical distinctions , doing so with fMRI would be prohibitively expensive . Present results indicate that distinctions between categories are easier than distinctions between category elements ; and that selecting the conceptual features by hand gives better results than discovering them automatically . Both of these results however may be due to limitations of the current method . One limitation is that we have been using the same set of features for all concepts , which is likely to blur the distinctions between members of a category more than those between categories . A second limitation of our present methodology is that it is constrained to use very small numbers of semantic features , which limits its applicability . For example it is hard to conceive of a small set of verbs , or other parts-ofspeech , whose co-occurrence patterns could successfully characterise the full range of meaning found in the human lexicon . Even the more economical corpus-extracted conceptual models tend to run in the hundreds of features ( Almuhareb , 2006 ) . We are currently working on variations in the method that will address these shortcomings . The web-based model with manually picked features outperformed all la Repubblica-based models . However , the results attained with repubblica-window-svd are encouraging , especially considering that we are reporting results for an EEG feature configuration optimised for the web data ( see footnote 5 ) , and that la Repubblica is several orders of magnitude smaller than the web . That data sparseness might be the main issue with la Repubblica models is suggested by the fact that repubblica-window-svd is the least sparse of them , since it does not filter data by position or dependency path , and compresses information from many verbs via SVD . In future research , we plan to extract richer models from larger corpora . And as the discriminative accuracy of crosstraining techniques improves , further insights into the relative validity of corpus representations will be attainable . One research aim is to see if individual corpus semantic properties are encoded neurally , so providing strong evidence for a particular model . These techniques may also prove more objective and reliable in evaluating representations of abstract concepts , for which it is more difficult to collect reliable judgements from informants .
The expensive cost of using fMRI hinders studies on the relationship between corpus-extracted models of semantic knowledge and neural activation patterns.
They propose to use EEG activation patterns instead of fMRI to reduce the cost.
They show that using EEG signals with corpus-based models, they can predict word level distinctions significantly above chance.
D09-1085
This paper introduces a new parser evaluation corpus containing around 700 sentences annotated with unbounded dependencies , from seven different grammatical constructions . We run a series of off-theshelf parsers on the corpus to evaluate how well state-of-the-art parsing technology is able to recover such dependencies . The overall results range from 25 % accuracy to 59 % . These low scores call into question the validity of using Parseval scores as a general measure of parsing capability . We discuss the importance of parsers being able to recover unbounded dependencies , given their relatively low frequency in corpora . We also analyse the various errors made on these constructions by one of the more successful parsers . Statistical parsers are now obtaining Parseval scores of over 90 % on the WSJ section of the Penn Treebank ( Bod , 2003 ; Petrov and Klein , 2007 ; Huang , 2008 ; Carreras et al . , 2008 ) . McClosky et al . ( 2006 ) report an F-score of 92.1 % using selftraining applied to the reranker of Charniak and Johnson ( 2005 ) . Such scores , in isolation , may suggest that statistical parsing is close to becoming a solved problem , and that further incremental improvements will lead to parsers becoming as accurate as POS taggers . A single score in isolation can be misleading , however , for a number of reasons . First , the single score is an aggregate over a highly skewed distribution of all constituent types ; evaluations which look at individual constituent or dependency types show that the accuracies on some , semantically important , constructions , such as coordination and PP-attachment , are much lower ( Collins , 1999 ) . Second , it is well known that the accuracy of parsers trained on the Penn Treebank degrades when they are applied to different genres and domains ( Gildea , 2001 ) . Finally , some researchers have argued that the Parseval metrics ( Black et al . , 1991 ) are too forgiving with respect to certain errors and that an evaluation based on syntactic dependencies , for which scores are typically lower , is a better test of parser performance ( Lin , 1995 ; Carroll et al . , 1998 ) . In this paper we focus on the first issue , that the performance of parsers on some constructions is much lower than the overall score . The constructions that we focus on are various unbounded dependency constructions . These are interesting for parser evaluation for the following reasons : one , they provide a strong test of the parser 's knowledge of the grammar of the language , since many instances of unbounded dependencies are difficult to recover using shallow techniques in which the grammar is only superficially represented ; and two , recovering these dependencies is necessary to completely represent the underlying predicateargument structure of a sentence , useful for applications such as Question Answering and Information Extraction . To give an example of the sorts of constructions we are considering , and the ( in)ability of parsers to recover the corresponding unbounded dependencies , none of the parsers that we have tested were able to recover the dependencies shown in bold from the following sentences : We have also developed techniques for recognizing and locating underground nuclear tests through the waves in the ground which they generate . By Monday , they hope to have a sheaf of documents both sides can trust . In this paper we have demonstrated that current parsing technology is poor at recovering some of the unbounded dependencies which are crucial for fully representing the underlying predicateargument structure of a sentence . We have also argued that correct recovery of such dependencies will become more important as parsing technology improves , despite the relatively low frequency of occurrence of the corresponding grammatical constructions . We also see this more focused parser evaluation methodology -in this case construction-focused -as a way of improving parsing technology , as an alternative to the exclusive focus on incremental improvements in overall accuracy measures such as Parseval .
While recent statistical parsers perform well on Penn Treebank, the results can be misleading due to several reasons originating from evaluation and datasets.
They propose a new corpus with unbounded dependencies from difference grammatical constructions.
Their evaluation of existing parsers with the proposed corpus shows lower scores than reported in previous works indicating a poor ability to recover unbounded dependencies.
P12-1013
Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years . In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates ( termed entailment graphs ) . We first identify that entailment graphs exhibit a " tree-like " property and are very similar to a novel type of graph termed forest-reducible graph . We utilize this property to develop an iterative efficient approximation algorithm for learning the graph edges , where each iteration takes linear time . We compare our approximation algorithm to a recently-proposed state-of-the-art exact algorithm and show that it is more efficient and scalable both theoretically and empirically , while its output quality is close to that given by the optimal solution of the exact algorithm . Performing textual inference is in the heart of many semantic inference applications such as Question Answering ( QA ) and Information Extraction ( IE ) . A prominent generic paradigm for textual inference is Textual Entailment ( TUE ) ( Dagan et al . , 2009 ) . In TUE , the goal is to recognize , given two text fragments termed text and hypothesis , whether the hypothesis can be inferred from the text . For example , the text " Cyprus was invaded by the Ottoman Empire in 1571 " implies the hypothesis " The Ottomans attacked Cyprus " . Semantic inference applications such as QA and IE crucially rely on entailment rules ( Ravichandran and Hovy , 2002 ; Shinyama and Sekine , 2006 ) or equivalently inference rules , that is , rules that describe a directional inference relation between two fragments of text . An important type of entailment rule specifies the entailment relation between natural language predicates , e.g. , the entailment rule ' X invade Y → X attack Y ' can be helpful in inferring the aforementioned hypothesis . Consequently , substantial effort has been made to learn such rules ( Lin and Pantel , 2001 ; Sekine , 2005 ; Szpektor and Dagan , 2008 ; Schoenmackers et al . , 2010 ) . Textual entailment is inherently a transitive relation , that is , the rules ' x → y ' and ' y → z ' imply the rule ' x → z ' . Accordingly , Berant et al . ( 2010 ) formulated the problem of learning entailment rules as a graph optimization problem , where nodes are predicates and edges represent entailment rules that respect transitivity . Since finding the optimal set of edges respecting transitivity is NP-hard , they employed Integer Linear Programming ( ILP ) to find the exact solution . Indeed , they showed that applying global transitivity constraints improves rule learning comparing to methods that ignore graph structure . More recently , Berant et al . ( Berant et al . , 2011 ) introduced a more efficient exact algorithm , which decomposes the graph into connected components and then applies an ILP solver over each component . Despite this progress , finding the exact solution remains NP-hard -the authors themselves report they were unable to solve some graphs of rather moderate size and that the coverage of their method is limited . Thus , scaling their algorithm to data sets with tens of thousands of predicates ( e.g. , the extractions of Fader et al . ( 2011 ) ) is unlikely . In this paper we present a novel method for learning the edges of entailment graphs . Our method computes much more efficiently an approximate solution that is empirically almost as good as the exact solution . To that end , we first ( Section 3 ) conjecture and empirically show that entailment graphs exhibit a " tree-like " property , i.e. , that they can be reduced into a structure similar to a directed forest . Then , we present in Section 4 our iterative approximation algorithm , where in each iteration a node is removed and re-attached back to the graph in a locally-optimal way . Combining this scheme with our conjecture about the graph structure enables a linear algorithm for node re-attachment . Section 5 shows empirically that this algorithm is by orders of magnitude faster than the state-of-the-art exact algorithm , and that though an optimal solution is not guaranteed , the area under the precision-recall curve drops by merely a point . To conclude , the contribution of this paper is twofold : First , we define a novel modeling assumption about the tree-like structure of entailment graphs and demonstrate its validity . Second , we exploit this assumption to develop a polynomial approximation algorithm for learning entailment graphs that can scale to much larger graphs than in the past . Finally , we note that learning entailment graphs bears strong similarities to related tasks such as Taxonomy Induction ( Snow et al . , 2006 ) and Ontology induction ( Poon and Domingos , 2010 ) , and thus our approach may improve scalability in these fields as well . Learning large and accurate resources of entailment rules is essential in many semantic inference applications . Employing transitivity has been shown to improve rule learning , but raises issues of efficiency and scalability . The first contribution of this paper is a novel modeling assumption that entailment graphs are very similar to FRGs , which is analyzed and validated empirically . The main contribution of the paper is an efficient polynomial approximation algorithm for learning entailment rules , which is based on this assumption . We demonstrate empirically that our method is by orders of magnitude faster than the state-of-the-art exact algorithm , but still produces an output that is almost as good as the optimal solution . We suggest our method as an important step towards scalable acquisition of precise entailment resources . In future work , we aim to evaluate TNF on large graphs that are automatically generated from huge corpora . This of course requires substantial efforts of pre-processing and test-set annotation . We also plan to examine the benefit of TNF in learning similar structures , e.g. , taxonomies or ontologies .
Current inefficient algorithms aim to obtain entailment rules for semantic inference hindering the use of large resources.
They propose an efficient polynomial approximation algorithm that exploits their observation, entailment graphs have a "tree-like" property.
Their iterative algorithm runs by orders of magnitude faster than current exact state-of-the-art solutions while keeping close quality.
D15-1054
Sponsored search is at the center of a multibillion dollar market established by search technology . Accurate ad click prediction is a key component for this market to function since the pricing mechanism heavily relies on the estimation of click probabilities . Lexical features derived from the text of both the query and ads play a significant role , complementing features based on historical click information . The purpose of this paper is to explore the use of word embedding techniques to generate effective text features that can capture not only lexical similarity between query and ads but also the latent user intents . We identify several potential weaknesses of the plain application of conventional word embedding methodologies for ad click prediction . These observations motivated us to propose a set of novel joint word embedding methods by leveraging implicit click feedback . We verify the effectiveness of these new word embedding models by adding features derived from the new models to the click prediction system of a commercial search engine . Our evaluation results clearly demonstrate the effectiveness of the proposed methods . To the best of our knowledge this work is the first successful application of word embedding techniques for the task of click prediction in sponsored search . Sponsored search is a multibillion dollar market ( Easley and Kleinberg , 2010 ) that makes most search engine revenue and is one of the most successful ways for advertisers to reach their intended audiences . When search engines deliver results to a user , sponsored advertisement impressions ( ads ) are shown alongside the organic search results ( Figure 1 ) . Typically the advertiser pays the search engine based on the pay-per-click model . In this model the advertiser pays only if the impression that accompanies the search results is clicked . The price is usually set by a generalized second-price ( GSP ) auction ( Edelman et al . , 2005 ) that encourages advertisers to bid truthfully . An advertiser wins if the expected revenue for this advertiser , which is the bid Figure 1 : Sponsored ads when " pizza " was searched at Yahoo ! ( www.yahoo.com ) . price times the expected click probability ( also know as click through rate , or CTR ) , is ranked the highest . The price the advertiser pays , known as cost-per-click ( CPC ) , is the bid price for the second ranked advertiser times the ratio of the expected CTR between the second and first ranked advertisers . From this discussion it should be clear that CTR plays a key role in deciding both the ranking and the pricing of the ads . Therefore it is very important to predict CTR accurately . The state of the art search engine typically uses a machine learning model to predict CTR by exploiting various features that have been found useful in practice . These include historical click performance features such as historical click probability for the query , the ad , the user , and a combination of these ; contextual features such as temporal and geographical information ; and text-based features such as query keywords or ad title and description . Among these , historical click performance features often have the most predictive power for queries , ads and users that have registered many impressions . For queries , ads and users that have not registered many impressions , however , historical CTR may have too high a variance to be useful . Hillard et al . ( 2011 ) observed that the number of impressions and clicks recorded on query-ad pairs have a very long tail : only 61 % of queries has greater than three clicks . They also reported a drastic drop in the accuracy of the click prediction model when fewer historical observations are available . Furthermore , fine-grained historical CTR information takes a huge amount of space , which makes it costly to maintain . On the other hand , text features are always readily available , and thus are particularly useful for those cases for which there is insufficient historical information . Multiple researchers , for example ( Richardson , 2007 ; Cheng and Cantú-Paz , 2010 ) , reported the usage of text features including simple lexical similarity scores between the query and ads , word or phrase overlaps and the number of overlapping words and characters . Such features rely on the assumption that query-ad overlap is correlated with perceived relevance . While this is true to a certain extent , the use of simple lexical similarity can not capture semantic information such as synonyms , entities of the same type and strong relationships between entities ( e.g. CEO-company , brandmodel , part-of ) . Recently a host of studies on word embedding have been conducted ; all map words into a vector space such that semantically relevant words are placed near each other in the space ( Mikolov et al . , 2013a ; Pennington et al . , 2014 ; Baroni et al . , 2014 ) . The use of continuous word vectors has been shown to be helpful for a wide range of NLP tasks by better capturing both syntactic and semantic information than simple lexical features ( Socher et al . , 2012a ) . No previous research on sponsored search has successfully used word embeddings to generate text features . In this paper , we explore the use of word embeddings for click prediction . However , it is clear that conventional word embeddings ( which solely rely on word co-occurrence in a context window ) can only offer limited discriminative power because queries and ad text are typically very short . In addition , conventional word embeddings can not capture user intents , preferences and desires . Wang et al . ( 2013 ) showed that specific frequently occurring lexical patterns , e.g. , x% off , guaranteed return in x days and official site , are effective in triggering users desires , and thus lead to significant differences in CTR . Conventional word embeddings can not capture these phenomena since they do not incorporate the implicit feedback users provide through clicks and non-clicks . These observations naturally lead us to leverage click feedback to infuse users ' intentions and desires into the vector space . The simplest way to harness click feedback is to train conventional word embedding models on a corpus that only includes clicked impressions , where each " sentence " is constructed by mixing the query and ad text . Having trained a word embedding model , we simply take the average of word vectors of the query and ads respectively to obtain sentence ( or paragraph ) vectors , which in turn are used to compute the similarity scores between the query and ads . Our experiments show that this method does improve click prediction performance . However , this method has several potential weaknesses . First , the use of only clicked impressions ignores the large amount of negative signals contained in the non-clicked ad impressions . Second , the use of indirect signals ( word co-occurrences ) can be noisy or even harmful to our ultimate goal ( accurate click prediction ) when it is combined with direct signals ( impressions with click feedback ) . Third , without explicit consideration about the averaging step in the training process of word embedding models , a simple averaging scheme across word vectors may be a suboptimal . We therefore propose several joint word embedding models ; all of these aim to put query vectors close to relevant ad vectors by explicitly utilizing both positive and negative click feedback . We evaluate all these models against a large sponsored search data set from a commercial search engine , and demonstrate that our proposed models significantly improve click prediction performance . The rest of this paper is organized as follows . In Section 2 we present a brief summary of related work . In Section 3 we give some background information on ad click prediction in sponsored search . In Section 4 we describe our methods . In Section 5 we discuss our experiments . We finish with some conclusions and future directions in Section 6 . In this paper we explored the use of word embedding techniques to overcome the shortcomings of traditional lexical features for ad click prediction in sponsored search . We identified several potential weaknesses of the plain application of conventional word embedding methodologies : the lack of the right machinery to harness both positive and negative click feedback , the limited utility of pure word co-occurrence signals , and no consideration of vector composition in the word embedding training process . We proposed a set of new implicit feedback-based joint word embedding methods to address those issues . We evaluated the new word embedding methods in the context of a very good baseline click prediction system , on a large scale data set collected from Yahoo ! search engine logs . Our experimental results clearly demonstrate the effectiveness of the proposed methods . We also presented several examples for qualitative analysis to advance our understanding on how each algorithm really contributes to the improved performance . To the best of our knowledge this work is the first successful application of word embedding techniques for the sponsored search task . There are multiple interesting research directions for future work . One of these directions is to extend the vocabulary by identifying significant phrases ( as well as words ) before training word vectors . Hillard et al . ( 2011 ) employed Conditional Random Fields to divide queries with multiple words into segments and collected historical CTR on the segment level . We also like to investigate more structured embedding methods such as RNNs ( probably for ad descriptions ) . In case the computational cost of such methods are too high to be practical for sponsored search , we can employ them only for a small fraction of ads filtered by faster methods . It may be possible to deal with the implicit negative feedback of unclicked ad impressions in a more principled way by adopting ranking-based loss functions . However , this is only possible with the extra cost of identifying and aggregating related ads into a single transaction . Though not directly related to NLP , yet another promising direction is to jointly embed not only text data but also a variety of user activities ( e.g. , organic search results , mobile app usages , other daily activities ) all together in the same vector space . Since many of the different sources contain their own unique information , we might be able to obtain a much better understanding about the user state and intent through this rich joint embedding space . Joint embedding with rich information can also help us to perform automatic clustering of users , eventually leading to powerful smoothing methods for personalized historical CTR statistics .
Conventional word embeddings with a simple integration of click feedback information and averaging to obtain sentence representations do not work well for ad click prediction.
They propose several joint word embedding methods to leverage positive and negative click feedback which put query vectors close to relevant ad vectors.
The use of features obtained from the new models improves on a large sponsored search data of commercial Yahoo! search engine.
D09-1072
We propose a new model for unsupervised POS tagging based on linguistic distinctions between open and closed-class items . Exploiting notions from current linguistic theory , the system uses far less information than previous systems , far simpler computational methods , and far sparser descriptions in learning contexts . By applying simple language acquisition techniques based on counting , the system is given the closed-class lexicon , acquires a large open-class lexicon and then acquires disambiguation rules for both . This system achieves a 20 % error reduction for POS tagging over state-of-the-art unsupervised systems tested under the same conditions , and achieves comparable accuracy when trained with much less prior information . All recent research on unsupervised tagging , as well as the majority of work on supervised taggers , views POS tagging as a sequential labeling problem and treats all POS tags , both closed-and open-class , as roughly equivalent . In this work we explore a different understanding of the tagging problem , viewing it as a process of first identifying functional syntactic contexts , which are flagged by closed-class items , and then using these functional contexts to determine the POS labels . This disambiguation model differs from most previous work in three ways : 1 ) it uses different encodings over two distinct domains ( roughly open-and closed-class words ) with complementary distribution ( and so decodes separately ) ; 2 ) it is deterministic and 3 ) it is non-lexicalized . By learning disambiguation models for open-and closed-classes separately , we found that the deterministic , rulebased model can be learned from unannotated data by a simple strategy of selecting a rule in each appropriate context with the highest count . In contrast to this , most previous work on unsupervised tagging ( especially for English ) concentrates on improving the parameter estimation techniques for training statistical disambiguation models from unannotated data . For example , ( Smith&Eisner , 2005 ) proposes contrastive estimation ( CE ) for log-linear models ( CRF ) , achieving the current state-of-the-art performance of 90.4 % ; ( Goldwater&Griffiths , 2007 ) applies a Bayesian approach to improve maximumlikelihood estimation ( MLE ) for training generative models ( HMM ) . In the main experiments of both of these papers , the disambiguation model is learned , but the algorithms assume a complete knowledge of the lexicon with all possible tags for each word . In this work , we propose making such a large lexicon unnecessary by learning the bulk of the lexicon along with learning a disambiguation model . Little previous work has been done on this natural and simple idea because the clusters found by previous induction schemes are not in line with the lexical categories that we care about . ( Chan , 2008 ) is perhaps the first with the intention of generating " a discrete set of clusters . " By applying similar techniques to ( Chan , 2008 ) , which we discuss later , we can generate clusters that closely approximate the central open-class lexical categories , a major advance , but we still require a closed-class lexicon specifying possible tags for these words . This asymmetry in our lexicon acquisition model conforms with our understanding of natural language as structured data over two distinct domains with complementary distribution : open-class ( lexical ) and closed-class ( functional ) . Provided with only a closed-class lexicon of 288 words , about 0.6 % of the full lexicon , the system acquires a large open-class lexicon and then acquires disambiguation rules for both closed-and open-class words , achieving a tagging accuracy of 90.6 % for a 24k dataset , as high as the current state-of-the-art ( 90.4 % ) achieved with a complete dictionary . In the test condition where both algorithms are provided with a full lexicon , and are trained and evaluated over the same 96k dataset , we reduce the tagging error by up to 20 % . In Section 2 we explain our understanding of the POS tagging problem in detail and define the notions of functional context and open-and closedclass elements . Then we will introduce our methods for acquiring the lexicon ( Section 3 ) and learning disambiguation models ( Section 4 , 5 and 6 ) step by step . Results are reported in Section 7 followed by Section 8 which discusses the linguistic motivation behind this work and the simplicity and efficiency of our model . In this work on unsupervised tagging , we combine lexicon acquisition with the learning of a POS disambiguation model . Moreover , the disambiguation model we used is deterministic , nonlexicalized and defined over two distinct domains with complementary distribution ( open-and closed-class ) . Building a lexicon based on induced clusters requires our morphological knowledge of three special endings in English : -ing , -ed and -s ; on the other hand , to reduce the feature space used for category induction , we utilize vectors of functional features only , exploiting our knowledge of the role of determiners and modal verbs . However , the above information is restricted to the lexicon acquisition model . Taking a lexicon as input , which either consists of a known closed-class lexicon together with an acquired open-class lexicon or is composed by automatic extraction from the Penn Treebank , we need NO language-specific knowledge for learning the disambiguation model . We would like to point the reader to ( Chan , 2008 ) for more discussion on Category induction14 ; and discussions below will concentrate on the proposed disambiguation model . Current Chomskian theory , developed in the Minimalist Program ( MP ) ( Chomsky , 2006 ) , argues ( very roughly speaking ) that the syntactic structure of a sentence is built around a scaffolding provided by a set of functional elements15 . Each of these provides a large tree fragment ( roughly corresponding to what Chomsky calls a phase ) that provide the piece parts for full utterances . Chomsky observes that when these fragments combine , only the very edge of the fragments can change and that the internal structure of these fragments is rigid ( he labels this observation the Phase Impenetrability Condition , PIC ) . With the belief in PIC , we propose the concept of functional context , in which category property can be determined ; also we notice the distinct distribution of the elements ( functional ) on the edge of phase and those ( lexical ) assembled within the phase . Instead of chasing the highest possible performance by using the strongest method possible , we wanted to explore how well a deterministic , nonlexicalized model , following certain linguistic intuitions , can approach the NLP problem . For the unsupervised tagging task , this simple model , with less than two hundred rules learned , even outperforms non-deterministic generative models with ten of thousands of parameters . Another motivation for our pursuit of this deterministic , non-lexicalized model is computational efficiency 16 . It takes less than 3 minutes total for our model to acquire the lexicon , learn the disambiguation model , tag raw data and evaluate the output for a 96k dataset on a small laptop17 . And a model using only counting and selecting is common in the research field of language acquisition and perhaps more compatible to the way humans process language . We are certainly aware that our work does not yet address two problems : 1 ) . How the system can be adapted to work for other languages and 2 ) How to automatically obtain the knowledge of functional elements . We believe that , given the proper understanding of functional elements , our system will be easily adapted to other languages , but we clearly need to test this hypothesis . Also , we are highly interested in completing our system by incorporating the acquisition of functional elements . ( Chan , 2008 ) presents an extensive discussion of his work on morphological induction and ( Mintz et al . , 2002 ) presents interesting psychological experiments we can build on to acquire closed-class words .
Current approaches tackle unsupervised POS tagging as a sequential labelling problem and require a complete knowledge of the lexicon.
They propose to first identify functional syntactic contexts and then use them to make predictions for POS tagging.
The proposed method achieves equivalent performance by using 0.6% of the lexical knowledge used in baseline models.
2021.naacl-main.458
Non-autoregressive Transformer is a promising text generation model . However , current non-autoregressive models still fall behind their autoregressive counterparts in translation quality . We attribute this accuracy gap to the lack of dependency modeling among decoder inputs . In this paper , we propose CNAT , which learns implicitly categorical codes as latent variables into the non-autoregressive decoding . The interaction among these categorical codes remedies the missing dependencies and improves the model capacity . Experiment results show that our model achieves comparable or better performance in machine translation tasks than several strong baselines . Non-autoregressive Transformer ( NAT , Gu et al . , 2018 ; Wang et al . , 2019 ; Lee et al . , 2018 ; Ghazvininejad et al . , 2019 ) is a promising text generation model for machine translation . It introduces the conditional independent assumption among the target language outputs and simultaneously generates the whole sentence , bringing in a remarkable efficiency improvement ( more than 10× speed-up ) versus the autoregressive model . However , the NAT models still lay behind the autoregressive models in terms of BLEU ( Papineni et al . , 2002 ) for machine translation . We attribute the low-quality of NAT models to the lack of dependencies modeling for the target outputs , making it harder to model the generation of the target side translation . A promising way is to model the dependencies of the target language by the latent variables . A line of research works ( Kaiser et al . , 2018 ; Roy et al . , 2018 ; Shu et al . , 2019 ; Ma et al . , 2019 ) introduce latent variable modeling to the non-autoregressive Transformer and improves translation quality . The latent variables could be regarded as the springboard to bridge the modeling gap , introducing more informative decoder inputs than the previ-ously copied inputs . More specifically , the latentvariable based model first predicts a latent variable sequence conditioned on the source representation , where each variable represents a chunk of words . The model then simultaneously could generate all the target tokens conditioning on the latent sequence and the source representation since the target dependencies have been modeled into the latent sequence . However , due to the modeling complexity of the chunks , the above approaches always rely on a large number ( more than 2 15 , Kaiser et al . , 2018 ; Roy et al . , 2018 ) of latent codes for discrete latent spaces , which may hurt the translation efficiencythe essential goal of non-autoregressive decoding . Akoury et al . ( 2019 ) introduce syntactic labels as a proxy to the learned discrete latent space and improve the NATs ' performance . The syntactic label greatly reduces the search space of latent codes , leading to a better performance in both quality and speed . However , it needs an external syntactic parser to produce the reference syntactic tree , which may only be effective in limited scenarios . Thus , it is still challenging to model the dependency between latent variables for non-autoregressive decoding efficiently . In this paper , we propose to learn a set of latent codes that can act like the syntactic label , which is learned without using the explicit syntactic trees . To learn these codes in an unsupervised way , we use each latent code to represent a fuzzy target category instead of a chunk as the previous research ( Akoury et al . , 2019 ) . More specifically , we first employ vector quantization ( Roy et al . , 2018 ) to discretize the target language to the latent space with a smaller number ( less than 128 ) of latent variables , which can serve as the fuzzy word-class information each target language word . We then model the latent variables with conditional random fields ( CRF , Lafferty et al . , 2001 ; Sun et al . , 2019 ) . To avoid the mismatch of the training and inference for latent variable modeling , we propose using a gated neural network to form the decoder inputs . Equipping it with scheduled sampling ( Bengio et al . , 2015 ) , the model works more robustly . Experiment results on WMT14 and IWSLT14 show that CNAT achieves the new state-of-theart performance without knowledge distillation . With the sequence-level knowledge distillation and reranking techniques , the CNAT is comparable to the current state-of-the-art iterative-based model while keeping a competitive decoding speedup . We propose CNAT , which implicitly models the categorical codes of the target language , narrowing the performance gap between the nonautoregressive decoding and autoregressive decoding . Specifically , CNAT builds upon the latent Transformer and models the target-side categorical information with vector quantization and conditional random fields ( CRF ) model . We further employ a gated neural network to form the decoder inputs . Equipped with the scheduled sampling , CNAT works more robust . As a result , the CNAT achieves a significant improvement and moves closer to the performance of the Transformer on machine translation . For each latent variable , we list the top 3 frequent pos tags and their corresponding percentages .
Non-autoregressive translation models fall behind their autoregressive counterparts in translation quality due to the lack of dependency modelling for the target outputs.
They propose a non-autoregressive transformer-based model which implicitly learns categorical codes as latent variables into the decoding to complement missing dependencies.
The proposed model achieves state-of-the-art without knowledge distillation and a competitive decoding speedup with an interactive-based model when coupled with knowledge distillation and reranking techniques.
2021.emnlp-main.765
The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction ( OpenRE ) . However , high-dimensional vectors can encode complex linguistic information which leads to the problem that the derived clusters can not explicitly align with the relational semantic classes . In this work , we propose a relationoriented clustering model and use it to identify the novel relations in the unlabeled data . Specifically , to enable the model to learn to cluster relational data , our method leverages the readily available labeled data of pre-defined relations to learn a relationoriented representation . We minimize distance between the instance with same relation by gathering the instances towards their corresponding relation centroids to form a cluster structure , so that the learned representation is cluster-friendly . To reduce the clustering bias on predefined classes , we optimize the model by minimizing a joint objective on both labeled and unlabeled data . Experimental results show that our method reduces the error rate by 29.2 % and 15.7 % , on two datasets respectively , compared with current SOTA methods . Relation extraction ( RE ) , a crucial basic task in the field of information extraction , is of the utmost practical interest to various fields including web search ( Xiong et al . , 2017 ) , knowledge base completion ( Bordes et al . , 2013 ) , and question answering ( Yu et al . , 2017 ) . However , conventional RE paradigms such as supervision and distant supervision are generally designed for pre-defined relations , which can not deal with new emerging relations in the real world . Under this background , open relation extraction ( OpenRE ) has been widely studied for its use Figure 1 : Although both instances S 2 and S 3 express founded relation while S 1 expresses CEO relation , the distance between S 1 and S 2 is still smaller than that between S 2 and S 3 . This is because there may be more similar surface information ( e.g. word overlapping ) or syntactic structure between S 1 and S 2 , thus the derived clusters can not explicitly align with relations . in extracting new emerging relational types from open-domain corpora . The approaches used to handle open relations roughly fall into one of two groups . The first group is open information extraction ( OpenIE ) ( Etzioni et al . , 2008 ; Yates et al . , 2007 ; Fader et al . , 2011 ) , which directly extracts related phrases as representations of different relational types . However , if not properly canonicalized , the extracted relational facts can be redundant and ambiguous . The second group is unsupervised relation discovery ( Yao et al . , 2011 ; Shinyama and Sekine , 2006 ; Simon et al . , 2019 ) . In this type of research , much attention has been focused on unsupervised clustering-based RE methods , which cluster and recognize relations from high-dimensional representations ( Elsahar et al . , 2017 ) . Recently , the self-supervised signals in pretrained language model are further exploited for clustering optimization ( Hu et al . , 2020 ) . However , many studies show that highdimensional embeddings can encode complex linguistic information such as morphological ( Peters et al . , 2018 ) , local syntactic ( Hewitt and Manning , 2019 ) , and longer range semantic information ( Jawahar et al . , 2019 ) . Consequently , the distance of representation is not completely consistent with relational semantic similarity . Although Hu et al . ( 2020 ) use self-supervised signals to optimize clustering , there is still no guarantee that the learned clusters will explicitly align with the desired relational semantic classes ( Xing et al . , 2002 ) . As shown in Figure 1 , we use the method proposed by Hu et al . ( 2020 ) to get the instance representations . Although both instances S 2 and S 3 express the founded relation , the euclidean distance between them is larger than that between S 1 and S 2 , which express different relation . Obviously , the clustering algorithm tends to group instances S 1 and S 2 together , rather than S 2 and S 3 which express the same relation . In this work , we propose a relation-oriented clustering method . To enable the model to learn to cluster relational data , pre-defined relations and their existing labeled instances are leveraged to optimize a non-linear mapping , which transforms high-dimensional entity pair representations into relation-oriented representations . Specifically , we minimize distance between the instances with same relation by gathering the instances representation towards their corresponding relation centroids to form the cluster structure , so that the learned representation is cluster-friendly . In order to reduce the clustering bias on the predefined classes , we iteratively train the entity pair representations by optimizing a joint objective function on the labeled and unlabeled subsets of the data , improving both the supervised classification of the labeled data , and the clustering of the unlabeled data . In addition , the proposed method can be easily extended to incremental learning by classifying the pre-defined and novel relations with a unified classifier , which is often desirable in real-world applications . Our experimental results show that our method outperforms current state-of-the-art methods for OpenRE . Our codes are publicly available at Github * . To summarize , the main contributions of our work are as follows : ( 1 ) we propose a novel relation-oriented clustering method RoCORE to enable model to learn to cluster relational data ; ( 2 ) the proposed method achieves the incremental learning of unlabeled novel relations , which is often desirable in real-world applications ; ( 3 ) experimental results show that our method reduces * https://github.com / Ac-Zyx / RoCORE . the error rate by 29.2 % and 15.7 % , on two realworld datasets respectively , compared with current state-of-the-art OpenRE methods . In this work , we introduce a relation-oriented clustering method that extends the current unsupervised clustering-based OpenRE method . The proposed method leverages the labeled data of pre-defined relations to learn a relation-oriented representation from which the derived clusters explicitly align with relational classes . Iterative joint training method effectively reduces the unwanted bias on labeled data . In addition , the proposed method can be easily extended to incremental learning of novel relations . Experimental results show that our method outperforms SOTA methods for OpenRE .
Even though high-dimensional vectors that can encode complex information used for relation extraction are not guaranteed to be consistent with relational semantic similarity.
They propose to use available relation labeled data to obtain relation-oriented representation by minimizing the distance between the same relation instances.
The proposed approach can reduce error rates significantly from the best models for open relation extraction.

ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications

This repository contains data for our paper "ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications" and a small utility class to work with it.

HuggingFace datasets

You can also use Huggin Face datasets to load ACLSum (dataset link). This would be convenient if you want to train transformer models using our dataset.

Just do,

from datasets import load_dataset
dataset = load_dataset("sobamchan/aclsum", "challenge", split="train")
Downloads last month
2
Edit dataset card