$$\frac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)}$$
Thus,
$$\begin{array}{rcl}
U_{\textrm{asym}}(u_0, o_t; C) - U_{\textrm{asym}}(u_1, o_t; C) & = & \log\dfrac{L(o_t \mid u_0, C)}{L(o_t \mid u_1, C)}\left[\displaystyle\sum_{o \in \mathcal{O}_p} p(o) + \displaystyle\sum_{o_h \in \mathcal{O} \setminus \mathcal{O}_p} p(o_h)\right]\\[3mm]
& = & \log L(o_t \mid u_0, C) - \log L(o_t \mid u_1, C)\\[2mm]
& = & U_{\textrm{basic}}(u_0, o_t; C) - U_{\textrm{basic}}(u_1, o_t; C)
\end{array}$$
Note that this proof also holds when an utterance-level cost term $\textrm{cost}(u)$, penalizing longer or more effortful utterances, is incorporated into the utilities:
$$\begin{array}{lcl}
U_{\textrm{asym}}(u, o; C_s) & = & \displaystyle\sum_{o_h \in \mathcal{O}} \log L_0(o \mid u, C_s \cup \{o_h\})\, P(o_h) - \textrm{cost}(u)\\[2mm]
U_{\textrm{basic}}(u, o; C) & = & \log L(o \mid u, C) - \textrm{cost}(u)
\end{array}$$
since the same constant appears on both sides of the inequality. In principle, the proof can also be extended to real-valued meanings $\mathcal{L}$, though additional assumptions must be made.

In addition to the qualitative predictions derived in the previous section, our speaker model makes direct quantitative predictions about the Exp. 1 data. Here we describe the details of a Bayesian Data Analysis evaluating this model on the empirical data and comparing it to an occlusion-blind model, which does not reason about possible hidden objects. Because there were no differences observed in production based on the particular levels of target features (e.g., whether the target was blue or red), we collapse across these details and only feed the model which features of each distractor differed from the target on each trial. After this simplification, there were only 4 possible contexts: far contexts, where the distractors differed in every dimension, and three varieties of close contexts, where the critical distractor differed in only shape, shape and color, or shape and texture. In addition, we included in the model information about whether each trial had cells occluded or not. The space of utterances used in our speaker model is derived from our feature annotations: for each trial, the speaker model selected among 7 utterances referring to each combination of features (only mentioning the target's shape, only mentioning the target's color, mentioning the shape and the color, and so on). For the set of alternative objects $\mathcal{O}$, we used the full 64-object stimulus space used in our experiment design, and we placed a uniform prior over these objects, such that the occlusion-sensitive speaker assumed they were equally likely to be hidden. Our model has four free parameters, which we infer from the data using Bayesian inference. The speaker optimality parameter $\alpha$ is a softmax temperature, such that at $\alpha = 1$ the speaker produces utterances directly proportional to their utility, and as $\alpha \rightarrow \infty$ the speaker maximizes. In addition, to account for the differential production of the three features (see Fig. 2B), we assume separate production costs for each feature: a texture cost $c_t$, a color cost $c_c$, and a shape cost $c_s$. We use uninformative uniform priors for all parameters:
$$\begin{array}{rcl}
\alpha & \sim & \textrm{Unif}(0, 50)\\
c_t, c_c, c_s & \sim & \textrm{Unif}(0, 10)
\end{array}$$
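To make the choice rule concrete, the following is a minimal Python sketch of a softmax speaker with per-feature production costs of the kind described above; the utterance inventory, informativity values, and parameter settings are illustrative placeholders, not the WebPPL implementation used for the reported analyses.

```python
import numpy as np

# Hypothetical utterance inventory: each utterance mentions some subset of
# {shape, color, texture}; c_s, c_c, c_t are the per-feature production costs.
utterances = ["shape", "color+shape", "texture+shape", "color+texture+shape"]

def utterance_cost(u, c_s, c_c, c_t):
    feats = u.split("+")
    return c_s * ("shape" in feats) + c_c * ("color" in feats) + c_t * ("texture" in feats)

def speaker_probs(informativity, alpha, c_s, c_c, c_t):
    """Softmax choice rule: P(u) proportional to exp(alpha * (informativity(u) - cost(u)))."""
    utilities = np.array([informativity[u] - utterance_cost(u, c_s, c_c, c_t)
                          for u in utterances])
    scores = np.exp(alpha * (utilities - utilities.max()))  # subtract max for numerical stability
    return scores / scores.sum()

# Illustrative informativity values (log listener probabilities), not real model output.
informativity = {"shape": -1.1, "color+shape": -0.3,
                 "texture+shape": -0.4, "color+texture+shape": -0.1}
print(speaker_probs(informativity, alpha=5.0, c_s=0.1, c_c=0.2, c_t=0.6))
```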
We compute speaker predictions for a particular parameter setting using nested enumeration and infer the posterior over parameters using MCMC. We discard 5,000 burn-in samples and then take 5,000 samples from the posterior with a lag of 2. Our posterior predictives are computed from these posteriors by taking the expected number of features produced by the speaker, marginalizing over parameters and possible non-critical distractors in context; this captures the statistics of our experimental contexts, where there was always a distractor sharing the same color or texture but a different shape as the target. Finally, to precisely compute the Bayes Factor, we enumerated over a discrete grid of parameter values in the prior. We implemented our models and conducted inference in the probabilistic programming language WebPPL (Goodman & Stuhlmüller, 2014). All code necessary to reproduce our model results is available at the project GitHub: https://github.com/hawkrobe/pragmaticsofperspectivetaking

The statistical dependency structure of our ratings was more complex than standard mixed-effects model packages are designed to handle, and the summary statistic we needed for our test was a simple difference score across conditions, so we instead implemented a simple multi-stage non-parametric bootstrap scheme to appropriately account for different sources of variance. In particular, we needed to control for effects of judge, item, and speaker. First, to control for the repeated measurements of each judge rating the informativity of all labels, we resampled our set of sixteen judge ids with replacement. For each label, we then computed informativity as the difference between the target and distractor fits within every judge's ratings and took the mean across our bootstrapped sample of judges. Next, we controlled for item effects by resampling our eight item ids with replacement. Finally, we resampled speakers from pairs within each condition (scripted vs. unscripted) and looked up the mean informativity of each utterance they produced for each of the resampled set of items. Now we can take the mean within each condition and compute the difference across conditions, which is our desired test statistic. We repeated this multi-stage resampling procedure 1,000 times to get the bootstrapped distribution of our test statistic that we reported in the main text. Individual error bars in Fig. 4 are derived from the same procedure, but without taking difference scores.
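A schematic Python version of one replicate of this multi-stage resampling might look as follows; the data-frame layout and column names (judge, label, target_fit, distractor_fit, speaker, condition, item) are hypothetical stand-ins for the actual data structures, and the function returns a single bootstrap draw of the scripted-minus-unscripted difference score.

```python
import numpy as np
import pandas as pd

def bootstrap_stat(ratings, productions, rng):
    """One multi-stage resample: judges -> items -> speakers.

    ratings:     columns [judge, label, target_fit, distractor_fit]
    productions: columns [speaker, condition, item, label]
    """
    # 1. Resample judges with replacement; informativity of each label is the mean
    #    (target fit - distractor fit) within the resampled judges.
    judges = rng.choice(ratings.judge.unique(), size=ratings.judge.nunique(), replace=True)
    sampled = pd.concat([ratings[ratings.judge == j] for j in judges])
    sampled = sampled.assign(informativity=sampled.target_fit - sampled.distractor_fit)
    label_info = sampled.groupby("label").informativity.mean()

    # 2. Resample items with replacement.
    items = rng.choice(productions.item.unique(), size=productions.item.nunique(), replace=True)

    # 3. Resample speakers within each condition; for each resampled speaker and item,
    #    look up the mean informativity of the labels they produced.
    cond_means = {}
    for cond, grp in productions.groupby("condition"):
        speakers = rng.choice(grp.speaker.unique(), size=grp.speaker.nunique(), replace=True)
        vals = [label_info.reindex(grp[(grp.speaker == s) & (grp.item == i)].label.values).mean()
                for s in speakers for i in items]
        cond_means[cond] = np.nanmean(vals)

    return cond_means["scripted"] - cond_means["unscripted"]

rng = np.random.default_rng(0)
# stats = [bootstrap_stat(ratings, productions, rng) for _ in range(1000)]
```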
Domain Adaptation via Teacher-Student Learning for End-to-End Speech Recognition

Recently, with the advancement of deep learning, great progress has been made in end-to-end (E2E) automatic speech recognition (ASR). With the goal of directly mapping a sequence of speech frames to a sequence of output tokens, an E2E ASR system incorporates the acoustic model, language model, and pronunciation model of a conventional ASR system into a single deep neural network (DNN). The most dominant approaches for E2E ASR include connectionist temporal classification (CTC) BIBREF0, BIBREF1, the recurrent neural network transducer (RNN-T) BIBREF2, and attention-based encoder-decoder (AED) models BIBREF3, BIBREF4, BIBREF5. However, the performance of E2E ASR degrades significantly when an acoustic mismatch exists between training and test conditions. An intuitive solution is domain adaptation, where a well-trained source-domain E2E model is adapted to the data in the target domain. Different from speaker adaptation, domain adaptation allows for the usage of a large amount of adaptation data in both source and target domains. There have been plenty of domain adaptation methods for hybrid systems that we can leverage for adapting E2E systems. One popular approach is adversarial learning, in which an intermediate deep feature BIBREF6, BIBREF7, BIBREF8 or a front-end speech feature BIBREF9, BIBREF10 is learned to be invariant to the shifts between source and target domains. Adversarial domain adaptation is suitable for the situation where no transcription or parallel adaptation data in both domains are available. It can also effectively suppress the environment BIBREF11, BIBREF12, BIBREF13 and speaker BIBREF14, BIBREF15 variability during domain adaptation. However, in the speech area, a parallel sequence of target-domain data can be easily simulated from the source-domain data such that the speech from both domains is frame-by-frame synchronized. To take advantage of this, teacher-student (TS) learning BIBREF16 was proposed for the unsupervised domain adaptation of acoustic models in DNN-hidden Markov model (HMM) hybrid systems BIBREF17. In TS learning, the Kullback-Leibler (KL) divergence between the output senone distributions of the teacher and student acoustic models, given parallel source- and target-domain data at the input, is minimized by updating only the student model parameters. TS training was shown to outperform cross-entropy training that directly uses the hard label in the target domain BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. One drawback of unsupervised TS learning is that the teacher model is not perfect and will sometimes make inaccurate predictions that mislead the student model toward suboptimal directions. To overcome this, one-hot ground-truth labels are used to compensate for the teacher's imperfections. Hinton et al. proposed interpolated TS (ITS) learning BIBREF22 to interpolate the teacher's soft class posteriors with the one-hot ground truth using a pair of globally fixed weights. However, the optimal weights are data-dependent and can only be determined through careful tuning on a dev set. More recently, conditional TS (CTS) learning was proposed in BIBREF20, where the student model selectively chooses to learn from either the teacher or the ground truth depending on whether the teacher's prediction is correct or not. CTS does not disturb the statistical relationships among classes naturally embedded in the class posteriors and achieves significant word error rate (WER) improvement over TS for domain adaptation on the CHiME-3 dataset BIBREF23. In this work, we focus on the domain
adaptation of AED models for E2E ASR by using TS learning which was previously applied to learn smallfootprint AED models in BIBREF24 BIBREF25 BIBREF26 by distilling knowledge from a large powerful teacher AED For unsupervised domain adaptation we extend TS learning to AED models by introducing a twolevel knowledge transfer in addition to learning from the teachers soft token posteriors the student AED also conditions its decoder on the onebest token sequence decoded by the teacher AED We further propose an adaptive TS ATS learning method to improve TS learning using groundtruth labels By taking advantage of both ITS and CTS ATS adaptively assigns a pair of weights to the teachers soft token posteriors and the onehot groundtruth label at each decoder step depending on the confidence scores on each of the labels The confidence scores are dynamically estimated as a function of soft and onehot labels The student AED learns from an adaptive linear combination of both labels ATS inherits the linear interpolation of soft and onehot labels from ITS and borrows from CTS the judgement on the credibility of both knowledge sources before merging them It is expected to achieve improved performance over the other TS methods for domain adaptation As a general deep learning method ATS can be widely applied to the domain adaptation or model compression of any DNN With 3400 hours closetalk and farfield Microsoft Cortana data for domain adaptation TS learning achieves up to 249 and 63 relative WER gains over closetalk and farfield baseline AEDs respectively ATS improves the closetalk and farfield AEDs by 282 and 103 respectively consistently outperforming ITS and CTS In this work we perform domain adaptation on AED models BIBREF3 BIBREF4 BIBREF5 AED model was first introduced in BIBREF27 BIBREF28 for neural machine translation Without any conditional independence assumption as in CTC BIBREF0 AED was successfully applied to to E2E ASR in BIBREF3 BIBREF4 BIBREF5 and has recently achieved superior performance to conventional hybrid systems in BIBREF29 AED directly models the conditional probability distribution Pmathbf Y mathbf X over sequences of output tokens mathbf Ylbrace y1 ldots yLrbrace given a sequence of input speech frames mathbf Xlbrace mathbf x1 ldots mathbf xNrbrace as below To achieve this the AED model incorporates an encoder a decoder and an attention network The encoder maps a sequence of input speech frames mathbf X into a sequence of highlevel features mathbf H lbrace mathbf h1 ldots mathbf hNrbrace through an RNN An attention network is used to determine which encoded features in mathbf H should be attended to predict the output label yl and to generate a context vector mathbf zl as a linear combination of mathbf H BIBREF3 A decoder is used to model Pmathbf Ymathbf H which is equivalent to Pmathbf Ymathbf X At each time step t the decoder RNN takes the sum of the previous token embedding mathbf el1 and the context vector mathbf zl1 as the input to predict the conditional probability of each token ie Pu mathbf Y0l1 mathbf H u in mathbb U at the decoder step l where mathbb U is the set of all the output tokens In Eq DISPLAYFORM2 and Eq we sum together the mathbf zl and mathbf ql or mathbf et instead of concatenation because by summation we get a lowerdimensional combined vector than concatenation saving the number of parameters by half for the subsequent projection operation In our experiments concatenation does not improve the performance even with more parameters where mathbf ql is the 
hidden state of the decoder RNN bias mathbf by and the matrix Ky are learnable parameters An AED model is trained to minimize the following crossentropy CE loss on the training corpus mathbb Tr where mathbf YG lbrace yG1 ldots yGLGrbrace is the sequence of grouthtruth tokens LG represents the number of elements in mathbf YG and theta denotes all the model parameters in AED For unsupervised domain adaptation we want to make use of a large amount of unlabeled data that is widely available As shown in Fig FIGREF4 with TS learning only two sequences of parallel data are required an input sequence of sourcedomain speech frames to the teacher AED mathbf XTlbrace mathbf xT1 ldots mathbf xTNrbrace and an input sequence of targetdomain speech frames to the student model mathbf XSlbrace mathbf xS1 ldots mathbf xSNrbrace mathbf XT and mathbf XS are parallel to each other ie each pair of mathbf xSn and mathbf xTn forall n in lbrace 1 ldots Nrbrace are framebyframe synchronized For most domain adaptation tasks in ASR such as adapting from clean to noisy speech closetalk to farfield speech wideband to narrowband speech the parallel data in the target domain can be easily simulated from the data in the source domain BIBREF17 BIBREF19 Our goal is to train a student AED that can accurately predict the tokens of the targetdomain data by forcing the student to emulate the behaviors of the teacher To achieve this we minimize the KullbackLeibler KL divergence between the tokenlevel output distributions of the teacher and the student AEDs given the parrallel data mathbf XT and mathbf XS are fed as the input to the AEDs The KL divergence between the tokenlevel output distributions of the teacher and student AEDs are formulated below where mathbf YTlbrace yT1 ldots yTLTrbrace is the sequence of onebest token sequence decoded by the teacher AED as follows where LT is the number of tokens in mathbf YT and mathbf theta T mathbf theta S denote all the parameters in the teacher and student AED models respectively Note that for unsupervised domain adaptation the teacher AED can only condition its decoder on the token yTl1 predicted at the previous step since the groundtruth labels mathbf YG are not available We minimize the KL divergence with respect to theta S while keeping theta T fixed on the adaptation data corpus mathbb A which is equivalent to minimizing the tokenlevel TS loss function below The steps of tokenlevel TS learning for unsupervised domain adaptation of AED model are summarized as follows Clone the student AED from a teacher AED welltrained with transcribed sourcedomain data by minimizing Eq DISPLAYFORM3 Forwardpropagate the sourcedomain data mathbf XT through the teacher AED generate teachers onebest token sequence mathbf YT using Eq DISPLAYFORM6 and teachers soft posteriors for each decoder step Pumathbf YT0l1mathbf XT mathbf theta T u in mathbb U by Eqs DISPLAYFORM2 and Forwardpropagate the targetdomain data mathbf XS parallel to mathbf XT through the student AED generate students soft posteriors for each teachers decoder step Pumathbf YT0l1 mathbf XS mathbf theta S u in mathbb U by Eqs DISPLAYFORM2 and Compute error signal of the TS loss function in Eq DISPLAYFORM7 backpropagate the error through student AED and update the parameters of the student AED Repeat Steps UNKREF9 to UNKREF11 until convergence After TS learning only the adapted student AED is used for testing and the teacher AED is discarded From Eqs DISPLAYFORM6 and DISPLAYFORM7 to extend TS learning to AEDbased E2E models two levels of 
knowledge transfer are involved 1 the student learns from the teachers soft token posteriors Pumathbf YT0l1 mathbf XT mathbf theta T at each decoder step 2 the student AED conditions its decoder on the previous token yTl1 predicted by the teacher to make the current prediction Sequencelevel TS learning BIBREF24 BIBREF30 is another method for unsupervised domain adaptation in which a KL divergence between the sequencelevel output distributions of the teacher and student AEDs are minimized Equivalently we minimize the sequencelevel TS loss function below with respect to theta S where mathbb V is the set of all possible token sequences and the teachers sequencelevel output distribution Pmathbf V mathbf XT is approximated by mathbb 1mathbf Vmathbf YT for easy implementation mathbb 1cdot is an indicator function which equals to 1 if the condition in the squared bracket is satisfied and 0 otherwise From Eq DISPLAYFORM13 we see that only one level of knowledge transfer exists in sequencelevel TS ie the onebest token sequence mathbf YT decoded by the teacher AED The student AED learns from mathbf YT and conditions its decoder on it at each step Different from tokenlevel TS in sequencelevel TS onehot labels in mathbf YT are used as training targets of the student AED instead of the soft token posteriors In this section we want to make good use of the groundtruth labels of the adaptation data to further improve the TS domain adaptation Note that different from unsupervised TS in Section SECREF3 in supervised domain adaptation the teacher AED conditions its decoder on the groundtruth token instead of its previous decoding result because the token transcription mathbf YG is available in addition to mathbf XS and mathbf XT One shortcoming of unsupervised TS learning is that the teacher model can sporadically predict inaccurate token posteriors which misleads the student AED towards suboptimal performance Onehot groundtruth labels can be utilized to alleviate this issue One possible solution is the interpolated TS ITS learning BIBREF22 in which a weighted sum of teachers soft posteriors and the onehot ground truth is used as the target to train the student AED A pair of global weights summed to be one is applied to each pair of soft and onehot labels However the optimal global weights are hard to determine because they are datadependent and need to be carefully tuned on a dev set To address this issue conditional TS learning CTS BIBREF20 was proposed recently in which the student selectively chooses to learn from either the teacher AED or the ground truth conditioned on whether the teacher AED can correctly predict the groundtruth labels CTS have shown significant WER improvements over TS and ITS for both domain and speaker adaptation on CHiME3 dataset However in CTS the student is still not smart enough because for each token the student AED solely relies on either the teachers posteriors or the ground truth instead of dynamically extracting useful knowledge from both To further improve the effectiveness of knowledge transfer we propose an adaptive teacherstudent ATS learning method by taking advantage of both CTS and ITS As shown in Fig FIGREF14 instead of assigning a fixed pair of soft weight w and onehot weight 1w for all the decoder steps we adaptively weight the teachers soft posteriors at the ltextth decoder step Pumathbf YG0l1mathbf XTmathbf theta T uin mathbb U by wl in 01 and the onehot vector of the ltextth token in the groundtruth sequence mathbf YG by 1wl In order to quantify the value of 
the knowledge to be transferred wl should be positively correlated with a confidence score cl on the teachers prediction on token posteriors while 1wl should be positively correlated with a confidence score on the ground truth dl To achieve this we compute wl by normalizing cl against its summation with dl It is in general true that the higher posterior PylGmathbf YG0l1 mathbf XTtheta T a teacher assigns to the correct groundtruth token ylG the more accurate the teachers soft posteriors are at this decoder step Therefore the confidence score cl on teachers soft posteriors Pumathbf YG0l1 mathbf XT theta T uin mathbb U can be any monotonically increasing function of the correct token posterior predicted by the teacher PyGlmathbf YG0l1 mathbf XT theta T while the confidence score dl on the onehot ground truth can be any monotonically increasing function of 1PyGlmathbf YG0l1 mathbf XT theta T as follow where both f1 and f2 are any monotonically increasing functions on the interval 0 1 In this work we simply assume that f1 and f2 are both power functions of the same form ie f1x f2x xlambda lambda 0 Note that wl equals to PyGlmathbf YG0l1 mathbf XT theta T when lambda 1 In ATS a linear combination of the teachers soft posteriors and the onehot ground truth weighted by wl and 1 wl respectively is used as the training target for the student AED at each decoder step The ATS loss function is formulated as The steps of ATS learning for supervised domain adaptation of AED model are summarized as follows Perform tokenlevel unsupervised TS adaptation by following the steps in Section SECREF3 as the initialization Forwardpropagate the parallel source and target domain data mathbf XT and mathbf XS through the teacher and student AEDs generate teacher and students soft posteriors Pumathbf YG0l1mathbf XT mathbf theta T and Pumathbf YG0l1 mathbf XS mathbf theta S u in mathbb U for each decoder step by Eqs DISPLAYFORM2 and Compute the confidence scores cl and dl for teachers soft posteriors and onehot vector of ground truth yGl by Eqs DISPLAYFORM16 and compute the adaptive weight wl by Eq DISPLAYFORM15 Compute error signal of the ATS loss function in Eq DISPLAYFORM17 backpropagate the error through student AED and update the parameters of the student AED Repeat Steps UNKREF9 to UNKREF11 until convergence ATS is superior to ITS in that the combination weights for soft and onehot labels at each decoder step are adaptively assigned according to the confidence score on both labels ATS will degenerate to ITS if the combination weights wl are fixed globally Compared to CTS in ATS the student always adaptively learns from both the teachers soft posteriors and the onehot ground truth rather than choosing either of them depending on the correctness of teachers prediction We adapt a closetalk AED model to the farfield data through various TS learning methods with parallel closetalk and farfield Microsoft Cortana data for E2E ASR For both training and adaptation closetalk data consisting of 3400 hours of Microsoft live US English Cortana utterances are collected through a number of deployed speech services including voice search and SMD We simulate 3400 hours of farfield Microsoft Cortana data by convolving the closetalk signal with different room impulse responses and adding various environmental noise for both training and adaptation The 3400 hours farfield data is parallel with the 3400 hours closetalk data We collect 175k farfield utterances about 19 hours from Harman Kardon HK speaker as the test set 80dimensional 
log Mel filter bank features are extracted from the training adaptation and test speech every 10 ms over a 25 ms window We stack 3 consecutive frames and stride the stacked frame by 30 ms to form a sequence of 240dimensional input speech frames We first generate 34k mixedunits consisting of words and multiletter units as in BIBREF31 based on the training transcription and then tokenize the training adaptation transcriptions correspondingly We insert a special token space between every two adjacent words to indicate the word boundary and add sos eos to the beginning and end of each utterance respectively We first train an AED model predicting 34k mixed units with 3400 hours closetalk training data and it groundtruth labels for E2E ASR as in BIBREF32 BIBREF33 BIBREF34 The encoder is a bidirectional gated recurrent units GRUrecurrent neural network RNN BIBREF27 BIBREF35 with 6 hidden layers each with 512 hidden units We use GRU instead of long shortterm memory LSTM BIBREF36 BIBREF37 for RNN because it has less parameters and is trained faster than LSTM with no loss of performance Layer normalization BIBREF38 is applied for each encoder hidden layer Each mixed unit is represented as a 512dimensional embedding vector The decoder is a unidirectional GRURNN with 2 hidden layers each with 512 hidden units The 34kdimensional output layer of the decoder predicts the posteriors of all the mixed units in the vocabulary During training scheduled sampling BIBREF39 is applied to the decoder with a sampling probability starting at 00 and gradually increasing to 04 BIBREF29 Dropout BIBREF40 with a probability of 01 is used in both encoder and decoder A labelsmoothed crossentropy BIBREF41 loss is minimized during training Greedy decoding is performed to generate the ASR transcription We use PyTorch BIBREF42 toolkit for the experiments Table TABREF25 shows that the closetalk AED model achieves 758 and 1739 WERs on a closetalk Cortana test set used in BIBREF33 and the farfield HK speaker test set respectively Using the welltrained closetalk AED as the initialization we then train a farfield AED with 3400 hours farfield data and its groundtruth labels by following the same procedure When evaluated on the HK speaker test set the baseline farfield AED achieves 1393 WER for ASR as in Table TABREF25 We adapt the closetalk baseline AED to the 3400 hours farfield data using token and sequence level TS learning as discussed in Section SECREF3 To achieve this we feed the 3400 hours closetalk adaptation data as the input to the teacher AED and the 3400 hours parallel farfield adaptation data as the input to the student AED The student AED conditions its decoder on onebest token sequences generated by the teacher AED through greedy decoding In tokenlevel TS the soft posteriors generated by the teacher serve as the training targets of the student while in sequencelevel TS the onebest sequences decoded by the teacher are used the targets As shown in Table TABREF25 the tokenlevel TS achieves 1306 WER on HK speaker test set which is 249 and 625 relative improvements over the closetalk and farfield AED models respectively The sequencelevel TS achieves 1400 WER which is 195 relative improvement over the closetalk AED model The sequencelevel TS performs slightly worse than the farfield AED trained with groundtruth labels because the onebest decoding from the teacher AED is not always reliable to serve as the training targets for the student model The sequencelevel TS can be improved by using multiple decoded hypotheses 
generated by the teacher AED as the training targets as in BIBREF25 BIBREF26 We did not perform Nbest decoding because it will drastically increase the computational cost and will consumes much more adaptation time than the other TS methods The 67 relative WER gain obtained by tokenlevel TS over sequencelevel TS shows the benefit of using soft posteriors generated by the teacher AED as the training target at each decoder step when a reliable groundtruth transcription is not available The 63 relative WER gain of token TS over farfield AED baseline shows that the unsupervised TS learning with no groundtruth labels can significantly outperform the supervised domain adaptation with such information available Compared to the onehot labels the soft posteriors accurately models the inherent statistical relationships among different token classes in addition to the token identity encoded by a onehot vector It proves to be a more powerful target for the student to learn from which is consistent with what was observed in BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 As discussed in Section SECREF4 we want to further improve the TS learning by using onehot groundtruth labels when they are available As in BIBREF22 we perform ITS learning for supervised domain adaptation by using the linear interpolation of soft posterior and onehot ground truth as the training target of the student The interpolation weights are globally fixed at 05 and 05 for all decoder steps By following BIBREF20 we also conduct CTS for supervised domain adaptation where soft posteriors are used as the training target of the student if the teachers prediction is correct at the current decoder step otherwise the onehot ground truth is used as the target Finally ATS domain adaptation is performed by adaptively adjusting the weights assigned to the soft and onehot labels at each decoder step as in Eqs DISPLAYFORM15 to We explore using different power functions as f1x and f2x to compute the confidence scores by adjusting lambda For all the above supervised TS learning methods the 3400 hours closetalk and 3400 hours farfield parallel adaptation data is fed as the input to the teacher and student AEDs respectively As shown in Table TABREF25 ITS with w 02 achieves 1395 WER on HK speaker test set which is 255 70 and 08 relative improvements over the closetalk farfield and tokenlevel TS adapted AED models respectively With a 1282 WER CTS relatively improves the closetalk farfield and tokenlevel TS adapted AED models by 263 80 and 18 respectively Among different lambda s for ATS the best WER is 1249 which is 282 103 and 44 relative gains over closetalk farfield and tokenlevel TS adapted AEDs The minimum WER is reached when lambda 025 and cl Pylmathbf YG0l1 mathbf XT theta T025 Compared to lambda 1 ATS works better for lambda in 0 1 when confidence scores cl dl are both concave functions of the correct token posterior and the sum of incorrect token posteriors respectively All the ITS CTS and ATS outperform the unsupervised TS learning indicating that the onehot ground truth can further improve TS domain adaptation when it is properly used ATS achieves the largest gain in supervised domain adaptation methods showing the superiority of adaptively extracting useful knowledge from both the soft and onehot labels depending on their confidence scores In this paper we extend TS learning to unsupervised domain adaptation of AED models for E2E ASR TS learning requires only unlabeled parallel source and target domain data as the input to the teacher and 
student AEDs, respectively. In TS, the student AED conditions its decoder on the one-best token sequences generated by the teacher. The teacher's soft posteriors and decoded one-hot tokens are used as the training targets of the student AED for token-level and sequence-level TS learning, respectively. For supervised domain adaptation, we propose adaptive TS (ATS) learning, in which the student always learns from a linear combination of the teacher's soft posteriors and the one-hot ground truth. The combination weights are adaptively computed at each decoder step based on the confidence scores of both knowledge sources. Domain adaptation is conducted on 3,400 hours of close-talk and 3,400 hours of far-field Microsoft Cortana data. Token-level TS achieves a 6.3% relative WER improvement over the baseline far-field AED model trained with the CE criterion. By making use of the ground-truth labels, ATS further improves token-level TS by 4.4% relative and achieves a total 10.3% relative gain over the far-field AED. ATS also consistently outperforms ITS and CTS, showing the advantage of learning from both the teacher and the ground truth, as well as of the adaptive adjustment of the combination weights.
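To make the adaptive weighting concrete, a PyTorch-style sketch of the ATS target construction could look as follows; the tensor names and shapes, and the soft-target cross-entropy form of the loss, are assumptions for illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ats_targets(teacher_probs, ground_truth, lam=0.25):
    """Adaptive teacher-student targets per decoder step.

    teacher_probs: (L, V) teacher token posteriors at each decoder step.
    ground_truth:  (L,)   ground-truth token ids.
    """
    one_hot = F.one_hot(ground_truth, num_classes=teacher_probs.size(-1)).float()
    # Posterior the teacher assigns to the correct token at each step.
    p_correct = teacher_probs.gather(1, ground_truth.unsqueeze(1)).squeeze(1)
    c = p_correct.pow(lam)            # confidence in the teacher's soft label
    d = (1.0 - p_correct).pow(lam)    # confidence in the one-hot ground truth
    w = (c / (c + d)).unsqueeze(1)    # adaptive interpolation weight w_l
    return w * teacher_probs + (1.0 - w) * one_hot

def ats_loss(student_log_probs, teacher_probs, ground_truth, lam=0.25):
    # Cross-entropy of the student against the adaptively combined soft target.
    targets = ats_targets(teacher_probs, ground_truth, lam)
    return -(targets * student_log_probs).sum(dim=-1).mean()
```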
HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering

Open-domain question answering (OpenQA) aims to seek answers for a broad range of questions from large knowledge sources, e.g., structured knowledge bases BIBREF0, BIBREF1 and unstructured documents retrieved by a search engine BIBREF2. In this paper, we focus on the OpenQA task with unstructured knowledge sources retrieved by a search engine. Inspired by the reading comprehension (RC) task flourishing in the area of natural language processing BIBREF3, BIBREF4, BIBREF5, some recent works have viewed OpenQA as an RC task and directly applied existing RC models to it BIBREF6, BIBREF7, BIBREF3, BIBREF8. However, these RC models do not fit the OpenQA task well. Firstly, they directly omit the paragraphs without an answer string. The RC task assumes that the given paragraph contains the answer string (Figure 1, top); however, this is not valid for the OpenQA task (Figure 1, bottom). That's because the paragraphs providing answers for an OpenQA question are collected from a search engine, where each retrieved paragraph is merely relevant to the question. Therefore, the collection contains many paragraphs without the answer string, for instance, Paragraph 2 in Figure 1. When applying RC models to the OpenQA task, we have to omit these paragraphs in the training phase. However, during the inference phase, when the model meets a paragraph without the answer string, it will pick out a text span as an answer span with high confidence, since the RC model has no evidence to judge whether a paragraph contains the answer string. Secondly, they only consider the first answer span in the paragraph but omit the remaining rich multiple answer spans. In the RC task, the answer and its positions in the paragraph are provided by the annotator in the training data. Therefore, RC models only need to consider the unique answer span, e.g., in SQuAD BIBREF9. However, the OpenQA task only provides the answer string as the ground truth. Therefore, multiple answer spans are detected in the given paragraph, which cannot be handled by traditional RC models. Take Figure 1 as an example: all text spans containing "fat" are treated as answer spans, so we detect two answer spans in Paragraph 1. Thirdly, they assume that the start position and end position of an answer span are independent. However, the end position is evidently related to the start position, especially when there are multiple answer spans in a paragraph. Therefore, such an independence assumption may introduce problems. For example, the detected end position may correspond to another answer span rather than the answer span located by the start position. In Figure 1, Paragraph 1, "fat in their … insulating effect fat" has a high confidence to be an answer span under the independence assumption. In this paper, we propose a Hierarchical Answer Span Model, named HAS-QA, based on a new three-level probabilistic formulation of the OpenQA task, as shown in Figure 2. At the question level, the conditional probability of the answer string given a question and a collection of paragraphs, named the answer probability, is defined as the product of the paragraph probability and the conditional answer probability, based on the law of total probability. At the paragraph level, the paragraph probability is defined as the degree to which a paragraph can answer the question. This probability is used to measure the quality of a paragraph and is targeted at tackling the first problem mentioned above, i.e., identifying the useless paragraphs. For its calculation, we first apply a bidirectional GRU and an attention mechanism on the question
aware context embedding to obtain a score Then we normalize the scores across the multiple paragraphs In the training phase we adopt a negative sampling strategy for optimization Conditional answer probability is the conditional probability that a text string is the answer given the paragraph Considering multiple answer spans in a paragraph the conditional answer probability can be further represented as the aggregation of several span probability defined later In this paper four types of functions ie HEAD RAND MAX and SUM are used for aggregation At the span level span probability represents the probability that a text span in a paragraph is the answer span Similarly to previous work BIBREF3 span probability can be computed as the product of two location probability ie location start probability and location end probability Then a conditional pointer network is proposed to model the probabilistic dependences between the start and end positions by making generation of end position depended on the start position directly rather than internal representation of start position BIBREF10 The contributions of this paper include 1 a probabilistic formulation of the OpenQA task based on the a threelevel hierarchical structure ie the question level the paragraph level and the answer span level 2 the proposal of an endtoend HASQA model to implement the threelevel probabilistic formulation of OpenQA task Section HASQA Model which tackles the three problems of direct applying existing RC models to OpenQA 3 extensive experiments on QuasarT TriviaQA and SearchQA datasets which show that HASQA outperforms traditional RC baselines and recent OpenQA baselines Research in reading comprehension grows rapidly and many successful RC models have been proposed BIBREF11 BIBREF4 BIBREF3 in this area Recently some works have treated OpenQA task as an RC task and directly applied existing RC models In this section we first review the approach of typical RC models then introduce some recent OpenQA models which are directly based on the RC approach RC models typically have two components context encoder and answer decoder Context encoder is used to obtain the embeddings of questions paragraphs and their interactions Most of recent works are based on the attention mechanism and its extensions The efficient way is to treat the question as a key to attention paragraph BIBREF3 BIBREF6 Adding the attention from paragraph to question BIBREF4 BIBREF5 enriches the representations of context encoder Some works BIBREF12 BIBREF13 BIBREF8 find that selfattention is useful for RC task Answer decoder aims to generate answer string based on the context embeddings There exist two sorts of approaches generate answer based on the entail word vocabulary BIBREF14 and retrieve answer from the current paragraph Almost all works in RC task choose the retrievalbased method Some of them use two independently position classifiers BIBREF6 BIBREF15 the others use the pointer networks BIBREF3 BIBREF4 BIBREF12 BIBREF13 An answer length limitation is applied in these models ie omit the text span longer than 8 We find that relaxing length constrain leads to performance drop Some recent works in OpenQA research directly introduce RC model to build a pure data driven pipline DrQA BIBREF6 is the earliest work that applies RC model in OpenQA task However its RC model is trained using typical RC dataset SQuAD BIBREF9 which turns to be overconfidence about its predicted results even if the candidate paragraphs contain no answer span R 3 BIBREF16 introduces 
a ranker model to rerank the original paragraph list so as to improve the input quality of the following RC model The training data of the RC model is solely limited to the paragraphs containing the answer span and the first appeared answer span location is chosen as the ground truth SharedNorm BIBREF8 applied a sharednorm trick which considers paragraphs without answer span in training RC models The trained RC model turns to be robust for the useless paragraphs and generates the lower span scores for them However it assumes that the start and the end positions of an answer span are independent which is not suitable for modeling multiple answer spans in one paragraph Therefore we realize that the existing OpenQA models rarely consider the differences between RC and OpenQA task In this paper we directly model the OpenQA task based on a probabilistic formulation in order to identify the useless paragraphs and utilize the multiple answer spans In OpenQA task the question Q and its answer string A are given Entering question Q into a search engine top K relevant paragraphs are returned denote as a list mathbf P P1dots PK The target of OpenQA is to find the maximum probability of PAQ mathbf P named answer probability for short We can see the following three characteristics of OpenQA 1 we cannot guarantee that paragraph retrieved by search engine contains the answer span for the question so the paragraphs without answer span have to be deleted when using the above RC models However these paragraphs are useful for distinguishing the quality of paragraphs in training More importantly the quality of a paragraph plays an important role in determining the answer probability in the inference phase It is clear that directly applying RC models fails to meet this requirement 2 only answer string is provided while the location of the answer string is unknown That means there may be many answer spans in the paragraph It is well known that traditional RC models are only valid for a single answer span To tackle this problem the authors of BIBREF7 propose a distantly supervised method to use the first exact match location of answer string in the paragraph as the groundtruth answer span However this method omit the valuable multiple answer spans information which may be important for the calculation of the answer probability 3 the start and end positions are coupled together to determine a specific answer span since there may be multiple answer spans However existing RC models usually assume that the start and end positions are independent Thats because there is only one answer span in the RC scenario This may introduce serious problem in the OpenQA task For example if we do not consider the relations between the start and end position the end position may be another answer spans end position instead of the one determined by the start position Therefore it is not appropriate to assume independence between start and end positions In this paper we propose to tackle the above three problems Firstly according to the law of total probability the answer probability can be rewritten as the following form PAQ mathbf P sum i1K PPiQ mathbf P PAQ Pi Eq 4 We name PPiQ mathbf P and PAQ Pi as the paragraph probability and conditional answer probability respectively We can see that the paragraph probability measures the quality of paragraph Pi across the list mathbf P while the conditional answer probability measures the probability that string A is an answer string given paragraph Pi The conditional answer probability can 
be treated as a function of multiple span probabilities lbrace PLjAQ Pirbrace j as shown in Eq 5 beginaligned
PAQ Pi mathcal Flbrace PLjAQ Pirbrace j
j in 1 mathcal LAPi
endaligned Eq 5 where the aggregation function mathcal F treats a list of spans mathcal LAPi as input and mathcal LAPi denotes the number of the text spans contain the string A A proper aggregation function makes use of all the answer spans information in OpenQA task Previous work BIBREF7 can be treated as a special case which uses a function of selecting first match span as the aggregation function mathcal F The span probability PLjAQ Pi represents the probability that a text span LjA in the paragraph Pi is an answer span We further decompose it into the product of location start probability PLsjAQ Pi and location end probability PLejAQ Pi LsjA shown in Eq 6 beginaligned
PLjAQ Pi PLsjAQ Pi
cdot PLejAQ Pi LsjA endaligned Eq 6 Some previous work such as DrQA BIBREF6 treats them as the two independently position classification tasks thus LsA and LeA are modeled by two different functions MatchLSTM BIBREF3 treats them as the pointer networks BIBREF10 The difference is that LeA is the function of the hidden state of LsA denote as mathbf Ms However LsA and LeA are still independent in probabilistic view because LeA depends on the hidden state mathbf Ms not the start position LsA In this paper the span positions LeA0 and LeA1 are determined by the question LeA2 and the paragraph LeA3 Specially end position LeA4 is also conditional on start position LeA5 directly With this conditional probability we can naturally remove the answer length limitation With above formulation we find that RC task is a special case of OpenQA task where we set the number of paragraph K to 1 set the paragraph probability to constant number 1 treat PAQPPLAQ P PLAQ PPLsAQ PPLeAQ P where P is the idealized paragraph that contain the answer string A and the right position LA is also known In this section we propose a Hierarchical Answer Span Model HASQA for OpenQA task based on the probabilistic view of OpenQA in Section Probabilistic Views of OpenQA HASQA has four components question aware context encoder conditional span predictor multiple spans aggregator and paragraph quality estimator We will introduce them one by one The question aware context embeddings mathbf C is generated by the context encoder while HASQA do not limit the use of context encoder We choose a simple but efficient context encoder in this paper It takes advantage of previous works BIBREF8 BIBREF3 which contains the characterlevel embedding enhancement the bidirectional attention mechanism BIBREF4 and the selfattention mechanism BIBREF12 We briefly describe the process below Word Embeddings use size 300 pretrained GloVe BIBREF17 word embeddings Char Embeddings encode characters in size 20 which are learnable Then obtain the embedding of each word by convolutional layer and max pooling layer Context Embeddings concatenate word embeddings and char embeddings and apply bidirectional GRU BIBREF18 to obtain the context embeddings Both question and paragraph get their own context embeddings Question Aware Context Embeddings use bidirectional attention mechanism from the BiDAF BIBREF4 to build question aware context embeddings Additionally we subsequently apply a layer of selfattention to get the final question aware context embeddings After the processes above we get the final question aware context embeddings denoted mathbf C in mathbb Rn times r where n is the length of the paragraph and r is size of the embedding Conditional span predictor defines the span probability for each text span in a paragraph using a conditional pointer network We first review the answer decoder in traditional RC models It mainly has two types two independently position classifiers IndCls and the pointer networks PtrNet Both of these approaches generate a distribution of start position mathbf ps in mathbb Rn and a distribution of end position mathbf pe in mathbb Rn where n is the length of the paragraph Starting from the context embeddings mathbf C two intermedia representations mathbf Ms in mathbb Rn times 2d and mathbf Me in mathbb Rn times 2d are generated using two bidirectional GRUs with the output dimension d mathbf Ms mathrm BiGRUmathbf C
textrm IndCls mathbf Me mathrm BiGRUmathbf C
textrm PtrNet mathbf Me mathrm BiGRUmathbf C mathbf Ms Eq 10 Then an additional Softmax function is used to generate the final positional distributions
beginaligned
mathbf ps mathrm softmaxmathbf Msws
mathbf pe mathrm softmaxmathbf Mewe
endaligned Eq 11 where ws we in mathbb R2d denotes the linear transformation parameters As mentioned in Section Probabilistic Views of OpenQA IndCls and PtrNet both treat start and end position as probabilistic independent Given the independent start and end positions can not distinguish the different answer spans in a paragraph properly so it is necessary to build a conditional model for them Therefore we proposed a conditional pointer network which directly feed the start position to the process of generating the end position
$$\begin{aligned}
\mathbf{M}^e_j &= \mathrm{BiGRU}(\mathbf{C}, \mathbf{M}^s, \mathrm{OneHot}(L^s_j))\\
\mathbf{p}^e_j &= \mathrm{softmax}(\mathbf{M}^e_j w^e)
\end{aligned} \qquad \textrm{(Eq. 12)}$$
where $L^s_j$ denotes the start position selected from the start positional distribution $\mathbf{p}^s$, and $\mathrm{OneHot}(\cdot)$ denotes the transformation from a position index to a one-hot vector. In the training phase, we are given the start and end positions of each answer span, denoted as $L^s_j$ and $L^e_j$. The span probability is
$$P(L_j(A) \mid Q, P_i) = s_j = \mathbf{p}^s[L^s_j] \cdot \mathbf{p}^e_j[L^e_j] \qquad \textrm{(Eq. 13)}$$
In the inference phase, we first select the start position $L^s_j$ from the start distribution $\mathbf{p}^s$. Then we yield its corresponding end distribution $\mathbf{p}^e_j$ using Eq. 12 and select the end position $L^e_j$ from it. Finally, we get the span probability using Eq. 13. The multiple span aggregator is used to build the relations among multiple answer spans and outputs the conditional answer probability. In this paper, we design four types of aggregation functions $\mathcal{F}$:
beginaligned
textrm HEAD PAQ Pi s1
textrm RAND PAQ Pi textrm Randomsj
textrm MAX PAQ Pi max jnolimits sj
textrm SUM PAQ Pi sum jnolimits sj
endaligned Eq 15 where sj denotes the span probability defined in Eq 13 s1 denotes the first match answer span and textrm Random denotes a stochastic function for randomly choosing an answer span Different aggregation functions represent different assumptions about the distribution of the oracle answer spans in a paragraph The oracle answer span represents the answer of the question that can be merely determined by its context eg in Figure 1 the first answer span fat is the oracle answer span while the second one is not because we could retrieval the answer directly if we have read concentrating body fat in their humps HEAD operation simply chooses the first match span probability as the conditional answer probability which simulates the answer preprocessing in previous works BIBREF16 BIBREF7 This function only encourages the first match answer span as the oracle while punishes the others It can be merely worked in a paragraph with definition such as first paragraph in WikiPedia RAND operation randomly chooses a span probability as the conditional answer probability This function assumes that all answer spans are equally important and must be treated as oracle However balancing the probabilities of answer spans is hard It can be used in paraphrasing answer spans appear in a list MAX operation chooses the maximum span probability as the conditional answer probability This function assumes that only one answer span is the oracle It can be used in a noisy paragraph especially for those retrieved by a search engine SUM operation sums all the span probabilities as the conditional answer probability This function assumes that one or more answer spans are the oracle It can be used in a broad range of scenarios for its relatively weak assumption In the training phase all annotated answer spans contain the same answer string A we directly apply the Eq 15 to obtain the conditional answer probability in paragraph level In the inference phase we treat the top K span probabilities sj as the input of the aggregation function However we have to check all possible start and end positions to get the precise top K span probabilities Instead we use a beam search strategy BIBREF19 which only consider the top K1 start positions and the top K2 end positions where K1 K2 ge K Different span probabilities sj represent variance answer strings At Following the definition in Eq 15 we group them by different answer strings respectively Paragraph quality estimator takes the useless paragraphs into consideration which implements the paragraph probability PPiQ mathbf P directly Firstly we use an attentionbased network to generate a quality score denotes as hatqi in order to measure the quality of the given paragraph Pi
beginaligned
mathbf Mc textrm BiGRUmathbf C
hatqi mathbf Mctop cdot mathbf ps cdot wc
endaligned Eq 17 where mathbf Mc in mathbb Rn times 2d is the intermedia representation obtained by applying bidirectional GRU on the context embedding mathbf C Then let start distribution mathbf ps in mathbb Rn as a key to attention mathbf Mc and transform it to 1d value using weight wc in mathbb R2d Finally we get the quality score hatqi Paragraph probabilities PPiQ mathbf P are generated by normalizing across mathbf P
PPiQ mathbf P qi fracexp hatqisum Pj in mathbf P exp hatqj Eq 18 In the training phase we conduct a negative sampling strategy with one negative sample for efficient training Thus a pair of paragraphs P as positive and P as negative are used to approximate q approx PPQ P P and q approx PPQ P P In the inference phase the probability qi is obtained by normalizing across all the retrieved paragraphs mathbf P h HASQA Model in Training Phase 1 Q question A answer string mathbf P retrieved paragraphs mathcal L loss function P P in mathbf P Get answer locations mathbf Ls mathbf Le for P Get the context embedding mathbf C Compute mathbf ps Eq 11 Lsj Lej in mathbf Ls mathbf Le P0 Compute P1 Eq 12 P2 P3 Apply function P4 Compute P5 in P6 Eq 17 Eq 18 P7 P8 h HASQA Model in Inference Phase 1 Q question mathbf P retrieved paragraphs Abest answer string Pi in mathbf P Get the context embedding mathbf C Compute mathbf ps Eq 11 Lsj in Top K1 mathbf ps psj leftarrow mathbf psLsj Compute mathbf pej Eq 12 Lejk in Top mathbf P0 mathbf P1 mathbf P2 mathbf P3 Group mathbf P4 by extracted answer string mathbf P5 Apply function mathbf P6 Compute mathbf P7 Eq 17 Normalize lbrace hatqirbrace get lbrace qirbrace Eq 18 SAt leftarrow sum i qi cdot pAti Abest leftarrow arg max SAt Above all we describe our model with Algorithm Paragraph Quality Estimator in the training phase and Algorithm Paragraph Quality Estimator in the inference phase We evaluate our model on three OpenQA datasets QuasarT BIBREF21 TriviaQA BIBREF7 and SearchQA BIBREF22 QuasarT consists of 43k opendomain trivia questions whose answers obtained from various internet sources ClueWeb09 BIBREF23 serves as the background corpus for providing evidences paragraphs We choose the Long version which is truncated to 2048 characters and 20 paragraphs for each question TriviaQA consists of 95k opendomain questionanswer pairs authored by trivia enthusiasts and independently gathered evidence documents from Bing Web Search and Wikipedia six per question on average We focus on the open domain setting contains unfiltered documents SearchQA is based on a Jeopardy questions and collects about top 50 web page snippets from Google search engine for each question As we can see in Table 1 there exist amounts of negative paragraphs which contains no answer span especially in TriviaQA and SearchQA For all datasets more than 4 answer spans averagely obtained per paragraph These statistics illustrate that problems mentioned above exist in OpenQA datasets For RC baseline models GA BIBREF11 BiDAF BIBREF4 and AQA BIBREF20 their experimental results are collected from published papers BIBREF22 BIBREF7 The DrQA BIBREF6 R 3 BIBREF16 and SharedNorm BIBREF8 are evaluated using their released code Our model adopts the same data preprocessing and question context encoder presented in BIBREF8 In training step we use the Adadelta optimizer BIBREF24 with the batch size of 30 and we choose the model performed the best on develop set The hidden dimension of GRU is 200 and the dropout ratio is 08 We use 300 dimensional word embeddings pretrained by GloVe released by BIBREF17 and do not finetune in training step Additionally 20 dimensional character embeddings are left as learnable parameters In inference step for baseline models we set the answer length limitation to 8 while for our models it is unlimited We analyze different answer length limitation settings in the Section UID31 The parameters of beam search are K13 and K21 The experimental results on three OpenQA datasets are shown in 
Table 2 It concludes as follow 1 HASQA outperforms traditional RC baselines with a large gap such as GA BiDAF AQA listed in the first part For example in QuasarT it improves 168 in EM score and 204 in F1 score As RC task is just a special case of OpenQA task Some experiments on standard SQuAD datasetdevset BIBREF9 show that HASQA yields EMF107190798 which is comparable with the best released single model Reinforced Mnemonic Reader BIBREF25 in the leaderboard devset EMF107210816 Our performance is slightly worse because Reinforced Mnemonic Reader directly use the accurate answer span while we use multiple distantly supervised answer spans That may introduce noises in the setting of SQuAD since only one span is accurate 2 HASQA outperforms recent OpenQA baselines such as DrQA R 3 and SharedNorm listed in the second part For example in QuasarT it improves 46 in EM score and 35 in F1 score In this subsection we analyze our model by answering the following finegrained analytic questions 1 What advantages does HASQA have via modeling answer span using the conditional pointer network 2 How much does HASQA gain from modeling multiple answer spans in a paragraph 3 How does the paragraph quality work in HASQA The following three parts are used to answer these questions respectively In order to demonstrate the effect of the conditional pointer networks we compare SharedNorm which uses pointer networks with our model Then we gradually remove the answer length limitation from restricting 4 words to 128 words until no limitation denote as infty Finally we draw the tendency of the EM performance and average predicted answer length according to the different answer length limitations As shown in Figure 3 TopLeft the performance of SharedNorm decreases when removing the answer length limitation while the performance of HASQA first increases then becomes stable In Figure 3 TopRight we find that the average predicted answer length increases in SharedNorm when removing the answer length limitation However our model stably keeps average about 18 words where the oracle average answer length is about 19 words Example in Figure 3 Bottom illustrates that startend pointers in SharedNorm search their own optimal positions independently such as two Louis in paragraph It leads to an unreasonable answer span prediction The effects of utilizing multiple answer spans lay into two aspects 1 choose the aggregation functions in training phase and 2 select the parameters of beam search in inference phase In the training phase we evaluate four types of aggregation functions introduced in Section Multiple Spans Aggregator The experimental results on QuasarT dataset shown in Table 3 demonstrate the superiority of SUM and MAX operations They take advantages of using multiple answer spans for training and improve about 6 10 in EM comparing to the HEAD operation The performance of MAX operation is a little better than the SUM operation The failure of RAND operation mainly comes down to the conflicting training samples Therefore simple way to make use of multiple answer spans may not improve the performance In the inference phase Table 4 shows the effects of parameters in beam search We find that the larger K1 yields the better performance while K2 seems irrelevant to the performance As a conclusion we choose the parameters K13 K21 to balance the performance and the speed The paragraph probability is efficient to measure the quality of paragraphs especially for that containing useless paragraphs Figure 4 Left shows that with the 
increasing number of given paragraphs which ordered by the rank of a search engine EM performance of HASQA sustainably grows However EM performance of SharedNorm stops increasing at about 15 paragraphs and our model without paragraph quality denotes PosOnly stops increasing at about 5 paragraphs So that with the help of paragraph probability model performance can be improved by adding more evidence paragraphs We also evaluate the Mean Average Precision MAP score between the predicted scores and the label whether a paragraph contains answer spans Figure 4 Right The paragraph probability in our model outperforms PosOnly and SharedNorm so that it can rank the high quality paragraphs in the front of the given paragraph list In this paper we point out three distinct characteristics of OpenQA which make it inappropriate to directly apply existing RC models to this task In order to tackle these problems we first propose a new probabilistic formulation of OpenQA where the answer probability is written as the question paragraph and span threelevel structure In this formulation RC can be treated as a special case Then Hierarchical Answer Spans Model HASQA is designed to implement this structure Specifically a paragraph quality estimator makes it robust for the paragraphs without answer spans a multiple span aggregator points out that it is necessary to combine the contributions of multiple answer spans in a paragraph and a conditional span predictor is proposed to model the dependence between the start and end positions of each answer span Experiments on public OpenQA datasets including QuasarT TriviaQA and SearchQA show that HASQA significantly outperforms traditional RC baselines and recent OpenQA baselines This work was funded by the National Natural Science Foundation of China NSFC under Grants No 61773362 61425016 61472401 61722211 and 61872338 the Youth Innovation Promotion Association CAS under Grants No 20144310 and 2016102 and the National Key RD Program of China under Grants No 2016QY02D0405 </s>
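The inference-time aggregation described above combines the normalized paragraph quality scores q_i (Eq. 18) with the span-level answer probabilities to rank candidate answer strings. The following is a minimal sketch of that scoring step, assuming the per-paragraph quality logits and the per-paragraph probability of each candidate answer string have already been computed by the reader; the function and variable names are illustrative, not the authors' released code.

import numpy as np

def rank_answers(quality_logits, span_probs):
    """Rank candidate answer strings across retrieved paragraphs.

    quality_logits: array of shape (n_paragraphs,), the unnormalized
        paragraph quality scores \hat{q}_i.
    span_probs: dict mapping a candidate answer string to an array of shape
        (n_paragraphs,), the probability p(A_t | P_i) of extracting that
        string from each paragraph (0 if the string does not occur there).
    """
    # Eq. 18: softmax-normalize paragraph quality across retrieved paragraphs.
    logits = np.asarray(quality_logits, dtype=float)
    q = np.exp(logits - logits.max())
    q /= q.sum()

    # S(A_t) = sum_i q_i * p(A_t | P_i): quality-weighted vote over paragraphs.
    scores = {ans: float(np.dot(q, p)) for ans, p in span_probs.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy usage with three retrieved paragraphs and two candidate answers.
best, scores = rank_answers(
    quality_logits=[2.0, 0.5, -1.0],
    span_probs={"louis xiv": np.array([0.6, 0.1, 0.0]),
                "louis": np.array([0.2, 0.3, 0.4])},
)

Answers supported by high-quality paragraphs are thus favored over spurious spans extracted from low-quality ones, which is the behavior shown in Figure 4.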
<s>[INST] Question Answering and Question Generation as Dual Tasks [/INST] Question answering QA and question generation QG are two fundamental tasks in natural language processing BIBREF0 BIBREF1 Both tasks involve reasoning between a question sequence q and an answer sentence a In this work we take answer sentence selection BIBREF2 as the QA task which is a fundamental QA task and is very important for many applications such as search engine and conversational bots The task of QA takes a question sentence q and a list of candidate answer sentences as the input and finds the top relevant answer sentence from the candidate list The task of QG takes a sentence a as input and generates a question sentence q which could be answered by a It is obvious that the input and the output of these two tasks are almost reverse which is referred to as duality in this paper This duality connects QA and QG and potentially could help these two tasks to improve each other Intuitively QA could improve QG through measuring the relevance between the generated question and the answer This QAspecific signal could enhance the QG model to generate not only literally similar question string but also the questions that could be answered by the answer In turn QG could improve QA by providing additional signal which stands for the probability of generating a question given the answer Moreover QA and QG have probabilistic correlation as both tasks relate to the joint probability between q and a Given a questionanswer pair langle q a rangle the joint probability Pq a can be computed in two equivalent ways Pq a Pa Pqa PqPaq Eq 1 The conditional distribution Pqa is exactly the QG model and the conditional distribution Paq is closely related to the QA model Existing studies typically learn the QA model and the QG model separately by minimizing their own loss functions while ignoring the probabilistic correlation between them Based on these considerations we introduce a training framework that exploits the duality of QA and QG to improve both tasks There might be different ways of exploiting the duality of QA and QG In this work we leverage the probabilistic correlation between QA and QG as the regularization term to influence the training process of both tasks Specifically the training objective of our framework is to jointly learn the QA model parameterized by theta qa and the QG model parameterized by theta qg by minimizing their loss functions subject to the following constraint Paa Pqatheta qg PqqPaqtheta qa Eq 3 Paa and Pqq are the language models for answer sentences and question sentences respectively We examine the effectiveness of our training criterion by applying it to strong neural network based QA and QG models Specifically we implement a generative QG model based on sequencesequence learning which takes an answer sentence as input and generates a question sentence in an endtoend fashion We implement a discriminative QA model based on recurrent neural network where both question and answer are represented as continuous vector in a sequential way As every component in the entire framework is differentiable all the parameters could be conventionally trained through back propagation We conduct experiments on three datasets BIBREF2 BIBREF3 BIBREF4 Empirical results show that our training framework improves both QA and QG tasks The improved QA model performs comparably with strong baseline approaches on all three datasets In this section we first formulate the task of QA and QG and then present the proposed 
algorithm for jointly training the QA and QG models We also describe the connections and differences between this work and existing studies This work involves two tasks namely question answering QA and question generation QG There are different kinds of QA tasks in natural language processing community In this work we take answer sentence selection BIBREF2 as the QA task which takes a question q and a list of candidate answer sentences A lbrace a1 a2 aArbrace as input and outputs one answer sentence ai from the candidate list which has the largest probability to be the answer This QA task is typically viewed as a ranking problem Our QA model is abbreviated as fqaaqtheta qa which is parameterized by theta qa and the output is a realvalued scalar The task of QG takes a sentence a as input and outputs a question q which could be answered by a In this work we regard QG as a generation problem and develop a generative model based on sequencetosequence learning Our QG model is abbreviated as Pqgqatheta qg which is parameterized by theta qg and the output is the probability of generating a natural language question q We describe the proposed algorithm in this subsection Overall the framework includes three components namely a QA model a QG model and a regularization term that reflects the duality of QA and QG Accordingly the training objective of our framework includes three parts which is described in Algorithm 1 The QA specific objective aims to minimize the loss function lqafqaaqtheta qa label where label is 0 or 1 that indicates whether a is the correct answer of q or not Since the goal of a QA model is to predict whether a questionanswer pair is correct or not it is necessary to use negative QA pairs whose labels are zero The details about the QA model will be presented in the next section For each correct questionanswer pair the QG specific objective is to minimize the following loss function lqgq a log Pqgqatheta qg Eq 6 where a is the correct answer of q The negative QA pairs are not necessary because the goal of a QG model is to generate the correct question for an answer The QG model will be described in the following section tb Algorithm Description Input Language models Paa and Pqq for answer and question respectively hyper parameters lambda q and lambda a optimizer opt Output QA model fqaaq parameterized by theta qa QG model Pqgqa parameterized by theta qg Randomly initialize theta qa and Pqq0 Get a minibatch of positive QA pairs Pqq1 where Pqq2 is the answer of Pqq3 Get a minibatch of negative QA pairs Pqq4 where Pqq5 is not the answer of Pqq6 Calculate the gradients for theta qa and theta qg nonumber Gqa triangledown theta qa frac1msum i 1mlqafqaapiqpitheta qa 1
+ l_{qa}(f_{qa}(a_n^i, q_n^i; \theta_{qa}), 0)
+ \lambda_a \, l_{dual}(a_p^i, q_p^i; \theta_{qa}, \theta_{qg}) \quad (Eq. 7)

G_{qg} = \nabla_{\theta_{qg}} \frac{1}{m} \sum_{i=1}^{m} \left[ l_{qg}(q_p^i, a_p^i) + \lambda_q \, l_{dual}(q_p^i, a_p^i; \theta_{qa}, \theta_{qg}) \right] \quad (Eq. 8)

Update \theta_{qa} and \theta_{qg}: \theta_{qa} \leftarrow opt(\theta_{qa}, G_{qa}), \theta_{qg} \leftarrow opt(\theta_{qg}, G_{qg}); repeat until the models converge.

The third objective is the regularization term, which satisfies the probabilistic duality constraint given in Equation 3. Specifically, given a correct \langle q, a \rangle pair, we would like to minimize the following loss function:

l_{dual}(a, q; \theta_{qa}, \theta_{qg}) = \left[ \log P_a(a) + \log P(q \mid a; \theta_{qg}) - \log P_q(q) - \log P(a \mid q; \theta_{qa}) \right]^2 \quad (Eq. 9)

where P_a(a) and P_q(q) are marginal distributions, which can easily be obtained through language models. P(q \mid a; \theta_{qg}) can also be easily calculated with the chain rule, P(q \mid a; \theta_{qg}) = \prod_{t=1}^{|q|} P(q_t \mid q_{<t}, a; \theta_{qg}), where P(q_t \mid q_{<t}, a; \theta_{qg}) is the same as the decoder of the QG model detailed in the following section. However, the conditional probability P(a \mid q; \theta_{qa}) is different from the output of the QA model f_{qa}(a, q; \theta_{qa}). To address this, given a question q, we sample a set of answer sentences A^{\prime} and derive the conditional probability P(a \mid q; \theta_{qa}) from our QA model with the following equation:

P(a \mid q; \theta_{qa}) =
dfracexpfqaaqtheta qaexpfqaaqtheta qa sum aprime in Aprime expfqaaprime qtheta qa Eq 10 In this way we learn the models of QA and QG by minimizing the weighted combination between the original loss functions and the regularization term Our work differs from BIBREF5 in that they regard reading comprehension RC as the main task and regard question generation as the auxiliary task to boost the main task RC In our work the roles of QA and QG are the same and our algorithm enables QA and QG to improve the performance of each other simultaneously Our approach differs from Generative DomainAdaptive Nets BIBREF5 in that we do not pretrain the QA model Our QA and QG models are jointly learned from random initialization Moreover our QA task differs from RC in that the answer in our task is a sentence rather than a text span from a sentence Our approach is inspired by dual learning BIBREF6 BIBREF7 which leverages the duality between two tasks to improve each other Different from the dual learning BIBREF6 paradigm our framework learns both models from scratch and does not need taskspecific pretraining The recently introduced supervised dual learning BIBREF7 has been successfully applied to image recognition machine translation and sentiment analysis Our work could be viewed as the first work that leveraging the idea of supervised dual learning for question answering Our approach differs from Generative Adversarial Nets GAN BIBREF8 in two respects On one hand the goal of original GAN is to learn a powerful generator while the discriminative task is regarded as the auxiliary task The roles of the two tasks in our framework are the same On the other hand the discriminative task of GAN aims to distinguish between the real data and the artificially generated data while we focus on the real QA task We describe the details of the question answer QA model in this section Overall a QA model could be formulated as a function fqaq atheta qa parameterized by theta qa that maps a questionanswer pair to a scalar In the inference process given a q and a list of candidate answer sentences fqaq atheta qa is used to calculate the relevance between q and every candidate a The top ranked answer sentence is regarded as the output We develop a neural network based QA model Specifically we first represent each word as a low dimensional and realvalued vector also known as word embedding BIBREF9 BIBREF10 BIBREF11 Afterwards we use recurrent neural network RNN to map a question of variable length to a fixedlength vector To avoid the problem of gradient vanishing we use gated recurrent unit GRU BIBREF12 as the basic computation unit The approach recursively calculates the hidden vector ht based on the current word vector eqt and the output vector ht1 in the last time step zi sigma Wzeqi Uzhi1
r_i = \sigma(W_r e_{q_i} + U_r h_{i-1})
\widetilde{h}_i = \tanh(W_h e_{q_i} + U_h (r_i \odot h_{i-1}))
hi zi odot widetildehi 1zi odot hi1 Eq 12 where zi and ri are update and reset gates of s odot stands for elementwise multiplication sigma is sigmoid function We use a bidirectional RNN to get the meaning of a question from both directions and use the concatenation of two last hidden states as the final question vector vq We compute the answer sentence vector va in the same way After obtaining vq and va we implement a simple yet effective way to calculate the relevance between questionsentence pair Specifically we represent a questionanswer pair as the concatenation of four vectors namely vq a vq va vq odot va ecqa where odot means elementwise multiplication cqa is the number of cooccurred words in q and a We observe that incorporating the embedding of the word cooccurrence eccqa could empirically improve the QA performance We use an additional embedding matrix Lc in mathbb Rdc times Vc where dc is the dimension of word cooccurrence vector and va0 is vocabulary size The values of va1 are jointly learned during training The output scalar va2 is calculated by feeding va3 to a linear layer followed by va4 We feed va5 to a va6 layer and use negative loglikelihood as the QA specific loss function The basic idea of this objective is to classify whether a given questionanswer is correct or not We also implemented a ranking based loss function va7 whose basic idea is to assign the correct QA pair a higher score than a randomly select QA pair However our empirical results showed that the ranking loss performed worse than the negative loglikelihood loss function We use loglikelihood as the QA loss function in the experiment We describe the question generation QG model in this section The model is inspired by the recent success of sequencetosequence learning in neural machine translation Specifically the QG model first calculates the representation of the answer sentence with an encoder and then takes the answer vector to generate a question in a sequential way with a decoder We will present the details of the encoder and the decoder respectively The goal of the encoder is to represent a variablelength answer sentence a as a fixedlength continuous vector The encoder could be implemented with different neural network architectures such as convolutional neural network BIBREF13 BIBREF14 and recurrent neural network RNN BIBREF15 BIBREF16 In this work we use bidirectional RNN based on GRU unit which is consistent with our QA model as described in Section 3 The concatenation of the last hidden vectors from both directions is used as the output of the encoder which is also used as the initial hidden state of the decoder The decoder takes the output of the encoder and generates the question sentence We implement a RNN based decoder which works in a sequential way and generates one question word at each time step The decoder generates a word qt at each time step t based on the representation of a and the previously predicted question words qtlbrace q1q2qt1rbrace This process is formulated as follows pqaprod qt1pqtqta Eq 14 Specifically we use an attentionbased architecture BIBREF17 which selectively finds relevant information from the answer sentence when generating the question word Therefore the conditional probability is calculated as follows pqtqtafdecqt1st ct Eq 15 where st is the hidden state of GRU based RNN at time step t and ct is the attention state at time step t The attention mechanism assigns a probabilityweight to each hidden state in the encoder at one time step and calculates the attention 
state ct through weighted averaging the hidden states of the encoder ctsum ai1alpha langle tirangle hi When calculating the attention weight of hi at time step t we also take into account of the attention distribution in the last time step Potentially the model could remember which contexts from answer sentence have been used before and does not repeatedly use these words to generate the question words alpha langle tirangle fracexp zsthisum Nj1alpha langle t1jrangle hjsum Hiprime 1exp zsthiprime sum Nj1alpha langle t1jrangle hj Eq 16 Afterwards we feed the concatenation of st and ct to a linear layer followed by a softmax function The output dimension of the softmax layer is equal to the number of top frequent question words eg 30K or 50K in the training data The output values of the softmax layer form the probability distribution of the question words to be generated Furthermore we observe that question sentences typically include informative but lowfrequency words such as named entities or numbers These lowfrequency words are closely related to the answer sentence but could not be well covered in the target vocabulary To address this we add a simple yet effective postprocessing step which replaces each unknown word with the most relevant word from the answer sentence Following BIBREF18 we use the attention probability as the relevance score of each word from the answer sentence Copying mechanism BIBREF19 BIBREF20 is an alternative solution that adaptively determines whether the generated word comes from the target vocabulary or from the answer sentence Since every component of the QG model is differentiable all the parameters could be learned in an endtoend way with back propagation Given a questionanswer pair langle qarangle where a is the correct answer of the question q the training objective is to minimize the following negative loglikelihood lqgqasum qt1log pytyta Eq 17 In the inference process we use beam search to get the top K confident results where K is the beam size The inference process stops when the model generates the symbol langle eos rangle which stands for the end of sentence We describe the experimental setting and report empirical results in this section We conduct experiments on three datasets including MARCO BIBREF4 SQUAD BIBREF3 and WikiQA BIBREF2 The MARCO and SQUAD datasets are originally developed for the reading comprehension RC task the goal of which is to answer a question with a text span from a document Despite our QA task answer sentence selection is different from RC we use these two datasets because of two reasons The first reason is that to our knowledge they are the QA datasets that contains largest manually labeled questionanswer pairs The second reason is that we could derive two QA datasets for answer sentence selection from the original MARCO and SQUAD datasets with an assumption that the answer sentences containing the correct answer span are correct and vice versa We believe that our training framework could be easily applied to RC task but we that is out of the focus of this work We also conduct experiments on WikiQA BIBREF2 which is a benchmark dataset for answer sentence selection Despite its data size is relatively smaller compared with MARCO and SQUAD we still apply our algorithm on this data and report empirical results to further compare with existing algorithms It is worth to note that a common characteristic of MARCO and SQUAD is that the ground truth of the test is invisible to the public Therefore we randomly split the original 
validation set into the dev set and the test set The statistics of SQUAD and MARCO datasets are given in Table 1 We use the official split of the WikiQA dataset We apply exactly the same model to these three datasets We evaluate our QA system with three standard evaluation metrics Mean Average Precision MAP Mean Reciprocal Rank MRR and Precision1 P1 BIBREF23 It is hard to find a perfect way to automatically evaluate the performance of a QG system In this work we use BLEU4 BIBREF24 score as the evaluation metric which measures the overlap between the generated question and the ground truth We train the parameters of the QA model and the QG model simultaneously We randomly initialize the parameters in both models with a combination of the fanin and fanout BIBREF25 The parameters of word embedding matrices are shared in the QA model and the QG model In order to learn question and answer specific word meanings we use two different embedding matrices for question words and answer words The vocabularies are the most frequent 30K words from the questions and answers in the training data We set the dimension of word embedding as 300 the hidden length of encoder and decoder in the QG model as 512 the hidden length of GRU in the QA model as 100 the dimension of word cooccurrence embedding as 10 the vocabulary size of the word cooccurrence embedding as 10 the hidden length of the attention layer as 30 We initialize the learning rate as 20 and use AdaDelta BIBREF26 to adaptively decrease the learning rate We use minibatch training and empirically set the batch size as 64 The sampled answer sentences do not come from the same passage We get 10 batches 640 instances and sort them by answer length for accelerating the training process The negative samples come from these 640 instances which are from different passages In this work we use smoothed bigram language models as paa and pqq We also tried trigram language model but did not get improved performance Alternatively one could also implement neural language model and jointly learn the parameters in the training process We first report results on the MARCO and SQUAD datasets As the dataset is splitted by ourselves we do not have previously reported results for comparison We compare with the following four baseline methods It has been proven that word cooccurrence is a very simple yet effective feature for this task BIBREF2 BIBREF22 so the first two baselines are based on the word cooccurrence between a question sentence and the candidate answer sentence WordCnt and WgtWordCnt use unnormalized and normalized word cooccurrence The ranker in these two baselines are trained with with FastTree which performs better than SVMRank and linear regression in our experiments We also compare with CDSSM BIBREF21 which is a very strong neural network approach to model the semantic relatedness of a sentence pair We further compare with ABCNN BIBREF22 which has been proven very powerful in various sentence matching tasks Basic QA is our QA model which does not use the duality between QA and QG Our ultimate model is abbreviated as Dual QA The QA performance on MARCO and SQUAD datasets are given in Table 2 We can find that CDSSM performs better than the word cooccurrence based method on MARCO dataset On the SQUAD dataset Dual QA achieves the best performance among all these methods On the MARCO dataset Dual QA performs comparably with ABCNN We can find that Dual QA still yields better accuracy than Basic QA which shows the effectiveness of the joint training algorithm It 
is interesting that word cooccurrence based method WgtWordCnt is very strong and hard to beat on the MARCO dataset Incorporating sophisticated features might obtain improved performance on both datasets however this is not the focus of this work and we leave it to future work Results on the WikiQA dataset is given in Table 3 On this dataset previous studies typically report results based on their deep features plus the number of words that occur both in the question and in the answer BIBREF2 BIBREF22 We also follow this experimental protocol We can find that our basic QA model is simple yet effective The Dual QA model achieves comparably to strong baseline methods To give a quantitative evaluation of our training framework on the QG model we report BLEU4 scores on MARCO and SQUAD datasets The results of our QG model with or without using joint training are given in Table 5 We can find that despite the overall BLEU4 scores are relatively low using our training algorithm could improve the performance of the QG model We would like to investigate how the joint training process improves the QA and QG models To this end we analyze the results of development set on the SQUAD dataset We randomly sample several cases that the Basic QA model gets the wrong answers while the Dual QA model obtains the correct results Examples are given in Table 4 From these examples we can find that the questions generated by Dual QG tend to have more word overlap with the correct question despite sometimes the point of the question is not correct For example compared with the Basic QG model the Dual QG model generates more informative words such as green in the first example purpose in the second example and how much in the third example We believe this helps QA because the QA model is trained to assign a higher score to the question which looks similar with the generated question It also helps QG because the QA model is trained to give a higher score to the real questionanswer pair so that generating more answeralike words gives the generated question a higher QA score Despite the proposed training framework obtains some improvements on QA and QG we believe the work could be further improved from several directions We find that our QG model not always finds the point of the reference question This is not surprising because the questions from these two reading comprehension datasets only focus on some spans of a sentence rather than the entire sentence Therefore the source side answer sentence carries more information than the target side question sentence Moreover we do not use the answer position information in our QG model Accordingly the model may pay attention to the point which is different from the annotators direction and generates totally different questions We are aware of incorporating the position of the answer span could get improved performance BIBREF29 however the focus of this work is a sentence level QA task rather than reading comprehension Therefore despite MARCO and SQUAD are of large scale they are not the desirable datasets for investigating the duality of our QA and QG tasks Pushing forward this area also requires large scale sentence level QA datasets We would like to discuss our understanding about the duality of QA and QG and also present our observations based on the experiments In this work duality means that the QA task and the QG task are equally important This characteristic makes our work different from Generative DomainAdaptive Nets BIBREF5 and Generative Adversarial Nets GAN BIBREF8 
both of which have a main task and regard another task as the auxiliary one There are different ways to leverage the duality of QA and QG to improve both tasks We categorize them into two groups The first group is about the training process and the second group is about the inference process From this perspective dual learning BIBREF6 is a solution that leverages the duality in the training process In particular dual learning first pretrains the models for two tasks separately and then iteratively finetunes the models Our work also belongs to the first group Our approach uses the duality as a regularization item to guide the learning of QA and QG models simultaneously from scratch After the QA and QG models are trained we could also use the duality to improve the inference process which falls into the second group The process could be conducted on separately trained models or the models that jointly trained with our approach This is reasonable because the QA model could directly add one feature to consider q and qprime where qprime is the question generated by the QG model The first example in Table 4 also motivates this direction Similarly the QA model could give each langle qprime a rangle a score which could be assigned to each generated question qprime In this work we do not apply the duality in the inference process We leave it as a future plan This work could be improved by refining every component involved in our framework For example we use a simple yet effective QA model which could be improved by using more complex neural network architectures BIBREF30 BIBREF22 or more external resources We use a smoothed language model for both question and answer sentences which could be replaced by designed neural language models whose parameters are jointly learned together with the parameters in QA and QG models The QG model could be improved as well for example by developing more complex neural network architectures to take into account of more information about the answer sentence in the generation process In addition it is also very important to investigate an automatic evaluation metric to effectively measure the performance of a QG system BLEU score only measures the literal similarity between the generated question and the ground truth However it does not measure whether the question really looks like a question or not A desirable evaluation system should also have the ability to judge whether the generated question could be answered by input sentence even if the generated question use totally different words to express the meaning Our work relates to existing studies on question answering QA and question generation QG There are different types of QA tasks including textlevel QA BIBREF31 knowledge based QA BIBREF32 community based QA BIBREF33 and the reading comprehension BIBREF3 BIBREF4 Our work belongs to text based QA where the answer is a sentence In recent years neural network approaches BIBREF30 BIBREF31 BIBREF22 show promising ability in modeling the semantic relation between sentences and achieve strong performances on QA tasks Question generation also draws a lot of attentions in recent years QG is very necessary in real application as it is always time consuming to create largescale QA datasets In literature BIBREF34 use Minimal Recursion Semantics MRS to represent the meaning of a sentence and then realize the MSR structure into a natural language question BIBREF35 present a overgenerateandrank framework consisting of three stages They first transform a sentence into a 
simpler declarative statement and then transform the statement to candidate questions by executing welldefined syntactic transformations Finally a ranker is used to select the questions of highquality BIBREF36 focus on generating questions from a topic They first get a list of texts related to the topic and then generate questions by exploiting the named entity information and the predicate argument structures of the texts BIBREF37 propose an ontologycrowdrelevance approach to generate questions from novel text They encode the original text in a lowdimensional ontology and then align the question templates obtained via crowdsourcing to that space A final ranker is used to select the top relevant templates There also exists some studies on generating questions from knowledge base BIBREF38 BIBREF39 For example BIBREF39 develop a neural network approach which takes a knowledge fact including a subject an object and a predicate as input and generates the question with a recurrent neural network Recent studies also investigate question generation for the reading comprehension task BIBREF40 BIBREF29 The approaches are typically based on the encoderdecoder framework which could be conventionally learned in an endtoend way As the answer is a text span from the sentencepassage it is helpful to incorporate the position of the answer span BIBREF29 In addition the computer vision community also pays attention to generating natural language questions about an image BIBREF41 We focus on jointly training the question answering QA model and the question generation QG model in this paper We exploit the duality of QA and QG tasks and introduce a training framework to leverage the probabilistic correlation between the two tasks In our approach the duality is used as a regularization term to influence the learning of QA and QG models We implement simple yet effective QA and QG models both of which are neural network based approaches Experimental results show that the proposed training framework improves both QA and QG on three datasets </s>
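As a concrete illustration of the regularization term described above (Eqs. 9-10), the sketch below computes the duality loss for a single question-answer pair from precomputed quantities. It assumes the QA scores f_qa(a', q) over the sampled answer set A', the language-model log-probabilities, and the QG decoder log-probability are already available; all names are illustrative rather than taken from the authors' implementation.

import math

def dual_loss(log_pa_a, log_pq_q, log_qg_q_given_a, qa_scores, correct_index):
    """Duality regularizer for one (q, a) pair (Eq. 9).

    log_pa_a:          log P_a(a), answer language-model log-probability.
    log_pq_q:          log P_q(q), question language-model log-probability.
    log_qg_q_given_a:  log P(q | a; theta_qg) from the QG decoder.
    qa_scores:         f_qa(a', q; theta_qa) over the sampled answers A'
                       together with the correct answer a.
    correct_index:     position of the correct answer a in qa_scores.
    """
    # Eq. 10: turn the QA scores into a conditional probability P(a | q).
    m = max(qa_scores)
    z = sum(math.exp(s - m) for s in qa_scores)
    log_pa_given_q = (qa_scores[correct_index] - m) - math.log(z)

    # Eq. 9: squared violation of log P_a(a) + log P(q|a) = log P_q(q) + log P(a|q).
    gap = (log_pa_a + log_qg_q_given_a) - (log_pq_q + log_pa_given_q)
    return gap ** 2

# Example: this term is added to the QA and QG losses with weights
# lambda_a and lambda_q before computing the gradients of Eqs. 7-8.
loss = dual_loss(log_pa_a=-12.3, log_pq_q=-9.8, log_qg_q_given_a=-7.1,
                 qa_scores=[2.4, 0.3, -1.1], correct_index=0)

Because the regularizer is a plain squared penalty on the gap between the two factorizations of the joint probability, it is differentiable with respect to both theta_qa and theta_qg and can be backpropagated through both models jointly.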
<s>[INST] Multimodal Word Distributions [/INST] To model language we must represent words We can imagine representing every word with a binary onehot vector corresponding to a dictionary position But such a representation contains no valuable semantic information distances between word vectors represent only differences in alphabetic ordering Modern approaches by contrast learn to map words with similar meanings to nearby points in a vector space BIBREF0 from large datasets such as Wikipedia These learned word embeddings have become ubiquitous in predictive tasks BIBREF1 recently proposed an alternative view where words are represented by a whole probability distribution instead of a deterministic point vector Specifically they model each word by a Gaussian distribution and learn its mean and covariance matrix from data This approach generalizes any deterministic point embedding which can be fully captured by the mean vector of the Gaussian distribution Moreover the full distribution provides much richer information than point estimates for characterizing words representing probability mass and uncertainty across a set of semantics However since a Gaussian distribution can have only one mode the learned uncertainty in this representation can be overly diffuse for words with multiple distinct meanings polysemies in order for the model to assign some density to any plausible semantics BIBREF1 Moreover the mean of the Gaussian can be pulled in many opposing directions leading to a biased distribution that centers its mass mostly around one meaning while leaving the others not well represented In this paper we propose to represent each word with an expressive multimodal distribution for multiple distinct meanings entailment heavy tailed uncertainty and enhanced interpretability For example one mode of the word bank could overlap with distributions for words such as finance and money and another mode could overlap with the distributions for river and creek It is our contention that such flexibility is critical for both qualitatively learning about the meanings of words and for optimal performance on many predictive tasks In particular we model each word with a mixture of Gaussians Section Word Representation We learn all the parameters of this mixture model using a maximum margin energybased ranking objective BIBREF2 BIBREF1 Section Discussion where the energy function describes the affinity between a pair of words For analytic tractability with Gaussian mixtures we use the inner product between probability distributions in a Hilbert space known as the expected likelihood kernel BIBREF3 as our energy function Section Energy Function Additionally we propose transformations for numerical stability and initialization Implementation resulting in a robust straightforward and scalable learning procedure capable of training on a corpus with billions of words in days We show that the model is able to automatically discover multiple meanings for words Section Word Representation7 and significantly outperform other alternative methods across several tasks such as word similarity and entailment Section Word Similarity Word Similarity for Polysemous Words Word Entailment We have made code available at httpgithubcombenathiword2gm where we implement our model in Tensorflow tensorflow In the past decade there has been an explosion of interest in word vector representations word2vec arguably the most popular word embedding uses continuous bag of words and skipgram models in conjunction with negative sampling 
for efficient conditional probability estimation BIBREF0 BIBREF4 Other popular approaches use feedforward BIBREF5 and recurrent neural network language models BIBREF6 BIBREF7 BIBREF8 to predict missing words in sentences producing hidden layers that can act as word embeddings that encode semantic information They employ conditional probability estimation techniques including hierarchical softmax BIBREF9 BIBREF10 BIBREF11 and noise contrastive estimation BIBREF12 A different approach to learning word embeddings is through factorization of word cooccurrence matrices such as GloVe embeddings BIBREF13 The matrix factorization approach has been shown to have an implicit connection with skipgram and negative sampling BIBREF14 Bayesian matrix factorization where row and columns are modeled as Gaussians has been explored in BIBREF15 and provides a different probabilistic perspective of word embeddings In exciting recent work BIBREF1 propose a Gaussian distribution to model each word Their approach is significantly more expressive than typical point embeddings with the ability to represent concepts such as entailment by having the distribution for one word eg music encompass the distributions for sets of related words jazz and pop However with a unimodal distribution their approach cannot capture multiple distinct meanings much like most deterministic approaches Recent work has also proposed deterministic embeddings that can capture polysemies for example through a cluster centroid of context vectors BIBREF16 or an adapted skipgram model with an EM algorithm to learn multiple latent representations per word BIBREF17 BIBREF18 also extends skipgram with multiple prototype embeddings where the number of senses per word is determined by a nonparametric approach BIBREF19 learns topical embeddings based on latent topic models where each word is associated with multiple topics Another related work by BIBREF20 models embeddings in infinitedimensional space where each embedding can gradually represent incremental word sense if complex meanings are observed Probabilistic word embeddings have only recently begun to be explored and have so far shown great promise In this paper we propose to the best of our knowledge the first probabilistic word embedding that can capture multiple meanings We use a Gaussian mixture model which allows for a highly expressive distributions over words At the same time we retain scalability and analytic tractability with an expected likelihood kernel energy function for training The model and training procedure harmonize to learn descriptive representations of words with superior performance on several benchmarks In this section we introduce our Gaussian mixture GM model for word representations and present a training method to learn the parameters of the Gaussian mixture This method uses an energybased maximum margin objective where we wish to maximize the similarity of distributions of nearby words in sentences We propose an energy function that compliments the GM model by retaining analytic tractability We also provide critical practical details for numerical stability hyperparameters and initialization We represent each word w in a dictionary as a Gaussian mixture with K components Specifically the distribution of w fw is given by the density fwvecx sum i1K pwi mathcal Nleft vecx vecmu wi Sigma wi right
sum i1K fracpwi sqrt2 pi Sigma wi efrac12 vecx vecmu witop Sigma wi1 vecx vecmu wi Eq 2 where sum i1K pwi 1 The mean vectors vecmu wi represent the location of the ith component of word w and are akin to the point embeddings provided by popular approaches like word2vec pwi represents the component probability mixture weight and Sigma wi is the component covariance matrix containing uncertainty information Our goal is to learn all of the model parameters vecmu wi pwi Sigma wi from a corpus of natural sentences to extract semantic information of words Each Gaussian components mean vector of word w can represent one of the words distinct meanings For instance one component of a polysemous word such as rock should represent the meaning related to stone or pebbles whereas another component should represent the meaning related to music such as jazz or pop Figure 1 illustrates our word embedding model and the difference between multimodal and unimodal representations for words with multiple meanings The training objective for learning theta lbrace vecmu wi pwi Sigma wirbrace draws inspiration from the continuous skipgram model BIBREF0 where word embeddings are trained to maximize the probability of observing a word given another nearby word This procedure follows the distributional hypothesis that words occurring in natural contexts tend to be semantically related For instance the words jazz and music tend to occur near one another more often than jazz and cat hence jazz and music are more likely to be related The learned word representation contains useful semantic information and can be used to perform a variety of NLP tasks such as word similarity analysis sentiment classification modelling word analogies or as a preprocessed input for complex system such as statistical machine translation Each sample in the objective consists of two pairs of words wc and wcprime w is sampled from a sentence in a corpus and c is a nearby word within a context window of length ell For instance a word w jazz which occurs in the sentence I listen to jazz music has context words I listen to music cprime is a negative context word eg airplane obtained from random sampling The objective is to maximize the energy between words that occur near each other w and c and minimize the energy between w and its negative context cprime This approach is similar to negative sampling BIBREF0 BIBREF4 which contrasts the dot product between positive context pairs with negative context pairs The energy function is a measure of similarity between distributions and will be discussed in Section Energy Function We use a maxmargin ranking objective BIBREF2 used for Gaussian embeddings in BIBREF1 which pushes the similarity of a word and its positive context higher than that of its negative context by a margin m nonumber Ltheta w c cprime max 0
nonumber m log Etheta w c log Etheta w cprime Eq 6 This objective can be minimized by minibatch stochastic gradient descent with respect to the parameters theta lbrace vecmu wi pwi Sigma wirbrace the mean vectors covariance matrices and mixture weights of our multimodal embedding in Eq 2 We use a word sampling scheme similar to the implementation in word2vec BIBREF0 BIBREF4 to balance the importance of frequent words and rare words Frequent words such as the a to are not as meaningful as relatively less frequent words such as dog love rock and we are often more interested in learning the semantics of the less frequently observed words We use subsampling to improve the performance of learning word vectors BIBREF4 This technique discards word wi with probability Pwi 1 sqrttfwi where fwi is the frequency of word wi in the training corpus and t is a frequency threshold To generate negative context words each word type wi is sampled according to a distribution Pnwi propto Uwi34 which is a distorted version of the unigram distribution Uwi that also serves to diminish the relative importance of frequent words Both subsampling and the negative distribution choice are proven effective in word2vec training BIBREF4 For vector representations of words a usual choice for similarity measure energy function is a dot product between two vectors Our word representations are distributions instead of point vectors and therefore need a measure that reflects not only the point similarity but also the uncertainty We propose to use the expected likelihood kernel which is a generalization of an inner product between vectors to an inner product between distributions BIBREF3 That is
E(f, g) = \int f(x) \, g(x) \, dx = \langle f, g \rangle_{L_2}
where langle cdot cdot rangle L2 denotes the inner product in Hilbert space L2 We choose this form of energy since it can be evaluated in a closed form given our choice of probabilistic embedding in Eq 2 For Gaussian mixtures fg representing the words wf wg fx sum i1K pi mathcal Nx vecmu fi Sigma fi and gx sum i1K qi mathcal Nx vecmu gi Sigma gi sum i 1K pi 1 and sum i 1K qi 1 we find see Section Derivation of Expected Likelihood Kernel the log energy is
\log E_\theta(f, g) = \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j \, e^{\xi_{i,j}} \quad (Eq. 9)

where

\xi_{i,j} \equiv \log \mathcal{N}(0; \vec{\mu}_{f,i} - \vec{\mu}_{g,j}, \Sigma_{f,i} + \Sigma_{g,j}) = -\frac{1}{2} \log \det(\Sigma_{f,i} + \Sigma_{g,j}) - \frac{D}{2} \log(2\pi) - \frac{1}{2} (\vec{\mu}_{f,i} - \vec{\mu}_{g,j})^\top (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\vec{\mu}_{f,i} - \vec{\mu}_{g,j}) \quad (Eq. 10)

We call the term \xi_{i,j} the partial (log) energy. Observe that this term captures the similarity between the i-th meaning of word w_f and the j-th meaning of word w_g. The total energy in Equation 9 is the sum over all pairs of partial energies, weighted accordingly by the mixture probabilities p_i and q_j. The quadratic term -\frac{1}{2} (\vec{\mu}_{f,i} - \vec{\mu}_{g,j})^\top (\Sigma_{f,i} + \Sigma_{g,j})^{-1} (\vec{\mu}_{f,i} - \vec{\mu}_{g,j}) in \xi_{i,j} reflects the difference between the mean vectors of the semantic pair (w_f, i) and (w_g, j). If the semantic uncertainty (covariance) of both pairs is low, this term has more importance relative to the other terms, due to the inverse covariance scaling. We observe that the loss function L_\theta in Section Discussion attains a low value when E_\theta(w, c) is relatively high. High values of E_\theta(w, c) can be achieved when the component means across different words, \vec{\mu}_{f,i} and \vec{\mu}_{g,j}, are close together (e.g., similar point representations). High energy can also be achieved by large values of \Sigma_{f,i} and \Sigma_{g,j}, which wash out the importance of the mean vector difference. The log-determinant term serves as a regularizer that prevents the covariances from being pushed too high at the expense of learning a good mean embedding. At the beginning of training, the \xi_{i,j} are roughly on the same scale across all pairs (i, j); during this time, all components learn equally from the word occurrences. As training progresses and the semantic representation of each mixture component becomes clearer, one of the \xi_{i,j} can become predominantly higher than the others, giving rise to the semantic pair that is most related. The negative KL divergence is another sensible choice of energy function, providing an asymmetric measure between word distributions. However, unlike the expected likelihood kernel, the KL divergence does not have a closed form if the two distributions are Gaussian mixtures. We have introduced a model for multi-prototype embeddings, which expressively captures word meanings with whole probability distributions. We show that our combination of energy and objective functions, proposed in Section Skip-Gram, enables one to learn interpretable multimodal distributions through unsupervised training for describing words with multiple distinct meanings. By representing multiple distinct meanings, our model also reduces the unnecessarily large variance of a Gaussian embedding model and has improved results on word entailment tasks. To learn the parameters of the proposed mixture model, we train on a concatenation of two datasets: UKWAC (2.5 billion tokens) and Wackypedia (1 billion tokens) BIBREF21. We discard words that occur fewer than 100 times in the corpus, which results in a vocabulary size of 314,129 words. Our word sampling scheme, described at the end of Section Qualitative Evaluation, is similar to that of word2vec, with one negative context word for each positive context word. After training, we obtain learned parameters \lbrace \vec{\mu}_{w,i}, \Sigma_{w,i}, p_{w,i} \rbrace_{i=1}^{K} for each word w. We treat the mean vector \vec{\mu}_{w,i} as the embedding of the i-th mixture component, with the covariance matrix \Sigma_{w,i} representing its subtlety and uncertainty. We perform a qualitative evaluation to show that our embeddings learn meaningful multi-prototype representations, and compare to existing models using a quantitative evaluation on word similarity datasets and word entailment. We name our model Word to Gaussian Mixture (w2gm), in contrast to Word to Gaussian (w2g) BIBREF1. Unless stated otherwise, w2g refers to our implementation of the w2gm model with one mixture
component Unless stated otherwise we experiment with K2 components for the w2gm model but we have results and discussion of K3 at the end of section 43 We primarily consider the spherical case for computational efficiency We note that for diagonal or spherical covariances the energy can be computed very efficiently since the matrix inversion would simply require mathcal Od computation instead of mathcal Od3 for a full matrix Empirically we have found diagonal covariance matrices become roughly spherical after training Indeed for these relatively high dimensional embeddings there are sufficient degrees of freedom for the mean vectors to be learned such that the covariance matrices need not be asymmetric Therefore we perform all evaluations with spherical covariance models Models used for evaluation have dimension D50 and use context window ell 10 unless stated otherwise We provide additional hyperparameters and training details in the supplementary material Implementation Since our word embeddings contain multiple vectors and uncertainty parameters per word we use the following measures that generalizes similarity scores These measures pick out the component pair with maximum similarity and therefore determine the meanings that are most relevant A natural choice for a similarity score is the expected likelihood kernel an inner product between distributions which we discussed in Section Energy Function This metric incorporates the uncertainty from the covariance matrices in addition to the similarity between the mean vectors This metric measures the maximum similarity of mean vectors among all pairs of mixture components between distributions f and g That is displaystyle dfg max ij 1 hdots K frac langle mathbf mu fi mathbf mu gj rangle mathbf mu fi cdot mathbf mu gj which corresponds to matching the meanings of f and g that are the most similar For a Gaussian embedding maximum similarity reduces to the usual cosine similarity Cosine similarity is popular for evaluating embeddings However our training objective directly involves the Euclidean distance in Eq 10 as opposed to dot product of vectors such as in word2vec Therefore we also consider the Euclidean metric displaystyle dfg min ij 1 hdots K mathbf mu fi mathbf mu gj In Table 1 we show examples of polysemous words and their nearest neighbors in the embedding space to demonstrate that our trained embeddings capture multiple word senses For instance a word such as rock that could mean either stone or rock music should have each of its meanings represented by a distinct Gaussian component Our results for a mixture of two Gaussians model confirm this hypothesis where we observe that the 0th component of rock being related to basalt boulders and the 1st component being related to indie funk hiphop Similarly the word bank has its 0th component representing the river bank and the 1st component representing the financial bank By contrast in Table 1 bottom see that for Gaussian embeddings with one mixture component nearest neighbors of polysemous words are predominantly related to a single meaning For instance rock mostly has neighbors related to rock music and bank mostly related to the financial bank The alternative meanings of these polysemous words are not well represented in the embeddings As a numerical example the cosine similarity between rock and stone for the Gaussian representation of BIBREF1 is only 0029 much lower than the cosine similarity 0586 between the 0th component of rock and stone in our multimodal representation In cases 
where a word only has a single popular meaning the mixture components can be fairly close for instance one component of stone is close to stones stonework slab and the other to carving relic excavated which reflects subtle variations in meanings In general the mixture can give properties such as heavy tails and more interesting unimodal characterizations of uncertainty than could be described by a single Gaussian We provide an interactive visualization as part of our code repository httpsgithubcombenathiword2gmvisualization that allows realtime queries of words nearest neighbors in the embeddings tab for K1 2 3 components We use a notation similar to that of Table 1 where a token wi represents the component i of a word w For instance if in the K2 link we search for bank0 we obtain the nearest neighbors such as river1 confluence0 waterway1 which indicates that the 0th component of bank has the meaning river bank On the other hand searching for bank1 yields nearby words such as banking1 banker0 ATM0 indicating that this component is close to the financial bank We also have a visualization of a unimodal w2g for comparison in the K1 link In addition the embedding link for our Gaussian mixture model with K3 mixture components can learn three distinct meanings For instance each of the three components of cell is close to keypad digits incarcerated inmate or tissue antibody indicating that the distribution captures the concept of cellphone jail cell or biological cell respectively Due to the limited number of words with more than 2 meanings our model with K3 does not generally offer substantial performance differences to our model with K2 hence we do not further display K3 results for compactness We evaluate our embeddings on several standard word similarity datasets namely SimLex BIBREF22 WS or WordSim353 WSS similarity WSR relatedness BIBREF23 MEN BIBREF24 MC BIBREF25 RG BIBREF26 YP BIBREF27 MTurk287771 BIBREF28 BIBREF29 and RW BIBREF30 Each dataset contains a list of word pairs with a human score of how related or similar the two words are We calculate the Spearman correlation BIBREF31 between the labels and our scores generated by the embeddings The Spearman correlation is a rankbased correlation measure that assesses how well the scores describe the true labels The correlation results are shown in Table 2 using the scores generated from the expected likelihood kernel maximum cosine similarity and maximum Euclidean distance We show the results of our Gaussian mixture model and compare the performance with that of word2vec and the original Gaussian embedding by BIBREF1 We note that our model of a unimodal Gaussian embedding w2g also outperforms the original model which differs in model hyperparameters and initialization for most datasets Our multiprototype model w2gm also performs better than skipgram or Gaussian embedding methods on many datasets namely WS WSR MEN MC RG YP MT287 RW The maximum cosine similarity yields the best performance on most datasets however the minimum Euclidean distance is a better metric for the datasets MC and RW These results are consistent for both the singleprototype and the multiprototype models We also compare out results on WordSim353 with the multiprototype embedding method by BIBREF16 and BIBREF18 shown in Table 3 We observe that our singleprototype model w2g is competitive compared to models by BIBREF16 even without using a corpus with stop words removed This could be due to the autocalibration of importance via the covariance learning which decrease the 
importance of very frequent words such as the to a etc Moreover our multiprototype model substantially outperforms the model of BIBREF16 and the MSSG model of BIBREF18 on the WordSim353 dataset We use the dataset SCWS introduced by BIBREF16 where word pairs are chosen to have variations in meanings of polysemous and homonymous words We compare our method with multiprototype models by Huang BIBREF16 Tian BIBREF17 Chen BIBREF32 and MSSG model by BIBREF18 We note that Chen model uses an external lexical source WordNet that gives it an extra advantage We use many metrics to calculate the scores for the Spearman correlation MaxSim refers to the maximum cosine similarity AveSim is the average of cosine similarities with respect to the component probabilities In Table 4 the model w2g performs the best among all singleprototype models for either 50 or 200 vector dimensions Our model w2gm performs competitively compared to other multiprototype models In SCWS the gain in flexibility in moving to a probability density approach appears to dominate over the effects of using a multiprototype In most other examples we see w2gm surpass w2g where the multiprototype structure is just as important for good performance as the probabilistic representation Note that other models also use AvgSimC metric which uses context information which can yield better correlation BIBREF16 BIBREF32 We report the numbers using AvgSim or MaxSim from the existing models which are more comparable to our performance with MaxSim One motivation for our Gaussian mixture embedding is to model word uncertainty more accurately than Gaussian embeddings which can have overly large variances for polysemous words in order to assign some mass to all of the distinct meanings We see that our Gaussian mixture model does indeed reduce the variances of each component for such words For instance we observe that the word rock in w2g has much higher variance per dimension e18 approx 165 compared to that of Gaussian components of rock in w2gm which has variance of roughly e25 approx 082 We also see in the next section that w2gm has desirable quantitative behavior for word entailment We evaluate our embeddings on the word entailment dataset from BIBREF33 The lexical entailment between words is denoted by w1 models w2 which means that all instances of w1 are w2 The entailment dataset contains positive pairs such as aircraft models vehicle and negative pairs such as aircraft lnot models insect We generate entailment scores of word pairs and find the best threshold measured by Average Precision AP or F1 score which identifies negative versus positive entailment We use the maximum cosine similarity and the minimum KL divergence displaystyle dfg min ij 1 hdots K KLf g for entailment scores The minimum KL divergence is similar to the maximum cosine similarity but also incorporates the embedding uncertainty In addition KL divergence is an asymmetric measure which is more suitable for certain tasks such as word entailment where a relationship is unidirectional For instance w1 models w2 does not imply w2 models w1 Indeed aircraft models vehicle does not imply vehicle models aircraft since all aircraft are vehicles but not all vehicles are aircraft The difference between KLw1 w2 versus KLw2 w1 distinguishes which word distribution encompasses another distribution as demonstrated in Figure 1 Table 5 shows the results of our w2gm model versus the Gaussian embedding model w2g We observe a trend for both models with window size 5 and 10 that the KL metric yields 
improvement both AP and F1 over cosine similarity In addition w2gm generally outperforms w2g The multiprototype model estimates the meaning uncertainty better since it is no longer constrained to be unimodal leading to better characterizations of entailment On the other hand the Gaussian embedding model suffers from overestimatating variances of polysemous words which results in less informative word distributions and reduced entailment scores We introduced a model that represents words with expressive multimodal distributions formed from Gaussian mixtures To learn the properties of each mixture we proposed an analytic energy function for combination with a maximum margin objective The resulting embeddings capture different semantics of polysemous words uncertainty and entailment and also perform favorably on word similarity benchmarks Elsewhere latent probabilistic representations are proving to be exceptionally valuable able to capture nuances such as face angles with variational autoencoders BIBREF34 or subtleties in painting strokes with the InfoGAN BIBREF35 Moreover classically deterministic deep learning architectures are actively being generalized to probabilistic deep models for full predictive distributions instead of point estimates and significantly more expressive representations BIBREF36 BIBREF37 BIBREF38 BIBREF39 BIBREF40 Similarly probabilistic word embeddings can capture a range of subtle meanings and advance the state of the art Multimodal word distributions naturally represent our belief that words do not have single precise meanings indeed the shape of a word distribution can express much more semantic information than any point representation In the future multimodal word distributions could open the doors to a new suite of applications in language modelling where whole word distributions are used as inputs to new probabilistic LSTMs or in decision functions where uncertainty matters As part of this effort we can explore different metrics between distributions such as KL divergences which would be a natural choice for order embeddings that model entailment properties It would also be informative to explore inference over the number of components in mixture models for word distributions Such an approach could potentially discover an unbounded number of distinct meanings for words but also distribute the support of each word distribution to express highly nuanced meanings Alternatively we could imagine a dependent mixture model where the distributions over words are evolving with time and other covariates One could also build new types of supervised language models constructed to more fully leverage the rich information provided by word distributions We thank NSF IIS1563887 for support We derive the form of expected likelihood kernel for Gaussian mixtures Let fg be Gaussian mixture distributions representing the words wf wg That is fx sum i1K pi mathcal Nx mu fi Sigma fi and gx sum i1K qi mathcal Nx mu gi Sigma gi sum i 1K pi 1 and sum i 1K qi 1 The expected likelihood kernel is given by
E_\theta(f, g) = \int \Big( \sum_{i=1}^{K} p_i \, \mathcal{N}(x;\, \mu_{f,i}, \Sigma_{f,i}) \Big) \cdot \Big( \sum_{j=1}^{K} q_j \, \mathcal{N}(x;\, \mu_{g,j}, \Sigma_{g,j}) \Big) \, dx
= \sum_{i=1}^{K} \sum_{j=1}^{K} p_i q_j \int \mathcal{N}(x;\, \mu_{f,i}, \Sigma_{f,i}) \, \mathcal{N}(x;\, \mu_{g,j}, \Sigma_{g,j}) \, dx
= \sum_{i=1}^{K} \sum_{j=1}^{K} p_i q_j \, \mathcal{N}(0;\, \mu_{f,i} - \mu_{g,j}, \, \Sigma_{f,i} + \Sigma_{g,j})
= \sum_{i=1}^{K} \sum_{j=1}^{K} p_i q_j \, e^{\xi_{i,j}},
where we note that $\int \mathcal{N}(x;\, \mu_i, \Sigma_i) \, \mathcal{N}(x;\, \mu_j, \Sigma_j) \, dx = \mathcal{N}(0;\, \mu_i - \mu_j, \Sigma_i + \Sigma_j)$ BIBREF1 and $\xi_{i,j}$ is the log partial energy given by equation 10.
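To make the closed form above concrete, here is a small NumPy/SciPy sketch (our own illustration, not the paper's released code) that evaluates the expected likelihood kernel between two $K$-component Gaussian mixtures by summing the pairwise Gaussian convolutions evaluated at zero; all mixture parameters below are toy placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_likelihood_kernel(p, mu_f, cov_f, q, mu_g, cov_g):
    """E(f,g) = sum_{i,j} p_i q_j N(0; mu_f_i - mu_g_j, cov_f_i + cov_g_j)."""
    K = len(p)
    energy = 0.0
    for i in range(K):
        for j in range(K):
            # Gaussian convolution evaluated at zero: N(0; mu_i - mu_j, Sigma_i + Sigma_j)
            diff = mu_f[i] - mu_g[j]
            cov = cov_f[i] + cov_g[j]
            energy += p[i] * q[j] * multivariate_normal.pdf(np.zeros_like(diff), mean=diff, cov=cov)
    return energy

# Toy example: two 2-component mixtures in D = 3 dimensions.
rng = np.random.default_rng(0)
D, K = 3, 2
p = q = np.full(K, 1.0 / K)
mu_f, mu_g = rng.normal(size=(K, D)), rng.normal(size=(K, D))
cov_f = np.stack([np.eye(D) * 0.5 for _ in range(K)])
cov_g = np.stack([np.eye(D) * 0.5 for _ in range(K)])
print(expected_likelihood_kernel(p, mu_f, cov_f, q, mu_g, cov_g))
```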
In this section we discuss practical details for training the proposed model. We use a diagonal $\Sigma$, in which case inverting the covariance matrix is trivial and computations are particularly efficient. Let $\mathbf{d}_f, \mathbf{d}_g$ denote the diagonal vectors of $\Sigma_f, \Sigma_g$. The expression for $\xi_{i,j}$ reduces to
\xi_{i,j} = -\frac{1}{2} \sum_{r=1}^{D} \log \left( d_{f,r} + d_{g,r} \right) - \frac{1}{2} \sum \left[ \left( \mu_{f,i} - \mu_{g,j} \right) \circ \frac{1}{\mathbf{d}_f + \mathbf{d}_g} \circ \left( \mu_{f,i} - \mu_{g,j} \right) \right],
where $\circ$ denotes element-wise multiplication.
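For the diagonal case, the partial log energies for all component pairs can be computed with simple broadcasting and no matrix inversion. The NumPy sketch below is our own illustration (variable names and the small stabilizing constant are assumptions, anticipating the stabilization discussed next); it drops constants that do not depend on the parameters, matching the expression above.

```python
import numpy as np

def log_partial_energies(mu_f, d_f, mu_g, d_g, eps=1e-4):
    """xi[i, j] for all component pairs of two diagonal-covariance mixtures.

    mu_f, mu_g: (K, D) component means; d_f, d_g: (K, D) diagonal variances.
    """
    # Broadcast to (K, K, D): pair (i, j) uses component i of f and component j of g.
    diff = mu_f[:, None, :] - mu_g[None, :, :]
    d_sum = d_f[:, None, :] + d_g[None, :, :] + eps  # small epsilon for numerical stability
    xi = -0.5 * np.sum(np.log(d_sum), axis=-1) \
         - 0.5 * np.sum(diff * (1.0 / d_sum) * diff, axis=-1)
    return xi  # shape (K, K)

# Toy usage with K = 2 components in D = 5 dimensions.
rng = np.random.default_rng(0)
xi = log_partial_energies(rng.normal(size=(2, 5)), rng.uniform(0.1, 1.0, size=(2, 5)),
                          rng.normal(size=(2, 5)), rng.uniform(0.1, 1.0, size=(2, 5)))
print(xi.shape)  # (2, 2)
```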
The spherical case, which we use in all our experiments, is similar, since we simply replace a vector $\mathbf{d}$ with a single value. We optimize $\log \mathbf{d}$ since each component of the diagonal vector $\mathbf{d}$ is constrained to be positive. Similarly, we constrain the probability $p_i$ to be in $(0,1)$ and sum to 1 by optimizing over unconstrained scores $s_i \in (-\infty, \infty)$ and using a softmax function to convert the scores to probabilities, $p_i = \frac{e^{s_i}}{\sum_{j=1}^{K} e^{s_j}}$. The loss computation can be numerically unstable if elements of the diagonal covariances are very small, due to the terms $\log(d_{f,r} + d_{g,r})$ and $\frac{1}{\mathbf{d}_f + \mathbf{d}_g}$. Therefore, we add a small constant $\epsilon = 10^{-4}$ so that $d_{f,r} + d_{g,r}$ becomes $d_{f,r} + d_{g,r} + \epsilon$ and $\mathbf{d}_f + \mathbf{d}_g$ becomes $\mathbf{d}_f + \mathbf{d}_g + \epsilon$. In addition, we observe that $\xi_{i,j}$ can be very small, which would result in $e^{\xi_{i,j}} \approx 0$ up to machine precision. In order to stabilize the computation in eq. 9, we compute its equivalent form
\log E(f, g) = \xi_{i^{\prime},j^{\prime}} + \log \sum_{j=1}^{K} \sum_{i=1}^{K} p_i q_j \, e^{\xi_{i,j} - \xi_{i^{\prime},j^{\prime}}}
where $\xi_{i^{\prime},j^{\prime}} = \max_{i,j} \xi_{i,j}$. In the loss function $L_\theta$ we use a margin $m = 1$ and a batch size of 128. We initialize the word embeddings with a uniform distribution over $[-\sqrt{3/D}, \sqrt{3/D}]$ so that the expectation of the variance is 1 and the mean is zero BIBREF44. We initialize each dimension of the diagonal matrix (or a single value for the spherical case) with a constant value $v = 0.05$. We also initialize the mixture scores $s_i$ to be 0 so that the initial probabilities are equal among all $K$ components. We use the threshold $t = 10^{-5}$ for negative sampling, which is the recommended value for word2vec skip-gram on large datasets. We also use separate output embeddings in addition to input embeddings, similar to the word2vec implementation BIBREF0, BIBREF4. That is, each word has two sets of distributions, $q_I$ and $q_O$, each of which is a Gaussian mixture. For a given pair of word and context $(w, c)$, we use the input distribution $q_I$ for $w$ (input word) and the output distribution $q_O$ for the context $c$ (output word). We optimize the parameters of both $q_I$ and $q_O$ and use the trained input distributions $q_I$ as our final word representations. We use mini-batch asynchronous gradient descent with Adagrad BIBREF41, which performs an adaptive learning rate for each parameter. We also experimented with Adam BIBREF43, which corrects the bias in the adaptive gradient update of Adagrad and has proven very popular for most recent neural network models. However, we found that it is much slower than Adagrad (approximately 10 times). This is because the gradient computation of the model is relatively fast, so a complex gradient update algorithm such as Adam becomes the bottleneck in the optimization. Therefore, we choose to use Adagrad, which allows us to better scale to large datasets. We use a linearly decreasing learning rate from 0.05 to 0.00001.
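As a concrete companion to the stabilized expression above, the following NumPy sketch (ours, not the released implementation) aggregates the pairwise log partial energies in log space with a standard log-sum-exp shift; here the shift is taken over the full log terms rather than over $\xi$ alone, which serves the same stabilizing purpose.

```python
import numpy as np

def log_expected_likelihood(log_p, log_q, xi):
    """Stable log E(f,g): shift by the maximum log term, then log-sum-exp.

    log_p, log_q: (K,) log mixture weights; xi: (K, K) pairwise log partial energies.
    """
    # Work entirely in log space: log(p_i q_j e^{xi_ij}) = log p_i + log q_j + xi_ij.
    log_terms = log_p[:, None] + log_q[None, :] + xi
    shift = np.max(log_terms)  # plays the role of xi_{i',j'} in the equation above
    return shift + np.log(np.sum(np.exp(log_terms - shift)))

# Toy usage: two components with equal weights and very negative energies.
xi = np.array([[-40.0, -40.5], [-41.0, -39.8]])
log_p = log_q = np.log(np.full(2, 0.5))
print(log_expected_likelihood(log_p, log_q, xi))
```
</s>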
<s>[INST] Task-driven Visual Saliency and Attention-based Visual Question Answering [/INST] Visual question answering VQA comes as a classic task which combines visual and textual modal data into a unified system Taking an image and a natural language question about it as input a VQA system is supposed to output the corresponding natural language answer VQA problem requires image and text understanding common sense and knowledge inference The solution of VQA problem will be a great progress in approaching the goal of Visual Turing Test and is also conducive to tasks such as multimodal retrieval image captioning and accessibility facilities After the first attempt and introduction of VQA BIBREF0 more than thirty works on VQA have sprung up over the past one year from May 2015 Over ten VQA datasets and a big VQA challenge BIBREF1 have been proposed so far Four commonly used datasets ie DAQUAR BIBREF0 COCOQA BIBREF2 COCOVQA BIBREF1 and Visual7W BIBREF3 feature different aspects The common practice to tackle VQA problem is to translate the words as word embeddings and encode the questions using bagofword BoW or Long Short Term Memory LSTM network and encode the images using deep convolutional neural networks CNN The following important step is to combine the image and question representations through some kind of fusing methods for answer generation such as concatenation BIBREF4 BIBREF5 BIBREF6 elementwise multiplication BIBREF1 parameter prediction layer BIBREF7 episode memory BIBREF8 attention mechanism BIBREF9 BIBREF10 BIBREF11 etc Current VQA works focus on the fusion of these two features yet no one cares about where we focus to ask questions on the image It is a common practice to treat the VQA problem as either a generation classification or a scoring task and classification gains more popularity due to its simplicity and easiness for comparison These works treat VQA as a discriminative model learning the conditional probability of answer given the image and question From the generative view we emulate the behavior that before people ask questions about the given image they first glance at it and find some interesting regions In terms of a single person he has unique taste for choosing image regions that interest him For a large amount of people there are statistical regionofinterest RoI distributions These region patterns are taskdriven eg the picture in Figure 1 for VQA task people may focus mostly on the beds the chairs the laptop and the notebook regions namely the RoI patterns as captured in the weighted image but for image captioning task they pay attention to more areas including the striped floor It is very valuable to intensify the interesting region features and suppress others and this image preprocessing step provides more accurate visual features to the followup steps and is missing in current VQA works By analogy with visual saliency which captures the standing out regions or objects of an image we propose a region preselection mechanism named taskdriven visual saliency which attaches interesting regions more possibly questioned on with higher weights Taking advantage of the bidirectional LSTM BiLSTM BIBREF12 that the output at an arbitrary time step has complete and sequential information about all time steps before and after it we compute the weight of interest for each region feature which is relative to all of them To the best of our knowledge this is the first work that employs and analyzes BiLSTM in VQA models for taskdriven saliency detection and this is the first 
contribution of our work As a simple and effective VQA baseline method BIBREF4 shows that question feature always contributes more to predict the answer than image feature But image is as equally important as question for answer generation It is necessary to further explore finergrained image features to achieve better VQA performance eg attention mechanism BIBREF13 Current attention based models generally use the correlation scores between question and image representations as weights to perform weighted sum of region features the resulting visual vector is concatenated to the question vector for final answer generation The recent multistep attention models ie containing multiple attention layers BIBREF14 BIBREF11 dig deeper into the image understanding and help achieve better VQA performance than the regular attention models However the correlation score obtained by inner product between visual and textual features is essentially the sum of the correlation vector obtained by elementwise multiplication of the two features Besides BIBREF1 shows that elementwise multiplication of these features achieves more accurate results than concatenation of them in the baseline model Hence we propose to employ elementwise multiplication way in the attention mechanism the fused features are directly feed forward to a max pooling layer to get the final fused feature Together with the saliencylike region preselection operation this novel attention method effectively improves VQA performance and is the second contribution of this work The remainder of the paper is organized as follows We first briefly review saliency and the attention mechanism Then we elaborate our proposed method We present experiments of some baseline models and compare with stateoftheart models and visualize the preselection saliency and attention maps Finally we summarize our work Saliency generally comes from contrasts between a pixel or an object and its surroundings describing how outstanding it is It could facilitate learning by focusing the most pertinent regions Saliency detection methods mimic the human attention in psychology including both bottomup and topdown manners BIBREF15 Typical saliency methods BIBREF16 BIBREF17 are pixel or objectoriented which are not appropriate for VQA due to center bias and are difficulty in collecting large scale eye tracking data We think taskdriven saliency on image features could be conductive to solving VQA problem What inspires us is that BiLSTM used in saliency detection has achieved good results on text and video tasks In sentiment classification tasks BIBREF18 assigns saliency scores to words related to sentiment for visualizing and understanding the effects of BiLSTM in textual sentence While in video highlight detection BIBREF19 uses a recurrent autoencoder configured with BiLSTM cells and extracts video highlight segments effectively BiLSTM has demonstrated its effectiveness in saliency detection but to the best of our knowledge it has not been used in visual saliency for VQA task Visual attention mechanism has drawn great interest in VQA BIBREF14 BIBREF3 BIBREF9 and gained performance improvement from traditional methods using holistic image features Attention mechanism is typically the weighted sum of the image region features at each spatial location where the weights describe the correlation and are implemented as the inner products of the question and image features It explores finergrained visual features and mimics the behavior that people attend to different areas according to 
the questions Focusing on knowing where to look for multiplechoice VQA tasks BIBREF9 uses 99 detected object regions plus a holistic image feature to make correlation with the question encoding and uses the correlation scores as weights to fuse the features BIBREF14 uses the last pooling layer features 512times 14times 14 of VGG19 BIBREF20 as image region partitions and adopts twolayer attention to obtain more effective fused features for complex questions BIBREF21 proposes an ingenious idea to use assembled network modules according to the parsed questions and achieves multistep transforming attention by specific rules However these attention methods use correlation score ie inner product between visual and textual feature for each location which is the sum of the correlation vector representation ie elementwise multiplication between them Besides the concatenation of image and question features is less accurate than the elementwise multiplication vector of them shown in the baseline model BIBREF1 Moreover there are many answers derived from nonobject and background regions eg questions about scenes hence it is not fit for the object detection based attention methods Compared with image captioning which generates general descriptions about an image VQA focuses on specific image regions depending on the question On the one hand these regions include nonobject and background contents which are hard for object detection based VQA methods On the other hand although people may ask questions at any areas of a given image there are always some region patterns that attract more questions On the whole there are statistical regionofinterest RoI patterns which represent humaninterested areas that are important for later VQA task We propose a saliencylike region preselection and attentionbased VQA framework illustrated in Figure 2 The VQA is regarded as a classification task which is simple and easy to transform to a generating or scoring model In this section we elaborate our model consisting of four parts a image feature preselection part which models the tendency where people focus to ask questions b question encoding part which encodes the question words as a condensed semantic embedding c attentionbased feature fusion part performs second selection on image features and d answer generation part which gives the answer output As described above current object detection based VQA methods may not be qualified and the answers may not be derived from these specific object regions in images for example when asked Where is the birdcat the answers fencesink are not contained in ILSVRC BIBREF22 200 categories and Pascal VOC BIBREF23 20 categories detection classes Thus we use a more general pattern detector In addition from the generative perspective we pay attention to where people focus to ask questions General visual saliency provides analogous useful information of noticeable objects or areas which outstand the surroundings but it is not the only case for VQA task Current attention mechanism relates the question to the focusing location As more samples are available we could yield the region patterns that attract more questions by statistics From the statistical behavior of large amounts of workers on Amazon Mechanical Turk AMT who have labeled the questions we model the regionofinterest patterns that could attract more questions We propose to perform saliencylike preselection operation to alleviate the problems and model the RoI patterns The image is first divided into gtimes g grids as illustrated 
in Figure 2. Taking $m \times m$ grids as a region with $s$ grids as the stride, we obtain $n \times n$ regions, where $n = \left\lfloor \frac{g-m}{s} \right\rfloor + 1$. We then feed the regions to a pretrained ResNet BIBREF24 deep convolutional neural network to produce $n \times n \times d_I$-dimensional region features, where $d_I$ is the dimension of the feature from the layer before the last fully-connected layer. Since the neighboring overlapped regions share some visual contents, the corresponding features are related but focus on different semantic information. We regard the sequence of regions as the result of eye movement when glancing at the image, and these regions are selectively allocated different degrees of interest. Specifically, the LSTM is a special kind of recurrent neural network (RNN) capable of learning long-term dependencies via the memory cell and the update gates, which endows it with the ability to retain information of previous time steps, i.e., the previous region sequence in this case. The update rules of the LSTM at time step $t$ are as follows:
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
u_t = \tanh(W_u x_t + U_u h_{t-1} + b_u)
c_t = u_t \odot i_t + c_{t-1} \odot f_t
h_t = o_t \odot \tanh(c_t) \quad \text{(Eq. 7)}
where $i, f, o$ denote the input, forget and output gates, $x, c, h$ are the input region feature, memory cell and hidden unit output, and $W, U, b$ are the parameters to be trained. We activate the gates by the sigmoid nonlinearity $\sigma(x) = 1/(1+e^{-x})$ and the cell contents by the hyperbolic tangent $\tanh(x) = (e^{x}-e^{-x})/(e^{x}+e^{-x})$. The gates control the information in the memory cell to be retained or forgotten through element-wise multiplication $\odot$. Inspired by the information completeness and high performance of the BiLSTM, we encode the region features in two directions using a BiLSTM and obtain a scalar output per region. The output of the BiLSTM is the summation of the forward and backward LSTM outputs at this region location, $h_t = h^{f}_{t} + h^{b}_{n-t+1}$, where $n$ is the number of regions and $h^{f}_{t}, h^{b}_{n-t+1}$ are computed using Eq. 7. Hence the output at each location is influenced by the region features before and after it, which embodies the correlation among these regions. Note that although the DMN work BIBREF8 uses similar bidirectional gated recurrent units (BiGRU) in the visual input module, their purpose is to produce input facts which contain global information. Besides, their BiGRU takes the features embedded into the textual space as inputs. In contrast, the BiLSTM used in our model takes visual CNN features directly as input, and its main purpose is to output weights for region feature selection. The output values of the BiLSTM are normalized through a softmax layer, and the resulting weights are multiplied by the region features. We treat the weights as degrees of interest, which are trained by error backpropagation of the final class cross-entropy losses; higher weights indicate that the corresponding region patterns attract more questions, in other words, these region patterns may get higher attention values in the later interaction with question embeddings in a statistical way.
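As a concrete illustration of this preselection step, the following PyTorch sketch (our own simplification, not the authors' released code; the feature dimension and module layout are assumptions) scores a sequence of region features with a bidirectional LSTM, sums the forward and backward scalar outputs, normalizes them with a softmax, and reweights the region features accordingly.

```python
import torch
import torch.nn as nn

class RegionPreselection(nn.Module):
    """Saliency-like weighting of region features via a BiLSTM (hedged sketch)."""

    def __init__(self, feat_dim=2048, hidden_size=1):
        super().__init__()
        # One scalar output per direction; forward and backward outputs are summed.
        self.bilstm = nn.LSTM(feat_dim, hidden_size, bidirectional=True, batch_first=True)

    def forward(self, regions):              # regions: (batch, n*n, feat_dim)
        outputs, _ = self.bilstm(regions)    # (batch, n*n, 2*hidden_size)
        fwd, bwd = outputs.chunk(2, dim=-1)  # split forward / backward directions
        scores = (fwd + bwd).squeeze(-1)     # (batch, n*n) scalar score per region
        weights = torch.softmax(scores, dim=-1)
        return regions * weights.unsqueeze(-1), weights

# Toy usage: 3x3 = 9 regions with 2048-D ResNet features.
regions = torch.randn(2, 9, 2048)
weighted, w = RegionPreselection()(regions)
print(weighted.shape, w.shape)  # torch.Size([2, 9, 2048]) torch.Size([2, 9])
```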
Question can be encoded using various kinds of natural language processing (NLP) methods such as BoW, LSTM, CNN BIBREF25, BIBREF14, gated recurrent units (GRU) BIBREF26, and skip-thought vectors BIBREF27, or it can be parsed by the Stanford Parser BIBREF28, etc. Since question BoW encodings already dominate the contribution to answer generation compared with the image features BIBREF4, we simply encode the question words as word2vec embeddings and use an LSTM to encode the questions to match the preselected region features. To encode more abstract and higher-level information and achieve better performance, a deeper LSTM BIBREF1, BIBREF29 for question encoding is adopted in our model. The question encoding LSTM in our model has $l$ hidden layers with $r$ hidden units per layer; the question representation is the last output and the cell units of the LSTM, and its dimension is $d_Q = 2 \times l \times r$. The resulting condensed feature vector encodes the semantic and syntactic information of the question. According to the statistics of image-question-answer (IQA) training triples, the image feature preselection has attached the regions with different prior weights, generating more meaningful region features. But different questions may focus on different aspects of the visual content. It is necessary to use an attention mechanism to second-select regions by the question for more effective features. We propose a novel attention method which takes the element-wise multiplication vector as the correlation between image and question features at each spatial location. Specifically, given the preselected region features and the question embedding, we map the visual and textual features into a common space of $d_C$ dimensions and perform element-wise multiplication between them. The $n \times n \times d_C$-dimensional fused features contain visual and textual information, and higher responses indicate more correlative features. In traditional attention models, the correlation score (a scalar) achieved by the inner product between the mapped visual and textual features per region is essentially the sum of the elements in our fused feature. This novel attention method has two noticeable advantages against traditional attention, i.e., an information-richer correlation vector versus a correlation scalar, and a more effective element-wise multiplication vector versus the concatenated vector of the visual and textual features. Since higher responses in the fused features indicate more correlative visual and textual features, and the question may only focus on one or two regions, we choose to apply a max pooling operation on the intermediate fused features to pick out the maximum responses. The produced $d_C$-dimensional fused feature is then fed to the final answer generation part. Compared to the sum/average operation in traditional attention models, the max operation highlights the responses of the final fused feature from every spatial location. Taking the VQA problem as a classification task is simple to implement and evaluate, and it is easy to extend to generation or multiple-choice tasks through a network surgery using the fused feature from the previous step. We use a linear layer and a softmax layer to map from the fused feature to the answer candidates, of which the entries are the top-1000 answers from the training data. Considering multiple-choice VQA problems, e.g., Visual7W BIBREF3 telling questions and COCO-VQA BIBREF1 multiple-choice tasks, our model can be extended by concatenating the question and answer vectors before fusion with the visual features, or by using a bilinear model between the final fused feature and the answer feature BIBREF9, BIBREF30, which is a possible future work. Meanwhile, in view of generation VQA problems, we can train an LSTM taking the fused feature as input to obtain answer word lists, phrases or sentences BIBREF5, BIBREF6. Our framework is trained end-to-end using backpropagation, while the feature extraction part using ResNet is kept fixed to speed up training and avoid the noisy gradients backpropagated from the LSTM, as elaborated in BIBREF6. The RMSprop algorithm is employed with a low initial learning rate of 3e-4, which proved important to prevent the softmax from spiking too early and to prevent the visual features from dominating too early BIBREF9. Due to simplicity and similar proven performance as pretrained word embedding parameters, we initialize the parameters of the network with random numbers. We randomly sample 500 IQA triples per iteration. In this section we describe the implementation details and evaluate our model SalAtt on the large-scale COCO-VQA dataset. Besides, we visualize and analyze the role of preselection and the novel attention method. In our experiment, the input images are first scaled to $448 \times 448 \times 3$ pixels before we apply $4 \times 4$ grids on them. We obtain $3 \times 3$ regions by employing $2 \times 2$ grids (i.e., $224 \times 224 \times 3$ pixels) as a region with a stride of 1 grid. Then we extract the 2048-D feature per region from the layer before the last fully-connected layer of ResNet. The dimension of the word embedding is 200, and the weights of the embedding are initialized randomly from a uniform distribution on $[-0.08, 0.08]$ due to similar performance as the pretrained one. The preselection BiLSTM for region features has 1 layer and the size is 1, and the LSTM for
question uses 2 layers and 512 hidden units per layer The common space of visual and textual features is 1024dimensional We use dropout BIBREF31 after all convolutional and linear layers The nonlinear function is hyperbolic tangent The training procedure is early stopped when there is no accuracy increase in validation set for 5000 iterations where we evaluate every 1000 iterations It takes around 18 hours to train our model on a single NVIDIA Tesla K40 GPU for about 91000 iterations And for evaluation each sample needs less than 05 millisecond The COCOVQA dataset BIBREF1 is the largest among the commonly used VQA datasets which contains two tasks ie multiplechoice task and openended task on two image datasets ie real image MSCOCO dataset BIBREF32 and abstract scene dataset We follow the common practice to evaluate models on two tasks on the real image dataset which includes 248349 training questions 121512 validation questions and 244302 testing questions There are many types of questions which require image and question understanding commonsense knowledge knowledge inference and even external knowledge The answers are roughly divided into 3 types ie yesno number and other To evaluate the results each answer is compared with 10 humanlabeled answers the accuracy is computed via this metric minfracconsistent humanlabeled answers31 ie the accuracy is 100 if the predicted answer is consistent with at least 3 humanlabeled answers The COCOVQA dataset provide humanlabeled answers for the training and validation sets and the results of testing set can only be tested on the evaluation server The whole testing set is named teststandard and can be evaluated once per day and 5 times in total and a smaller development set named testdev can be tested 10 times per day and 9999 times in total In short the COCOVQA dataset is large and hard enough for evaluating models and hence we choose to evaluate our model on it We compare our propose model SalAtt with some functiondisabled models listed below to prove the effectiveness of the region preselection via BiLSTM and the novel attention method holistic The baseline model which maps the holistic image feature and LSTMencoded question feature to a common space and perform elementwise multiplication between them TraAtt The traditional attention model implementation of WTL model BIBREF9 using the same 3times 3 regions in SalAtt model RegAtt The region attention model which employs our novel attention method same as the SalAtt model but without region preselection ConAtt The convolutional region preselection attention model which replaces the BiLSTM in SalAtt model with a weightsharing linear mapping implemented by a convolutional layer Besides we also compare our SalAtt model with the popular baseline models ie iBOWIMG BIBREF4 VQA BIBREF1 and the stateoftheart attentionbased models ie WTL BIBREF9 NMN BIBREF21 SAN BIBREF14 AMA BIBREF33 FDA BIBREF34 DNMN BIBREF35 DMN BIBREF8 on two tasks of COCOVQA We train the functiondisabled models on COCOVQA training set and show the accuracies on validation set in Table 1 From the columns we can see that 1 holistic is better than TraAtt proving the effectiveness of elementwise multiplication feature fusion compared with concatenation of features 2 RegAtt is better than holistic indicating our novel attention method indeed enriches the visual features and improves the performance 3 SalAtt is better than RegAtt demonstrating the strength of our region preselection mechanism 4 ConAtt is worse than SalAtt showing that BiLSTM is 
important for the region preselection part From each row we find the consistent improvement by the ResNet features showing the importance of good CNN features to VQA We summarize the accuracies on testdev in Table 2 and the teststandard results in Table 3 Our results are comparative or higher than the attention based methods especially on multiplechoice tasks The results on answer type other which includes object and scene type questions demonstrate the competence of our model in RoI detection Note that we only apply the proposed region preselection mechanism to the basic VQA model BIBREF1 it can be embedded into any other attentionbased models to improve their performance Due to computation and training time we use only 3times 3 regions compared with other attentionbased methods eg 100 or 14times 14 region features Through observation we find that many small objects could not be split by the 3times 3 regions which is adverse to the counting questions and could be further improved and is a possible future work We illustrate three groups of samples produced by our model in Figure 3 Each group contains four figures from left to right and from top to bottom they are respectively the original image preselection weights on the image and two attention maps for different questions with the corresponding questions Q ground truth answers A and the predicted answers P shown below them And the number in the parentheses means the amount for this humanlabeled answer entry The weights are normalized to have minimum 0 and maximum 1 for visualization enhancement ie the weight in the dark region may not necessarily be 0 Take the first sample for example the preselection operation gives high weight to the boys head region which may be interesting to people and attract more questions eg questions containing the word boy For the question Is the boy dressed for the weather the attention map focuses on the boy his clothes and the surrounding regions to get a positive answer While for question What is the boy doing it attends the boy and the snowboard thus giving answer snowboarding The third sample gives inaccurate but explainable answers ie the birds may live in the parkzoo and come for food provided by the tourist so it may not be classified into pets and the left hand of the woman holds indeed a phone while the humanlabeled answers focus on the right hand In this work we propose a general VQA solution which integrates region preselection and a novel attention method to capture generic class region and richer fused feature representation These two procedures are independent meanwhile they both contribute to better VQA performance Although the model is simple it achieves comparative or higher empirical results than stateoftheart models Possible future works include adopting finergrained grids which capture more precise regions employing stacked attention layers for multistep reasoning and more accurate answer location and applying the general preselection method to other attentionbased VQA models The preselection mechanism is valuable and applicable to similar task such as image captioning </s>
<s>[INST] KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning [/INST] Human beings are rational and a major component of rationality is the ability to reason Reasoning is the process of combining facts and beliefs to make new decisions BIBREF0 as well as the ability to manipulate knowledge to draw inferences BIBREF1 Commonsense reasoning utilizes the basic knowledge that reflects our natural understanding of the world and human behaviors which is common to all humans Empowering machines with the ability to perform commonsense reasoning has been seen as the bottleneck of artificial general intelligence BIBREF2 Recently there have been a few emerging largescale datasets for testing machine commonsense with various focuses BIBREF3 BIBREF4 BIBREF5 In a typical dataset CommonsenseQA BIBREF6 given a question like Where do adults use glue sticks with the answer choices being classroom office desk drawer a commonsense reasoner is expected to differentiate the correct choice from other distractive candidates False choices are usually highly related to the question context but just less possible in realworld scenarios making the task even more challenging This paper aims to tackle the research question of how we can teach machines to make such commonsense inferences particularly in the questionanswering setting It has been shown that simply finetuning large pretrained language models such as Gpt BIBREF7 and Bert BIBREF8 can be a very strong baseline method However there still exists a large gap between performance of said baselines and human performance Reasoning with neural models is also lacking in transparency and interpretability There is no clear way as to how they manage to answer commonsense questions thus making their inferences dubious Merely relying on pretraining large language models on corpora cannot provide welldefined or reusable structures for explainable commonsense reasoning We argue that it would be more beneficial to propose reasoners that can exploit commonsense knowledge bases BIBREF9 BIBREF10 BIBREF11 Knowledgeaware models can explicitly incorporate external knowledge as relational inductive biases BIBREF12 to enhance their reasoning capacity as well as to increase the transparency of model behaviors for more interpretable results Furthermore a knowledgecentric approach is extensible through commonsense knowledge acquisition techniques BIBREF13 BIBREF14 We propose a knowledgeaware reasoning framework for learning to answer commonsense questions which has two major steps schema graph grounding Schema Graph Grounding and graph modeling for inference KnowledgeAware Graph Network As shown in Fig 1 for each pair of question and answer candidate we retrieve a graph from external knowledge graphs eg ConceptNet in order to capture the relevant knowledge for determining the plausibility of a given answer choice The graphs are named schema graphs inspired by the schema theory proposed by Gestalt psychologists BIBREF15 The grounded schema graphs are usually much more complicated and noisier unlike the ideal case shown in the figure Therefore we propose a knowledgeaware graph network module to further effectively model schema graphs Our model is a combination of graph convolutional networks BIBREF16 and LSTMs with a hierarchical pathbased attention mechanism which forms a GCNLSTMHPA architecture for pathbased relational graph representation Experiments show that our framework achieved a new stateoftheart performance on the CommonsenseQA dataset Our model also works better 
then other methods with limited supervision and provides humanreadable results via intermediate attention scores In this section we first formalize the commonsense question answering problem in a knowledgeaware setting and then introduce the overall workflow of our framework Given a commonsenserequired natural language question q and a set of N candidate answers lbrace airbrace the task is to choose one answer from the set From a knowledgeaware perspective we additionally assume that the question q and choices lbrace airbrace can be grounded as a schema graph denoted as g extracted from a large external knowledge graph G which is helpful for measuring the plausibility of answer candidates The knowledge graph GVE can be defined as a fixed set of concepts V and typed edges E describing semantic relations between concepts Therefore our goal is to effectively ground and model schema graphs to improve the reasoning process As shown in Fig 2 our framework accepts a pair of question and answer QApair denoted as q and a It first recognizes the mentioned concepts within them respectively from the concept set V of the knowledge graph We then algorithmically construct the schema graph g by finding paths between pairs of mentioned concepts Schema Graph Grounding The grounded schema graph is further encoded with our proposed knowledgeaware graph network module KnowledgeAware Graph Network We first use a modelagnostic language encoder which can either be trainable or a fixed feature extractor to represent the QApair as a statement vector The statement vector serves as an additional input to a GCNLSTMHPA architecture for pathbased attentive graph modeling to obtain a graph vector The graph vector is finally fed into a simple multilayer perceptron to score this QApair into a scalar ranging from 0 to 1 representing the plausibility of the inference The answer candidate with the maximum plausibility score to the same question becomes the final choice of our framework The grounding stage is threefold recognizing concepts mentioned in text Conclusion constructing schema graphs by retrieving paths in the knowledge graph Schema Graph Construction and pruning noisy paths Path Pruning via KG Embedding We match tokens in questions and answers to sets of mentioned concepts mathcal Cq and mathcal Ca respectively from the knowledge graph G for this paper we chose to use ConceptNet due to its generality A naive approach to mentioned concept recognition is to exactly match ngrams in sentences with the surface tokens of concepts in V For example in the question Sitting too close to watch tv can cause what sort of pain the exact matching result mathcal Cq would be sitting close watchtv watch tv sort pain etc We are aware of the fact that such retrieved mentioned concepts are not always perfect eg sort is not a semantically related concept close is a polysemous concept How to efficiently retrieve contextuallyrelated knowledge from noisy knowledge resources is still an open research question by itself BIBREF17 BIBREF18 and thus most prior works choose to stop here BIBREF19 BIBREF20 We enhance this straightforward approach with some rules such as soft matching with lemmatization and filtering of stop words and further deal with noise by pruning paths Path Pruning via KG Embedding and reducing their importance with attention mechanisms Hierarchical Attention Mechanism ConceptNet Before diving into the construction of schema graphs we would like to briefly introduce our target knowledge graph ConceptNet ConceptNet can be seen 
as a large set of triples of the form h r t like ice HasProperty cold where h and t represent head and tail concepts in the concept set V and r is a certain relation type from the predefined set R We delete and merge the original 42 relation types into 17 types in order to increase the density of the knowledge graph for grounding and modeling Subgraph Matching via Path Finding We define a schema graph as a subgraph g of the whole knowledge graph G which represents the related knowledge for reasoning a given questionanswer pair with minimal additional concepts and edges One may want to find a minimal spanning subgraph covering all the question and answer concepts which is actually the NPcomplete Steiner tree problem in graphs BIBREF21 Due to the incompleteness and tremendous size of ConceptNet we find that it is impractical to retrieve a comprehensive but helpful set of knowledge facts this way Therefore we propose a straightforward yet effective graph construction algorithm via path finding among mentioned concepts mathcal Cq cup mathcal Ca Specifically for each question concept ci in mathcal Cq and answer concept cj in mathcal Ca we can efficiently find paths between them that are shorter than k concepts Then we add edges if any between the concept pairs within mathcal Cq or mathcal Ca To prune irrelevant paths from potentially noisy schema graphs we first utilize knowledge graph embedding KGE techniques like TransE BIBREF22 to pretrain concept embeddings mathbf V and relation type embeddings mathbf R which are also used as initialization for KnowledgeAware Graph Network In order to measure the quality of a path we decompose it into a set of triples the confidence of which can be directly measured by the scoring function of the KGE method ie the confidence of triple classification Thus we score a path with the multiplication product of the scores of each triple in the path and then we empirically set a threshold for pruning Implementation Details of KagNet The core component of our reasoning framework is the knowledgeaware graph network module The first encodes plain structures of schema graphs with graph convolutional networks Graph Convolutional Networks to accommodate pretrained concept embeddings in their particular context within schema graphs It then utilizes LSTMs to encode the paths between mathcal Cq and mathcal Ca capturing multihop relational information Relational Path Encoding Finally we apply a hierarchical pathbased attention mechanism Hierarchical Attention Mechanism to complete the GCNLSTMHPA architecture which models relational schema graphs with respect to the paths between question and answer concepts Graph convolutional networks GCNs encode graphstructured data by updating node vectors via pooling features of their adjacent nodes BIBREF16 Our intuition for applying GCNs to schema graphs is to 1 contextually refine the concept vectors and 2 capture structural patterns of schema graphs for generalization Although we have obtained concept vectors by pretraining Path Pruning via KG Embedding the representations of concepts still need to be further accommodated to their specific schema graphs context Think of polysemous concepts such as close Conclusion which can either be a verb concept like in close the door or an adjective concept meaning a short distance apart Using GCNs to update the concept vector with their neighbors is thus helpful for disambiguation and contextualized concept embedding Also the pattern of schema graph structures provides potentially valuable 
information for reasoning For instance shorter and denser connections between question and answer concepts could mean higher plausibility under specific contexts As many works show BIBREF23 BIBREF24 relational GCNs BIBREF25 usually overparameterize the model and cannot effectively utilize multihop relational information We thus apply GCNs on the plain version unlabeled nondirectional of schema graphs ignoring relation types on the edges Specifically the vector for concept ciin mathcal Vg in the schema graph g is initialized by their pretrained embeddings at first hi0 mathbf Vi Then we update them at the l1 th layer by pooling features of their neighboring nodes Ni and their own at the l th layer with an nonlinear activation function sigma
h_i^{(l+1)} = \sigma \left( W_{\text{self}}^{(l)} h_i^{(l)} + \sum_{j \in N_i} \frac{1}{|N_i|} W^{(l)} h_j^{(l)} \right)
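The update above amounts to a linear map of each node plus a degree-normalized sum over its neighbors. The PyTorch sketch below is our own illustration (not the released KagNet code; the choice of sigmoid for $\sigma$ and the toy dimensions are assumptions).

```python
import torch
import torch.nn as nn

class PlainGCNLayer(nn.Module):
    """One GCN update on an unlabeled, undirected schema graph (hedged sketch)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim, bias=False)
        self.w_neigh = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # h: (num_nodes, in_dim); adj: (num_nodes, num_nodes) 0/1 adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)    # |N_i|, avoiding divide-by-zero
        neigh_mean = (adj @ self.w_neigh(h)) / deg         # (1/|N_i|) * sum_j W h_j
        return torch.sigmoid(self.w_self(h) + neigh_mean)  # sigma(W_self h_i + ...)

# Toy usage: 5 concepts with 100-D pretrained vectors, two layers (100 -> 100 -> 50).
h = torch.randn(5, 100)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()   # symmetrize the random toy graph
adj.fill_diagonal_(0)
h1 = PlainGCNLayer(100, 100)(h, adj)
h2 = PlainGCNLayer(100, 50)(h1, adj)
print(h2.shape)  # torch.Size([5, 50])
```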
In order to capture the relational information in schema graphs, we propose an LSTM-based path encoder on top of the outputs of the GCNs. Recall that our graph representation has a special purpose: to measure the plausibility of a candidate answer to a given question. Thus we propose to represent graphs with respect to the paths between question concepts $\mathcal{C}_q$ and answer concepts $\mathcal{C}_a$. We denote the $k$-th path between the $i$-th question concept $c_i^q \in \mathcal{C}_q$ and the $j$-th answer concept $c_j^a \in \mathcal{C}_a$ as $P_{i,j}[k]$, which is a sequence of triples:
P_{i,j}[k] = [(c_i^q, r_0, t_0), \dots, (t_{n-1}, r_n, c_j^a)]
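The path-finding step behind these paths can be sketched with an off-the-shelf graph library. The snippet below is our own illustration (not the authors' code) and assumes the knowledge graph has already been loaded into a networkx graph whose nodes are concept strings; it enumerates simple paths of at most $k$ concepts between each question/answer concept pair.

```python
import networkx as nx

def find_schema_paths(kg, question_concepts, answer_concepts, k=4):
    """Collect paths with at most k concepts between question and answer concepts."""
    paths = {}
    for cq in question_concepts:
        for ca in answer_concepts:
            if cq in kg and ca in kg:
                # cutoff counts edges, so paths of at most k concepts have at most k-1 edges.
                found = nx.all_simple_paths(kg, source=cq, target=ca, cutoff=k - 1)
                paths[(cq, ca)] = list(found)
    return paths

# Toy knowledge graph fragment (edge attributes stand in for ConceptNet relation types).
kg = nx.Graph()
kg.add_edge("glue_stick", "glue", rel="RelatedTo")
kg.add_edge("glue", "office", rel="AtLocation")
kg.add_edge("glue_stick", "classroom", rel="AtLocation")
print(find_schema_paths(kg, ["glue_stick"], ["office", "classroom"]))
```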
Note that the relations are represented with trainable relation vectors (initialized with the pretrained relation embeddings), and the concept vectors are the GCNs' outputs $h^{(l)}$. Thus each triple can be represented by the concatenation of the three corresponding vectors. We employ LSTM networks to encode these paths as sequences of triple vectors, taking the concatenation of the first and the last hidden states:
\mathbf{R}_{i,j} = \frac{1}{|P_{i,j}|} \sum_{k} \texttt{LSTM}(P_{i,j}[k])
The above $\mathbf{R}_{i,j}$ can be viewed as the latent relation between the question concept $c_i^q$ and the answer concept $c_j^a$, for which we aggregate the representations of all the paths between them in the schema graph. Now we can finalize the vector representation of a schema graph $\mathbf{g}$ by aggregating all vectors in the matrix $\mathbf{R}$ using mean pooling:
\mathbf{T}_{i,j} = \texttt{MLP}([\mathbf{s}; \mathbf{c}_i^q; \mathbf{c}_j^a])
\mathbf{g} = \frac{\sum_{i,j} [\mathbf{R}_{i,j}; \mathbf{T}_{i,j}]}{|\mathcal{C}_q| \times |\mathcal{C}_a|},
where $[\cdot \, ; \cdot]$ denotes the concatenation of two vectors.
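Read together, the two equations amount to building a statement-conditioned pair vector and mean-pooling the concatenated pair representations. The PyTorch sketch below (ours; tensor shapes, MLP sizes, and the final sigmoid scoring are illustrative assumptions) shows this GCN-LSTM-mean style aggregation.

```python
import torch
import torch.nn as nn

def score_qa_pair(R, s, cq, ca, mlp_t, mlp_score):
    """R: (|Cq|, |Ca|, d_r) aggregated path encodings; s: (d_s,) statement vector;
    cq: (|Cq|, d_c), ca: (|Ca|, d_c) concept vectors from the GCN."""
    nq, na, _ = R.shape
    # T[i, j] = MLP([s; c_i^q; c_j^a])
    s_exp = s.expand(nq, na, -1)
    cq_exp = cq.unsqueeze(1).expand(nq, na, -1)
    ca_exp = ca.unsqueeze(0).expand(nq, na, -1)
    T = mlp_t(torch.cat([s_exp, cq_exp, ca_exp], dim=-1))
    # g = mean over all (i, j) pairs of [R_ij ; T_ij]
    g = torch.cat([R, T], dim=-1).mean(dim=(0, 1))
    return torch.sigmoid(mlp_score(g))  # plausibility score in (0, 1)

# Toy usage with made-up dimensions.
nq, na, d_r, d_s, d_c, d_t = 3, 2, 64, 128, 50, 32
mlp_t = nn.Sequential(nn.Linear(d_s + 2 * d_c, d_t), nn.ReLU())
mlp_score = nn.Linear(d_r + d_t, 1)
score = score_qa_pair(torch.randn(nq, na, d_r), torch.randn(d_s),
                      torch.randn(nq, d_c), torch.randn(na, d_c), mlp_t, mlp_score)
print(score.item())
```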
The statement vector $\mathbf{s}$ in the above equation is obtained from a certain language encoder, which can either be a trainable sequence encoder like an LSTM or features extracted from pretrained universal language encoders like Gpt/Bert. To encode a question-answer pair with universal language encoders, we simply create a sentence combining the question and the answer with a special token ("question [sep] answer") and then use the vector of [cls], as suggested by prior works BIBREF6. We concatenate $\mathbf{R}_{i,j}$ with an additional vector $\mathbf{T}_{i,j}$ before doing average pooling. The $\mathbf{T}_{i,j}$ is inspired by the Relation Network BIBREF26, which also encodes the latent relational information, yet from the context in the statement $s$ instead of the schema graph $g$. Simply put, we want to combine the relational representations of a pair of question-answer concepts from both the schema graph side (symbolic space) and the language side (semantic space). Finally, the plausibility score of the answer candidate $a$ to the question $q$ can be computed as $\texttt{score}(q,a) = \texttt{sigmoid}(\texttt{MLP}(\mathbf{g}))$. A natural argument against the above GCN-LSTM-mean architecture is that mean pooling over the path vectors does not always make sense, since some paths are more important than others for reasoning. Also, it is usually not the case that all pairs of question and answer concepts contribute equally to the reasoning. Therefore, we propose a hierarchical path-based attention mechanism to selectively aggregate important path vectors and then more important question-answer concept pairs. This core idea is similar to the work of BIBREF27 (2016), which proposes a document encoder with two levels of attention mechanisms applied at the word and sentence level. In our case, we have path-level and concept-pair-level attention for learning to contextually model graph representations. We learn a parameter matrix $\mathbf{W}_1$ for path-level attention scores, and the importance of the path $P_{i,j}[k]$ is denoted as $\hat{\alpha}_{(i,j,\cdot)}$:
\alpha_{(i,j,k)} = \mathbf{T}_{i,j} \, \mathbf{W}_1 \, \texttt{LSTM}(P_{i,j}[k])
\hat{\alpha}_{(i,j,\cdot)} = \texttt{SoftMax}(\alpha_{(i,j,\cdot)})
\hat{\mathbf{R}}_{i,j} = \sum_{k} \hat{\alpha}_{(i,j,k)} \cdot \texttt{LSTM}(P_{i,j}[k])
Afterwards, we similarly obtain the attention over concept pairs:
\beta_{(i,j)} = \mathbf{s} \, \mathbf{W}_2 \, \mathbf{T}_{i,j}
\hat{\beta}_{(\cdot,\cdot)} = \texttt{SoftMax}(\beta_{(\cdot,\cdot)})
\hat{\mathbf{g}} = \sum_{i,j} \hat{\beta}_{(i,j)} \, [\hat{\mathbf{R}}_{i,j}; \mathbf{T}_{i,j}]
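A compact sketch of the two attention levels (our own illustration, not the released implementation; shapes, parameter initialization, and the einsum-based layout are assumptions) replaces the mean pooling above as follows.

```python
import torch
import torch.nn as nn

class HierarchicalPathAttention(nn.Module):
    """Path-level then concept-pair-level attention over path encodings (hedged sketch)."""

    def __init__(self, d_t, d_path, d_s):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(d_t, d_path) * 0.01)  # path-level scores
        self.W2 = nn.Parameter(torch.randn(d_s, d_t) * 0.01)     # pair-level scores

    def forward(self, path_enc, T, s):
        # path_enc: (nq, na, n_paths, d_path) LSTM encodings of each path
        # T: (nq, na, d_t) statement-conditioned pair vectors, s: (d_s,) statement vector
        alpha = torch.einsum("ijt,tp,ijkp->ijk", T, self.W1, path_enc)
        alpha_hat = torch.softmax(alpha, dim=-1)                 # over the paths of each pair
        R_hat = torch.einsum("ijk,ijkp->ijp", alpha_hat, path_enc)
        beta = torch.einsum("s,st,ijt->ij", s, self.W2, T)
        beta_hat = torch.softmax(beta.flatten(), dim=0).view_as(beta)  # over all pairs
        g_hat = torch.einsum("ij,ijd->d", beta_hat, torch.cat([R_hat, T], dim=-1))
        return g_hat

# Toy usage with made-up dimensions.
nq, na, n_paths, d_path, d_t, d_s = 3, 2, 4, 64, 32, 128
hpa = HierarchicalPathAttention(d_t, d_path, d_s)
g_hat = hpa(torch.randn(nq, na, n_paths, d_path), torch.randn(nq, na, d_t), torch.randn(d_s))
print(g_hat.shape)  # torch.Size([96]), i.e. d_path + d_t
```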
The whole GCNLSTMHPA architecture is illustrated in Figure 3 To sum up we claim that the is a graph neural network module with the GCNLSTMHPA architecture that models relational graphs for relational reasoning under the context of both knowledge symbolic space knowledge symbolic space and language semantic space language semantic space We introduce our setups of the CommonsenseQA dataset BIBREF6 present the baseline methods and finally analyze experimental results The CommonsenseQA dataset consists of 12102 v111 natural language questions in total that require human commonsense reasoning ability to answer where each question has five candidate answers hard mode The authors also release an easy version of the dataset by picking two random termsphrases for sanity check CommonsenseQA is directly gathered from real human annotators and covers a broad range of types of commonsense including spatial social causal physical temporal etc To the best of our knowledge CommonsenseQA may be the most suitable choice for us to evaluate supervised learning models for question answering For the comparisons with the reported results in the CommonsenseQAs paper and leaderboard we use the official split 974112211140 named OFtrainOFdevOFtest Note that the performance on OFtest can only be tested by submitting predictions to the organizers To efficiently test other baseline methods and ablation studies we choose to use randomly selected 1241 examples from the training data as our inhouse data forming an 850012211241 split denoted as IHtrainIHdevIHtest All experiments are using the randomsplit setting as the authors suggested and three or more random states are tested on development sets to pick the bestperforming one We consider two different kinds of baseline methods as follows bullet Knowledgeagnostic Methods These methods either use no external resources or only use unstructured textual corpora as additional information including gathering textual snippets from search engine or large pretrained language models like BertLarge QABilinear QACompare ESIM are three supervised learning models for natural language inference that can be equipped with different word embeddings including GloVe and ELMo BIDAF utilizes Google web snippets as context and is further augmented with a selfattention layer while using ELMo as input features GptBertLarge are finetuning methods with an additional linear layer for classification as the authors suggested They both add a special token sep to the input and use the hidden state of the cls as the input to the linear layer More details about them can be found in the dataset paper BIBREF6 bullet Knowledgeaware Methods We also adopt some recently proposed methods of incorporating knowledge graphs for question answering KVMem BIBREF28 is a method that incorporates retrieved triples from ConceptNet at the wordlevel which uses a keyvalued memory module to improve the representation of each token individually by learning an attentive aggregation of related triple vectors CBPT BIBREF19 is a plugin method of assembling the predictions of any models with a straightforward method of utilizing pretrained concept embeddings from ConceptNet TextGraphCat BIBREF29 concatenates the graphbased and textbased representations of the statement and then feed it into a classifier We create sentence template for generating sentences and then feed retrieved triples as additional text inputs as a baseline method TripleString BIBREF30 2019 propose to collect human explanations for commonsense reasoning from 
annotators as additional knowledge CoSE and then train a language model based on such human annotations for improving the model performance Our best tested on OFdev settings of have two GCN layers 100 dim 50dim respectively and one bidirectional LSTMs 128dim We pretrain KGE using TransE 100 dimension initialized with GloVe embeddings The statement encoder in use is BertLarge which works as a pretrained sentence encoder to obtain fixed features for each pair of question and answer candidate The paths are pruned with pathscore threshold set to 015 keeping 6721 of the original paths We did not conduct pruning on concept pairs with less than three paths For very few pairs with none path hatmathbf Rij will be a randomly sampled vector We learn our models with Adam optimizers BIBREF31 In our experiments we found that the recall of ConceptNet on commonsense questions and answers is very high over 98 of QApairs have more than one grounded concepts Comparison with standard baselines As shown in Table 2 we first use the official split to compare our model with the baseline methods reported on the paper and leaderboard Bert and Gptbased pretraining methods are much higher than other baseline methods demonstrating the ability of language models to store commonsense knowledge in an implicit way This presumption is also investigated by BIBREF32 2019 and BIBREF33 2019 Our proposed framework achieves an absolute increment of 22 in accuracy on the test data a stateoftheart performance We conduct the experiments with our inhouse splits to investigate whether our can also work well on other universal language encoders GPT and BertBase particularly with different fractions of the dataset say 10 50 100 of the training data Table 1 shows that our based methods using fixed pretrained language encoders outperform finetuning themselves in all settings Furthermore we find that the improvements in a small data situation 10 is relatively limited and we believe an important future research direction is thus fewshot learning for commonsense reasoning Comparison with knowledgeaware baselines To compare our model with other adopted baseline methods that also incorporate ConceptNet we set up a bidirectional LSTM networksbased model for our inhouse dataset Then we add baseline methods and onto the BLSTMs to compare their abilities to utilize external knowledge Table 3 shows the comparisons under both easy mode and hard mode and our methods outperform all knowledgeaware baseline methods by a large margin in terms of accuracy Note that we compare our model and the CoSE in Table 2 Although CoSE also achieves better result than only finetuning BERT by training with humangenerated explanations we argue that our proposed KagNet does not utilize any additional human efforts to provide more supervision Ablation study on model components To better understand the effectiveness of each component of our method we have done ablation study as shown in Table 4 We find that replacing our GCNLSTMHPA architecture with traditional relational GCNs which uses separate weight matrices for different relation types results in worse performance due to its overparameterization The attention mechanisms matters almost equally in two levels and pruning also effectively filters noisy paths Error analysis In the failed cases there are three kinds of hard problems that is still not good at negative reasoning the grounding stage is not sensitive to the negation words and thus can choose exactly opposite answers comparative reasoning strategy For the 
questions with more than one highly plausible answers the commonsense reasoner should benefit from explicitly investigating the difference between different answer candidates while training method is not capable of doing so subjective reasoning Many answers actually depend on the personality of the reasoner For instance Traveling from new place to new place is likely to be what The dataset gives the answer as exhilarating instead of exhausting which we think is more like a personalized subjective inference instead of common sense Our framework enjoys the merit of being more transparent and thus provides more interpretable inference process We can understand our model behaviors by analyzing the hierarchical attention scores on the questionanswer concept pairs and path between them Figure 4 shows an example how we can analyze our framework through both pairlevel and pathlevel attention scores We first select the conceptpairs with highest attention scores and then look at the one or two topranked paths for each selected pair We find that paths located in this way are highly related to the inference process and also shows that noisy concepts like fountain will be diminished while modeling We study the transferability of a model that is trained on CommonsenseQA CSQA by directly testing it with another task while fixing its parameters Recall that we have obtained a BertLarge model and a model trained on CSQA Now we denoted them as CsqaBl and CsqaKn to suggest that they are not trainable anymore In order to investigate their transferability we separately test them on SWAG BIBREF3 and WSC BIBREF34 datasets We first test them the 20k validation examples in SWAG CsqaBl has an accuracy of 5653 while our fixed CsqaKn model achieves 5901 Similarly we also test both models on the WSCQA which is converted from the WSC pronoun resolution to a multichoice QA task The CsqaBL achieves an accuracy of 5123 while our model CsqaKN scores 5351 These two comparisons further support our assumption that as a knowledgecentric model is more extensible in commonsense reasoning As we expect for a good knowledgeaware frameworks to behave our indeed enjoys better transferablity than only finetuning large language encoders like Bert We argue that the utilizes the ConceptNet as the only external resource and other methods are improving their performance in orthogonal directions 1 we find that most of the other recent submissions as of Aug 2019 with public information on the leaderboard utilize larger additional textual corpora eg top 10 matched sentences in full Wikipedia via information retrieval tools and finetuning on larger pretrained encoders such as XLNet BIBREF35 RoBERTa BIBREF36 2 there are also models using multitask learning to transfer knowledge from other reading comprehension datasets such as RACE BIBREF37 and OpenBookQA BIBREF38 An interesting fact is that the best performance on the OFtest set is still achieved the original finetuned RoBERTa model which is pretrained with copora much larger than Bert All other RoBERTaextended methods have negative improvements We also use statement vectors from RoBERTa as the input vectors for and find that the performance on OFdev marginally improves from 7747 to 7756 Based on our abovementioned failed cases in error analysis we believe finetuning RoBERTa has achieved the limit due to the annotator biases of the dataset and the lack of comparative reasoning strategies Commonsense knowledge and reasoning There is a recent surge of novel largescale datasets for testing machine 
Commonsense knowledge and reasoning: There is a recent surge of novel large-scale datasets for testing machine commonsense with various focuses, such as situation prediction (SWAG) BIBREF3, social behavior understanding BIBREF11 BIBREF4, visual scene comprehension BIBREF5, and general commonsense reasoning BIBREF6, which encourages the study of supervised learning methods for commonsense reasoning. BIBREF39 (2018) find that large language models show promising results on the WSC resolution task BIBREF34, but this approach can hardly be applied in a more general question answering setting and also does not provide the explicit knowledge used in inference. A unique merit of our method is that it provides grounded, explicit knowledge triples and paths with scores, such that users can better understand and put trust in the behaviors and inferences of the model.

Injecting external knowledge for NLU: Our work also lies in the general context of using external knowledge to encode sentences or answer questions. BIBREF40 (2017) are among the first to propose encoding sentences by continually retrieving related entities from knowledge bases and then merging their embeddings into LSTM computations, achieving better performance on entity/event extraction tasks. BIBREF17 (2017), BIBREF28 (2018), and BIBREF41 (2018) follow this line of work to incorporate the embeddings of related knowledge triples at the word level and improve the performance of natural language understanding tasks. In contrast to our work, they do not explicitly impose graph-structured knowledge on models, but limit its potential to transforming word embeddings into concept embeddings. Some other recent attempts BIBREF19 BIBREF29 to use ConceptNet graph embeddings are adopted and compared in our experiments. BIBREF30 (2019) propose to manually collect more human explanations for correct answers as additional supervision for auxiliary training, while our framework focuses on injecting external knowledge as an explicit graph structure and enjoys relational reasoning capacity over the graphs.

Relational reasoning: Our module can be seen as a knowledge-augmented Relation Network (RN) module BIBREF26, which was proposed for the visual question answering task requiring relational reasoning, i.e., questions about the relations between multiple 3D objects in an image. We view the concepts in the questions and answers as objects and effectively utilize external knowledge graphs to model their relations from both semantic and symbolic spaces (relational path encoding), while prior methods mainly work in the semantic space.

We propose a knowledge-aware framework for learning to answer commonsense questions. The framework first constructs schema graphs to represent relevant commonsense knowledge and then models the graphs with our module. The module is based on a GCN-LSTM-HPA architecture, which effectively represents graphs for relational reasoning purposes in a transparent, interpretable way, yielding new state-of-the-art results on a large-scale, general dataset for testing machine commonsense. Future directions include better question parsing methods to deal with negation and comparative question answering, as well as incorporating knowledge into visual reasoning. This work has been supported in part by the National Science Foundation (SMA 1829268), DARPA MCS and GAILA, IARPA BETTER, the Schmidt Family Foundation, an Amazon Faculty Award, a Google Research Award, a Snapchat Gift, and a JP Morgan AI Research Award. We would like to thank all the collaborators in the INK research lab for their constructive feedback on the work. </s>
<s>[INST] Localized Flood DetectionWith Minimal Labeled Social Media Data Using Transfer Learning [/INST] There are various forms of a natural disaster such as flood earthquake volcano eruptions storms etc but the flood is one of the lethal and prominent forms of natural disaster according to World Meteorological Organization WMO for most of the countries National Weather Services NWS reported 28826 flash floods events in the United States from October 2007 to October 2015 which resulted in 278 live loss and milliondollar worth crop and property damage BIBREF0 Monitoring and detecting floods in advance and proactively working towards saving peoples live and minimizing damage at the same time is amongst one of the most important tasks nowadays In recent times humans are extremely active on social media such as Twitter Facebook Youtube Flickr Instagram etc People use these platform extensively to share crucial information via message photos and videos in realtime on social media for their interaction and information dissemination on every topic and acts as an active human sensor It has been observed in the past few years via several case studies that social media also contributes significantly and being used extensively for crisisrelated feeds BIBREF1 and extremely helpful in situation awareness towards crisis management BIBREF2 BIBREF3 BIBREF4 Emergency first responders agency humanitarian organizations city authorities and other end users are always looking for the right amount and content that would be helpful in the crisis scenarios but generally social media provides an overwhelming amount of unlabeled data and it is very crucial to filter out the right kind of information using text classification The advances in Artificial Intelligence AI which includes machine learning and Natural Language Processing NLP methods can track and focus on humanitarian relief process and extract meaningful insights from the huge amount of social media data generated regularly in a timely manner One of the major challenge while building a reliable and high accuracy model it needs a huge amount of labeled data in order to be evaluated properly and achieve higher accuracy Some of the platforms which uses crowdsourcing services and manually observe the data to label the disasterrelated information such as CrisisLexBIBREF5 CrisisNLPBIBREF6 CrisisMMDBIBREF7 AIDRBIBREF8 etc with already labeled data and pretrained models we can efficiently utilize the learned knowledge for the new target domain In general to make a good predictive model we need a huge amount of labeled data with specific domain to train that provide accurate reliable results for the new domain Transfer learning models efficiently leverage the existing knowledge and perform effectively the intended task by adapting to the new domain In Figure FIGREF1 shows the comparison of general transfer learning and NLP transfer learning Transfer learning learns from the source data model and applies the gained knowledge from the source domain to the target domain that requires relatively less labeled data Social media growth in last decade and availability of existing disasterrelated data sources labeled by crowdsourcing platforms provide an opportunity to utilize this data and build a learning model which learns the domain knowledge and transfer the learned knowledge to classify new data with higher accuracy and confidence automatically This can effectively solve some of the important problems in disaster management such as flood detection executing rescue 
operations sending feedback and contextual warnings to authorities improved situation awareness etc Transfer learning contains various type of knowledge sharing such as inductive transductive depending on the source and target domain data distribution and sourcetarget task relatedness BIBREF9 Figure FIGREF1 shows basic transfer Learning concept in NLP is slightly different than the general transfer learning In general transfer learning we have source domain and target domain the model build and learned from the source domain data is used to transfer the knowledge to the target domain task model Whereas in NLP the source domain is the general understanding of the text learned from not only one domain but from a giant corpus of text build a language model known as a pretrained language model These pretrained language models are further used for different downstream task such as text classification spam detection question answering etc We are using here the inductive transfer learning where we have a pretrained model as source task and improve the performance of the target task flood tweet classification We present in this study that using a pretrained model and very few labeled flood tweets we can achieve great accuracy effectively in no time The main contributions of this work are as follows We propose to use the inductive transfer learning method and adapt the ULMFiT Pretrain model for text classification We finetune the target model parameters by knowledge obtained from the source domain for quick and efficient flood tweet classification We show that ULMFiT method needs a very small amount of labeled data 5 to achieve high accuracy and performance This study demonstrates that this model can be applied in realtime flood detection and information extraction with very small training data for new application domain Growing active user base on social media and has been created a great opportunity for extracting crucial information in realtime for various events and topics Social media is being vigorously used as the communication channel in the time of any crisis or any natural disaster in order to convey the actionable information to the emergency responders to help them by more situational awareness context so that they make a better decision for rescue operations sending alerts reaching out people right on time There have been numerous works proposed related to crisis management using social media content which is discussed in the following section Social media for crisis management Mainly in the analysis of social media content related to crisis situations data type such as images geolocation videos text etc but most of the focus of these work has been images and geolocation towards crisis management BIBREF2 BIBREF3 BIBREF4 BIBREF10 Processing social media content is itself a huge challenge and comes with great challenges as well such as information processing cleaning filtering summarizing extracting etc There has been some progress lately in developing methods to extract meaningful information during a crisis for better situation awareness and better decision making BIBREF11 The text domain of the social media data has not been exploited to its fullest and it is generally the most valuable and available data on social media Text processing can provide great amount of details which can be useful for situation awareness and help towards extracting actionable insights Identifying relevant text data would eventually result in major event detection which is difficult to correctly track in a 
short amount of time, and fast processing is needed in these scenarios BIBREF11 BIBREF10.

Domain adaptation for crisis management: Transfer learning is a very popular and active research area of machine learning. This learning method is known for learning domain knowledge while solving a task and transferring its knowledge from one domain (source) to another domain (target) to solve the task in the new domain. We need to know these basic things while applying transfer learning: (1) what needs to be transferred, (2) when to transfer the learned knowledge, and (3) how to transfer knowledge. There are a few basic transfer learning algorithm principles that include a few simple steps, as follows: (i) minimize the error measure by re-weighting the source labeled samples such that they appear like target samples; (ii) adapt the model iteratively and label target examples using these common steps: (a) a model is learned from labeled examples, (b) it labels some target examples, and (c) a new model learns from the new labels BIBREF12 BIBREF13. Transfer learning has been explored and applied in various classification problems for high-quality and reliable results with less labeled data in the target domain. It has also been used for feature selection, pedestrian detection, improving visual tracking, and subtractive bias removal in the medical domain BIBREF12. Some of the other examples where transfer learning has been used are text classification BIBREF13, sentiment classification BIBREF14 BIBREF15, domain adaptation BIBREF16, and object classification BIBREF17.

In this section, we explain our data collection and cleaning process, followed by some data visualization for a better understanding of the data. The text data are decidedly very crucial, and if leveraged carefully in time, they can assist various emergency response services. They could greatly benefit the authorities in their decision-making process and rescue operations and increase situational awareness and early warnings. We are using Twitter data since it is one of the most widely used social media platforms in recent times.

Data Collection: We are using the disaster data from BIBREF5. It contains various datasets, including the CrisisLexT6 dataset, which contains six crisis events with related English tweets from 2012 and 2013, labeled by relatedness (on-topic and off-topic) to the respective crisis. Each crisis event contains almost 10,000 labeled tweets, but we are only focused on flood-related tweets; thus we experimented with only two flood events, i.e., the Queensland flood in Queensland, Australia, and the Alberta flood in Alberta, Canada, and relabeled all on-topic tweets as Related and off-topic tweets as Unrelated for implicit class-label understanding in this case. The data collection process and duration of the CrisisLex data are described in detail in BIBREF5.

Data Cleaning: The tweets in general are very noisy, and we need to clean them in order to use them for efficient model building. We removed stop words, numerals, special symbols and characters, punctuation, white space, random single letters, and URLs. We also transformed all the tweets into lower case to normalize them and remove redundancy in the data. After cleaning the tweets, we performed some data visualization for better data insights.

Data Visualization: Our focus here is to understand the basic characteristics of the tweets and demonstrate the power of the transfer learning method in this application. Although both of the datasets are similar in distribution, we have selected the Queensland flood dataset for elaboration. Table TABREF6 shows the fairly equal class distribution in the Queensland flood tweets, with 5,414 related flood tweets and 4,619 unrelated flood tweets.
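The cleaning step described above can be illustrated with a minimal Python sketch; the exact regular expressions, the NLTK stop-word list, and the example tweet are assumptions for illustration rather than the authors' original pipeline.

import re
from nltk.corpus import stopwords

STOP = set(stopwords.words('english'))  # requires the NLTK stopwords corpus to be downloaded

def clean_tweet(text: str) -> str:
    text = text.lower()                                  # normalize case
    text = re.sub(r'https?://\S+|www\.\S+', ' ', text)   # drop URLs
    text = re.sub(r'[@#]\w+', ' ', text)                 # drop mentions/hashtags (assumed)
    text = re.sub(r'[^a-z\s]', ' ', text)                # drop digits, punctuation, symbols
    tokens = [t for t in text.split() if t not in STOP and len(t) > 1]
    return ' '.join(tokens)

print(clean_tweet("Flooding on Main St!! Stay safe #qldfloods http://t.co/abc"))
# -> "flooding main st stay safe"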
Figure FIGREF7 shows the number of words in a tweet, which ranges from 5 words up to 30 words in a single tweet. Figure FIGREF7 shows the tweet length distribution, which ranges from 30 characters up to 140 characters per tweet. Figures FIGREF10, FIGREF10, and FIGREF10 show the top 20 most frequent words, bigrams, and trigrams, respectively, of the tweet dataset. By visual inspection of these most frequent words, bigrams, and trigrams, we get a general understanding of the major topics and themes in the data. Tweet characteristics are generally similar in most cases, so it is highly probable that this approach can be effectively applied to other scenarios or new locations as well.

It is well known that numerous state-of-the-art models in NLP require huge data to be trained on from scratch to achieve reasonable results. These models take a huge amount of memory and are immensely time-consuming. NLP researchers have been looking into various successful methods and models in computer vision (CV) to attain similar success in NLP. A major breakthrough in CV was transferring knowledge obtained from models pretrained on ImageNet BIBREF18 as a source task to target tasks for efficient results. There has been a huge advancement in the area of transfer learning in NLP due to the introduction of pretrained language models such as ULMFiT BIBREF19, ELMo BIBREF20, GLUE BIBREF21, BERT BIBREF22, Attentionnet BIBREF23, XLNet BIBREF24, and many more to come. These pretrained models have acquired state-of-the-art performance on many NLP tasks, since they use a huge amount of training data for language understanding as their source models and fine-tune the model to achieve high accuracy in the target task. We are using ULMFiT in this study since it has shown significant performance on target-domain classification tasks with minimal labeled data, along with less training time and reasonable hardware requirements, whereas other models such as BERT, XLNet, etc., are much bigger and more complex and need large training time and higher-end hardware.

ULMFiT BIBREF19 was introduced by Howard and Ruder and can effectively be applied as a transfer learning method for various NLP tasks. In inductive transfer learning, the source task (language modeling) is generally different from the target task (flood detection) and requires labeled data in the target domain. ULMFiT is a pretrained model that is very suitable for efficient text classification: it significantly outperformed previous approaches on text classification, reducing error by 18-24% on various datasets and achieving high accuracy with very little labeled data. Some examples where researchers have used ULMFiT to solve a specific problem using the power of transfer learning are BIBREF25 and BIBREF26. Although ULMFiT has the capability to handle any type of classification task, such as topic classification, question classification, etc., we are specifically targeting flood-related tweet classification. Text classification in any new area generally suffers from no or very little labeled data to work with initially. Inductive transfer learning addresses this very challenge, and the ULMFiT method is primarily based on this concept. We have used the pretrained language model ULMFiT to do the classification for the target task and classify the related and unrelated flood tweets coming from a different location on social media (Twitter). As shown in Figure FIGREF16, our overall framework is adapted from BIBREF19 to do the flood tweet classification.
As shown in Figure FIGREF16, we use the ULMFiT architecture to solve the flood tweet classification problem. The source domain model is trained on a huge text corpus, the WikiText-103 dataset, which contains 103 million words, with a 400-dimensional embedding size, a 3-layer neural network architecture (AWD-LSTM), and 1150 hidden activations per layer. This general-domain LM pretraining creates a general-domain language model that predicts the next word in the sequence and learns general features of the language. AWD-LSTM BIBREF27 is a regular LSTM used for language modeling, with various regularization and optimization techniques that produce state-of-the-art results. The next step is target task LM fine-tuning, which realizes the transfer learning idea by taking the knowledge gained in the previous step and utilizing it in the target task. Here the target task is flood tweet detection, which has a different data distribution and features, so the general model is fine-tuned for the target task and adapts to the new (target) domain by learning the target task-specific features of the language. This is done using discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM. Finally, the target task classifier produces classification results as a probability distribution over the flood class labels (related and unrelated). This is a very critical part of the transfer learning method: it needs to be carefully balanced, fine-tuned neither too slowly nor too aggressively, using gradual unfreezing of the classifier. We used some of the same hyperparameters for this task.

In this section, we discuss our experimental results for the text classification. As described above in the methodology section, our source domain model comes from ULMFiT, and the target domain data is the Queensland flood data, which has almost 10,000 tweets labeled as flood Related and Unrelated. The pretrained ULMFiT model uses the AWD-LSTM language model with an embedding size of 400, 3 layers, 1150 hidden activations per layer, a batch size of 70, and back propagation through time (BPTT) BIBREF19. Dropout of 0.7 has been used for the language model learner and 0.7 for the text classifier learner. A base learning rate of 0.01 for LM fine-tuning and multiple learning rate values ranging from 0.00001 to 0.1 have been used for target classifier fine-tuning in various instances. We have used gradual unfreezing of the model layers to avoid the risk of catastrophic forgetting: fine-tuning starts at the last layer (minimal general knowledge) and moves to the next lower layer onwards in each iteration to attain the highest performance of the model. We have used the following hardware for the experimentation: a Windows 10 Education desktop with an Intel Core i7 processor and 16GB RAM. We have used Python 3.6 and a Google Colab notebook to execute our model and obtained the results discussed below. The train and test data have been divided in a 70:30 ratio, and we obtained the results shown in Table TABREF17 for the individual datasets and the combination of both. The pretrained network was already trained, and we used the target data (Queensland flood), which provided 96% accuracy with 0.118 test loss in only 11 seconds, given that we used only 70% of the labeled data for training. The second target dataset is the Alberta flood with the same train/test split configuration, which provided 95% accuracy with 0.118 test loss in just 19 seconds. As we can see, it takes very little time to work with the roughly 20,000 tweets combined, and in times of emergency it can handle a huge amount of unlabeled data and classify it into meaningful categories in minutes.
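A minimal sketch of this two-stage fine-tuning pipeline, assuming the fastai v1 text API and pandas DataFrames df_train/df_valid with 'text' and 'label' columns; the column names, learning rates, and epoch counts are illustrative assumptions rather than the exact settings reported above.

# Sketch of ULMFiT fine-tuning with the fastai v1 text API (names assume fastai 1.x).
from fastai.text import *   # TextLMDataBunch, text_classifier_learner, AWD_LSTM, ...

# df_train / df_valid: assumed pandas DataFrames with 'text' and 'label' columns.
data_lm = TextLMDataBunch.from_df('.', train_df=df_train, valid_df=df_valid,
                                  text_cols='text')
data_clas = TextClasDataBunch.from_df('.', train_df=df_train, valid_df=df_valid,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.vocab, bs=32)

# 1) Target-task LM fine-tuning, starting from the WikiText-103 AWD-LSTM weights
#    (fit_one_cycle gives the slanted triangular learning-rate schedule).
lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.7)
lm_learn.fit_one_cycle(1, 1e-2)
lm_learn.unfreeze()
lm_learn.fit_one_cycle(1, 1e-3)
lm_learn.save_encoder('ft_enc')            # keep the fine-tuned encoder

# 2) Target-task classifier with gradual unfreezing, one layer group at a time.
clf = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.7)
clf.load_encoder('ft_enc')
clf.fit_one_cycle(1, 1e-2)                 # train only the classifier head first
clf.freeze_to(-2)
clf.fit_one_cycle(1, slice(1e-3, 1e-2))
clf.unfreeze()
clf.fit_one_cycle(2, slice(1e-5, 1e-3))

print(clf.predict("queensland roads cut off by rising flood waters"))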
Here, our focus is localized flood detection; thus we are not merging multiple datasets (we leave the combination for future work) and stay with one dataset, the Queensland flood data, exploring it in detail. As can be seen in Table TABREF18, even with 5% of the data, which is only 500 labeled tweets as target labeled data, the model can adapt and fine-tune the classification model with 95% accuracy. This model is very efficient and effective when we have a time-sensitive application: instead of training a model from scratch with huge data, we can use the pretrained model and successfully apply it to the target domain application. Table TABREF18 also shows that even with very little labeled training data, the model was able to achieve accuracy almost equivalent to that obtained with 80% of the training data. There is generally a direct relation which says that more training data is better, but here the increased labeled data did not contribute significantly to accuracy improvement. There are additional measures for assessing the quality of the classification, such as training/testing loss and average precision, to avoid bias in the accuracy. Figure FIGREF19 shows the learning rate adjusting according to the target classifier model, showing that with the specific learning rate it achieves a low loss, which is called the slanted triangular learning rate. Figure FIGREF19 shows the precision-recall curve for a particular classification instance, where the average precision is 0.94. It shows that the overall quality of the classification is fairly good and does not favor one class over another. As described above and based on the experimental results, we can use a very low amount of labeled data and solve the localized flood disaster situation efficiently for any new location. We faced some limitations in this work and plan to address them in our future work, described in the next section.

We have focused on a specific type of disaster (flood) here and did not explore other disaster types, since we wanted to capture the characteristics of a specific kind of disaster and learn from it for another flood disaster. We plan to perform extensive experimentation with other kinds of disaster data in the future. We have explored and experimented with the Twitter dataset only so far, because it is widely available and accessible for everyone, but we would attempt to include different kinds of data sources, such as other social media platforms, news feeds, blogs, text, images, etc., to make it a multimodal transfer learning approach in our future models. There are other state-of-the-art pretrained language models, such as BERT, GPT-2, Transformer-XL, etc., available for text classification, and we would want to compare this adaptation with other models as well to find the most time-effective models in the given situation. There can be many more applications where multi-class classification, including various classes such as damage, rescue, buildings, transportation, medical, etc., can be labeled with a small amount of data in order to build a very efficient classification model. We also plan to formulate this as a multi-class problem in order to more deeply address the problems in disaster management. This opens up a new door for cyber-physical-social systems that would rely on social media feeds coming from human sensors along with wireless physical-environmental sensors in tandem for various applications, to create another layer of smart sensors that can achieve a high-quality, more reliable, and fault-tolerant system. As we
are aware of the calamity due to flood flash flood situation which needs close monitoring and detail attention With the exponential growth in social media users there is an ample amount of data which can be extremely useful in flood detection Transfer learning is very helpful in these applications where we need to train with general knowledge along with little target domain knowledge to attain a highly effective model We have discovered that inductive transfer learning methods are very useful for social media flood detection data with minimal labeled data We used Queensland Twitter data as one of the flood locations and used the pretrained model ULMFiT to successfully classify with accuracy 95 the floodrelated tweets with only 5 of labeled target samples under 10 seconds whereas in general it takes thousands of labeled tweets and huge time to achieve the similar performance The usage of pretrained models with minimal space and time complexity it can be a huge advantage to the timesensitive application where we need to process millions of tweets efficiently and classify them accordingly with high performance without compromising on the accuracy This research is funded by the National Science Foundation NSF grant number 1640625 I would like to thank my mentor and advisor Dr Nirmalya Roy for their motivation support and feedback for my research I am grateful for Dr Aryya Gangopadhyay coadvisor for the discussion and continuous encouragement towards my work </s>
<s>[INST] Applications of Online Deep Learning for Crisis Response Using Social Media Information [/INST] Emergency events such as natural or manmade disasters bring unique challenges for humanitarian response organizations Particularly suddenonset crisis situations demand officials to make fast decisions based on minimum information available to deploy rapid crisis response However information scarcity during timecritical situations hinders decisionmaking processes and delays response efforts BIBREF0 BIBREF1 During crises people post updates regarding their statuses ask for help and other useful information report infrastructure damages injured people etc on social media platforms like Twitter BIBREF2 Humanitarian organizations can use this citizengenerated information to provide relief if critical information is easily available in a timely fashion In this paper we consider the classification of the social media posts into different humanitarian categories to fulfill different information needs of humanitarian organizations Specifically we address two types of information needs described as follows Informativeness of social media posts Information posted on social networks during crises vary greatly in value Most messages contain irrelevant information not useful for disaster response and management Humanitarian organizations do not want a deluge of noisy messages that are of a personal nature or those that do not contain any useful information They want clean data that consists of messages containing potentially useful information They can then use this information for various purposes such as situational awareness In order to assist humanitarian organizations we perform binary classification That is we aim to classify each message into one of the two classes ie informative vs not informative Information types of social media posts Furthermore humanitarian organizations are interested in sorting social media posts into different categories Identifying social media posts by category assists humanitarian organizations in coordinating their response Categories such as infrastructure damage reports of deceased or injured urgent need for shelter food and water or donations of goods or services could therefore be directed to different relief functions In this work we show how we can classify tweets into multiple classes Automatic classification of short crisisrelated messages such as tweets is a challenging task due to a number of reasons Tweets are short only 140 characters informal often contain abbreviations spelling variations and mistakes and therefore they are hard to understand without enough context Despite advances in natural language processing NLP interpreting the semantics of short informal texts automatically remains a hard problem Traditional classification approaches rely on manually engineered features like cue words and TFIDF vectors for learning BIBREF1 Due to the high variability of the data during a crisis adapting the model to changes in features and their importance manually is undesirable and often infeasible To overcome these issues we use Deep Neural Networks DNNs to classify the tweets DNNs are usually trained using online learning and have the flexibility to adaptively learn the model parameters as new batches of labeled data arrive without requiring to retrain the model from scratch DNNs use distributed condensed representation of words and learn the representation as well as higher level abstract features automatically for the classification task Distributed 
representation, as opposed to sparse discrete representation, generalizes well. This can be a crucial advantage at the beginning of a new disaster when there is not enough event-specific labeled data. We can train a reasonably good DNN model using previously labeled data from other events, and then the model is fine-tuned adaptively as newly labeled data arrives in small batches. In this paper, we use Deep Neural Networks (DNNs) to address two types of information needs of response organizations: identifying informative tweets and classifying them into topical classes. DNNs use distributed representation of words and learn the representation as well as higher-level features automatically for the classification task. We propose a new online algorithm based on stochastic gradient descent to train DNNs in an online fashion during disaster situations. Moreover, we make our source code publicly available for the crisis computing community for further research at httpsgithubcomCrisisNLPdeeplearningforbigcrisisdata. In the next section, we provide details regarding the DNNs we use and the online learning algorithm. Section Dataset and Experimental Settings describes the datasets and online learning settings. In Section Results we describe the results of our models. Section Related Work presents related work, and we conclude our paper in Section Conclusions.

As argued before, deep neural networks (DNNs) can be quite effective in classifying tweets during a disaster situation because of their distributed representation of words and automatic feature learning capabilities. Furthermore, DNNs are usually trained using online algorithms, which nicely suits the needs of a crisis response situation. Our main hypothesis is that, in order to effectively classify tweets, which are short and informal, a classification model should learn the key features at different levels of abstraction. To this end, we use a Convolutional Neural Network (CNN), which has been shown to be effective for sentence-level classification tasks BIBREF3. Figure 1 demonstrates how a CNN works with an example tweet. Each word in the vocabulary $V$ is represented by a $D$-dimensional vector in a shared lookup table $L \in \mathbb{R}^{|V| \times D}$. $L$ is considered a model parameter to be learned. We can initialize $L$ randomly or using pretrained word embedding vectors like word2vec BIBREF4. Given an input tweet $\mathbf{s} = (w_1, \cdots, w_T)$, we first transform it into a feature sequence by mapping each word token $w_t \in \mathbf{s}$ to an index in $L$. The lookup layer then creates an input vector $\mathbf{x}_t \in \mathbb{R}^{D}$ for each token $w_t$, which are passed through a sequence of convolution and pooling operations to learn high-level abstract features.

A convolution operation involves applying a filter $\mathbf{u} \in \mathbb{R}^{L \cdot D}$ to a window of $L$ words to produce a new feature $h_t = f(\mathbf{u} \cdot \mathbf{x}_{t:t+L-1} + b_t)$ (Eq. 5), where $\mathbf{x}_{t:t+L-1}$ denotes the concatenation of $L$ input vectors, $b_t$ is a bias term, and $f$ is a nonlinear activation function (e.g., tanh). A filter is also known as a kernel or a feature detector. We apply this filter to each possible $L$-word window in the tweet to generate a feature map $\mathbf{h}^i = [h_1, \cdots, h_{T+L-1}]$. We repeat this process $N$ times with $N$ different filters to get $N$ different feature maps. We use a wide convolution BIBREF5 (as opposed to narrow), which ensures that the filters reach the entire sentence, including the boundary words. This is done by performing zero-padding, where out-of-range vectors (i.e., those indexed before the first or after the last word of the tweet) are assumed to be zero. After the convolution, we apply a max-pooling operation to each feature map, $\mathbf{m} = [\mu_p(\mathbf{h}^1), \cdots, \mu_p(\mathbf{h}^N)]$ (Eq. 6), where $\mu_p(\mathbf{h}^i)$ refers to the max operation applied to each window of $p$ features in the feature map $\mathbf{h}^i$. For instance, with $p=2$, this pooling gives the same number of features as in the feature map (because of the zero-padding). Intuitively, the filters compose local n-grams into higher-level representations in the feature maps, and max-pooling reduces the output dimensionality while keeping the most important aspects from each feature map. Since each convolution-pooling operation is performed independently, the extracted features become invariant in location (i.e., where they occur in the tweet), thus acting like bags of n-grams. However, keeping the order information could be important for modeling sentences. In order to model interactions between the features picked up by the filters and the pooling, we include a dense layer of hidden nodes on top of the pooling layer, $\mathbf{z} = f(V\mathbf{m} + \mathbf{b}_h)$ (Eq. 7), where $V$ is the weight matrix, $\mathbf{b}_h$ is a bias vector, and $f$ is a nonlinear activation. The dense layer naturally deals with variable sentence lengths by producing fixed-size output vectors $\mathbf{z}$, which are fed to the output layer for classification.

Depending on the classification task, the output layer defines a probability distribution. For binary classification tasks, it defines a Bernoulli distribution, $p(y|\mathbf{s}, \theta) = \mathrm{Ber}\big(y \,|\, \sigma(\mathbf{w}^T \mathbf{z} + b)\big)$ (Eq. 8), where $\sigma$ refers to the sigmoid function, $\mathbf{w}$ are the weights from the dense layer to the output layer, and $b$ is a bias term. For multi-class classification, the output layer uses a softmax function. Formally, the probability of the $k$-th label in the output for classification into $K$ classes is $P(y = k|\mathbf{s}, \theta) = \frac{\exp(\mathbf{w}_k^T \mathbf{z} + b_k)}{\sum_{j=1}^{K} \exp(\mathbf{w}_j^T \mathbf{z} + b_j)}$ (Eq. 9), where $\mathbf{w}_k$ are the weights associated with class $k$ in the output layer. We fit the models by minimizing the cross-entropy between the predicted distributions $\hat{y}_n^{\theta} = p(y_n|\mathbf{s}_n, \theta)$ and the target distributions $y_n$ (i.e., the gold labels). The objective function $f(\theta)$ can be written as $f(\theta) = -\sum_{n=1}^{N} \sum_{k=1}^{K} y_{nk} \log P(y_n = k|\mathbf{s}_n, \theta)$ (Eq. 11), where $N$ is the number of training examples and $y_{nk} = \mathbb{I}(y_n = k)$ is an indicator variable encoding the gold labels, i.e., $y_{nk} = 1$ if the gold label $y_n = k$, otherwise 0.

DNNs are usually trained with first-order online methods like stochastic gradient descent (SGD). This method yields a crucial advantage in crisis situations, where retraining the whole model each time a small batch of labeled data arrives is impractical. Algorithm Online Learning demonstrates how our CNN model can be trained in a purely online setting. We first initialize the model parameters $\theta_0$ (line 1), which can be a trained model from other disaster events, or it can be initialized randomly to start from scratch. As a new batch of labeled tweets $B_t = \lbrace \mathbf{s}_1, \ldots, \mathbf{s}_n \rbrace$ arrives, we first compute the log-loss (cross-entropy) in Equation 11 for $B_t$ with respect to the current parameters $\theta_t$ (line 2a). Then we use backpropagation to compute the gradients $f^{\prime}(\theta_t)$ of the loss with respect to the current parameters (line 2b). Finally, we update the parameters with the learning rate $\eta_t$ and the mean of the gradients (line 2c). We take the mean of the gradients to deal with mini-batches of different sizes. Notice that we take only the current mini-batch into account to get an updated model. Choosing a proper learning rate $\eta_t$ can be difficult in practice; several adaptive methods, such as ADADELTA BIBREF6, ADAM BIBREF7, etc., have been proposed to overcome this issue. In our model, we use ADADELTA.
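As a concrete illustration of Eqs. (5)-(9), the following is a minimal PyTorch sketch of such a CNN tweet classifier; the embedding dimension, number of filters, and window size are illustrative assumptions rather than the tuned values reported later, and the pooling is simplified to a global max over each feature map.

import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, n_filters=150, window=3,
                 dense_dim=150, n_classes=2):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, emb_dim)   # lookup table L
        # "wide" convolution: extra padding so filters also cover boundary words
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=window,
                              padding=window - 1)
        self.dense = nn.Linear(n_filters, dense_dim)
        self.out = nn.Linear(dense_dim, n_classes)

    def forward(self, token_ids):                    # token_ids: (batch, T)
        x = self.lookup(token_ids).transpose(1, 2)   # (batch, emb_dim, T)
        h = torch.tanh(self.conv(x))                 # N feature maps, Eq. (5)
        # max-pooling over each feature map (global max here for simplicity;
        # the paper pools over windows of p features), Eq. (6)
        m = h.max(dim=2).values                      # (batch, n_filters)
        z = torch.tanh(self.dense(m))                # dense layer, Eq. (7)
        return self.out(z)                           # class logits for Eqs. (8)-(9)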
Algorithm (Online learning of CNN): (1) Initialize the model parameters $\theta_0$. (2) For each mini-batch $B_t = \lbrace \mathbf{s}_1, \ldots, \mathbf{s}_n \rbrace$ arriving at time $t$: (a) compute the loss $f(\theta_t)$ in Equation 11, (b) compute the gradients $f^{\prime}(\theta_t)$ of the loss using backpropagation, and (c) update $\theta_{t+1} = \theta_t - \eta_t \frac{1}{n} f^{\prime}(\theta_t)$.

As mentioned before, we can initialize the word embeddings $L$ randomly and learn them as part of the model parameters by backpropagating the errors to the lookup layer. Random initialization may lead the training algorithm to get stuck in a local minimum. One can instead plug readily available embeddings from external sources (e.g., Google embeddings BIBREF4) into the neural network model and use them as features without further task-specific tuning. However, the latter approach does not exploit the automatic feature learning capability of DNN models, which is one of the main motivations for using them. In our work, we use pretrained word embeddings (see below) to better initialize our models, and we fine-tune them for our task, which turns out to be beneficial. Mikolov et al. BIBREF4 propose two log-linear models for computing word embeddings from large unlabeled corpora efficiently: a bag-of-words model (CBOW) that predicts the current word based on the context words, and a skip-gram model that predicts surrounding words given the current word. They released their pretrained 300-dimensional word embeddings trained by the skip-gram model on a Google News dataset. Since we work on disaster-related tweets, which are quite different from news, we have trained domain-specific embeddings of 300 dimensions (vocabulary size 20 million) using the skip-gram model of the word2vec tool BIBREF8 on a large corpus of disaster-related tweets. The corpus contains 57,908 tweets and 94 million tokens.

In this section, we describe the datasets used for the classification tasks and the settings for the CNN and online learning. We use CrisisNLP BIBREF9 labeled datasets. The CNN models were trained online using a labeled dataset related to the 2015 Nepal Earthquake, and the rest of the datasets are used to train an initial model $\theta_0$ in Algorithm Online Learning, upon which the online learning is performed. The Nepal earthquake dataset consists of approximately 12k labeled tweets collected from Twitter during the event using different keywords like NepalEarthquake. Of all the labeled tweets, 9k were labeled by trained volunteers during the actual event using the AIDR platform BIBREF10, and the remaining 3k tweets were labeled using the Crowdflower crowdsourcing platform. The dataset is labeled into different informative classes (e.g., affected individuals, infrastructure damage, donations, etc.) and one not-related or irrelevant class. Table 1 provides a one-line description of each class and also the total number of labels in each class. Other useful information and Not related or irrelevant are the most frequent classes in the dataset.

Data Preprocessing: We normalize all characters to their lowercased forms, truncate elongations to two characters, spell out every digit as D, map all Twitter usernames to userID, and map all URLs to HTTP. We remove all punctuation marks except periods, semicolons, question marks, and exclamation marks. We further tokenize the tweets using the CMU TweetNLP tool BIBREF11. Before performing the online learning, we assume that an initial model $\theta_0$ exists. In our case, we train the initial model using all the datasets from CrisisNLP except the Nepal earthquake. For online training, we sort the Nepal labeled data based on the time stamps of the tweets. This brings the tweets in their posting order.
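A minimal sketch of the online update procedure from the algorithm above, applied to such time-ordered mini-batches; the TweetCNN class from the earlier sketch, the vocabulary size, and the synthetic minibatch_stream are assumptions for illustration.

import torch
import torch.nn.functional as F

model = TweetCNN(vocab_size=20000)        # hypothetical vocabulary size
optimizer = torch.optim.Adadelta(model.parameters())

def online_update(model, optimizer, token_ids, labels):
    """One update on a newly arrived labeled mini-batch B_t (steps 2a-2c)."""
    model.train()
    optimizer.zero_grad()
    logits = model(token_ids)               # forward pass on the current batch only
    loss = F.cross_entropy(logits, labels)  # cross-entropy objective of Eq. (11)
    loss.backward()                         # gradients via backpropagation (step 2b)
    optimizer.step()                        # ADADELTA parameter update (step 2c)
    return loss.item()

# Tiny synthetic stream standing in for the time-ordered labeled batches (assumption).
minibatch_stream = [(torch.randint(0, 20000, (16, 20)), torch.randint(0, 2, (16,)))
                    for _ in range(3)]
for token_ids, labels in minibatch_stream:
    batch_loss = online_update(model, optimizer, token_ids, labels)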
Next, the dataset $D$ is divided at each time interval $d_t$; in this case, $D$ is defined as $D = \sum_{t=1}^{T} d_t$, where $|d_t| = 200$. For each time interval $t$, we divide the available labeled dataset into a train set (70%), a dev set (10%), and a test set (20%) using the scikit-learn toolkit's module BIBREF12, which ensured that the class distribution remains reasonably balanced in each subset. Based on the data splitting strategy mentioned above, we start online learning to train a binary and a multi-class classifier. For the binary classifier training, we merge all the informative classes to create one general Informative class. We train the CNN models by optimizing the cross-entropy in Equation 8 using the gradient-based online learning algorithm ADADELTA BIBREF6. The learning rate and the other parameters were set to the values suggested by the authors. The maximum number of epochs was set to 25. To avoid overfitting, we use dropout BIBREF13 of hidden units and early stopping based on the accuracy on the validation set. We experimented with dropout rates in $\lbrace 0.0, 0.2, 0.4, 0.5\rbrace$ and mini-batch sizes in $\lbrace 32, 64, 128\rbrace$. We limit the vocabulary $V$ to the most frequent $P\%$ of words in the training corpus, with $P \in \lbrace 80, 85, 90\rbrace$. The word vectors in $L$ were initialized with the pretrained embeddings. We use rectified linear units (ReLU) for the activation functions $f$, $\lbrace 100, 150, 200\rbrace$ filters, each having a window size $L$ in $\lbrace 2, 3, 4\rbrace$, a pooling length $p$ in $\lbrace 2, 3, 4\rbrace$, and $\lbrace 100, 150, 200\rbrace$ dense layer units. All the hyperparameters are tuned on the development set.

In this section, we present our results for the binary and multi-class classification tasks. Figure 2 shows the results for the informative vs. not informative binary classification task using online learning. The performance of the model is quite inconsistent as the size of the in-event training data varies. We observe an improvement in performance initially; however, the results dropped when the training size is between 2,200 and 3,900 tweets. We investigated this strange result and found that it could be due to inconsistencies in the annotation procedure and the data sources. In our in-event Nepal Earthquake training data, the first 3,000 tweets are from CrowdFlower and the rest are from AIDR. Tweets in CrowdFlower were annotated by paid workers, whereas AIDR tweets were annotated by volunteers. We speculate that these inconsistencies can affect the performance at the beginning, but as the model sees more AIDR data (beyond 4,000 tweets), the performance stabilizes. Figure 3 summarizes the results of online training for the multi-class
take labeled tweets as input and automatically learn features based on distributed representation of words Rapid analysis of social media posts during timecritical situations is important for humanitarian response organization to take timely decisions and to launch relief efforts This work proposes solutions to two main challenges that humanitarian organizations face while incorporating social media data into crisis response First how to filterout noisy and irrelevant messages from big crisis data and second categorization of the informative messages into different classes of interest By utilizing labeled data from past crises we show the performance of DNNs trained using the proposed online learning algorithm for binary and multiclass classification tasks We observe that past labeled data helps when no eventspecific data is available in the early hours of a crisis However labeled data from event always help improve the classification accuracy Recent studies have shown the usefulness of crisisrelated data on social media for disaster response and management BIBREF14 BIBREF15 BIBREF16 A number of systems have been developed to classify extract and summarize BIBREF17 crisisrelevant information from social media for a detailed survey see BIBREF1 Cameron et al describe a platform for emergency situation awareness BIBREF18 They classify interesting tweets using an SVM classifier Verma et al use Naive Bayes and MaxEnt classifiers to find situational awareness tweets from several crises BIBREF19 Imran et al implemented AIDR to classify a Twitter data stream during crises BIBREF10 They use a random forest classifier in an offline setting After receiving every minibatch of 50 training examples they replace the older model with a new one In BIBREF20 the authors show the performance of a number of nonneural network classifiers trained on labeled data from past crisis events However they do not use DNNs in their comparison DNNs and word embeddings have been applied successfully to address NLP problems BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 The emergence of tools such as word2vec BIBREF8 and GloVe BIBREF26 have enabled NLP researchers to learn word embeddings efficiently and use them to train better models Collobert et al BIBREF21 presented a unified DNN architecture for solving various NLP tasks including partofspeech tagging chunking named entity recognition and semantic role labeling They showed that DNNs outperform traditional models in most of these tasks They also proposed a multitask learning framework for solving the tasks jointly Kim BIBREF3 and Kalchbrenner et al BIBREF5 used convolutional neural networks CNN for sentencelevel classification tasks eg sentimentpolarity classification question classification and showed that CNNs outperform traditional methods eg SVMs MaxEnts Caragea Silvescu and Tapia used CNNs to identify informative messages during disasters BIBREF22 However to the best of our knowledge no previous research has shown the efficacy of CNNs to both the binary classification and the multiclass classification problems using online learning We presented an online learning model namely Convolutional Neural Network for the purpose of classifying tweets in a disaster response scenario We proposed a new online learning algorithm for training CNNs in online fashion We showed that online training of the model perfectly suits the disaster response situation We assume that a base model trained on past crisis labeled data exists and the eventspecific labeled data arrive in small 
batches which are used to perform online learning The neural network models bring an additive advantage of automatic feature extraction which eases the training process when compared with offline learning methods like SVM logistic regression The model uses only labeled tweets for training and automatically learns features from them We reported the results of two classification tasks ie binary and multiclass Moreover we also provide source code for the online learning of CNN models to research community for further extensions </s>
<s>[INST] Small and Practical BERT Models for Sequence Labeling [/INST] We discuss two core models for addressing sequence labeling problems and describe, for each, how to train them in a single-model multilingual setting: (1) the MetaLSTM BIBREF0, an extremely strong baseline for our tasks, and (2) a multilingual BERT-based model BIBREF1. The MetaLSTM is the best-performing model of the CoNLL 2018 Shared Task BIBREF2 for universal part-of-speech tagging and morphological features. The model is composed of 3 LSTMs: a character-BiLSTM, a word-BiLSTM, and a single joint BiLSTM which takes the output of the character- and word-BiLSTMs as input. The entire model structure is referred to as MetaLSTM. To set up multilingual MetaLSTM training, we take the union of all the word embeddings from the bojanowski2017enriching embeddings model on Wikipedia in all languages. For out-of-vocabulary words, a special unknown token is used in place of the word. The model is then trained as usual with cross-entropy loss: the char-BiLSTM and word-BiLSTM are first trained independently, and finally we train the entire MetaLSTM. BERT is a transformer-based model BIBREF3 pretrained with a masked-LM task on millions of words of text. In this paper, our BERT-based experiments make use of the cased multilingual BERT model available on GitHub and pretrained on 104 languages. Models fine-tuned on top of BERT achieve state-of-the-art results on a variety of benchmark and real-world tasks. To train a multilingual BERT model for our sequence prediction tasks, we add a softmax layer on top of the first wordpiece BIBREF4 of each token and fine-tune on data from all languages combined. During training, we concatenate examples from all treebanks and randomly shuffle the examples.

The results in Table TABREF1 make it clear that the BERT-based model for each task is a solid win over a MetaLSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deployment memory-intensive and inference slow (230ms on an Intel Xeon CPU). Our goal is to produce a model fast enough to run on a single CPU while maintaining the modeling capability of the large model on our tasks. We choose a three-layer BERT, which we call MiniBERT, that has the same number of layers as the MetaLSTM and has fewer embedding parameters and hidden units than both models. Table TABREF7 shows the parameters of each model. The MetaLSTM has the largest number of parameters, dominated by the large embeddings; BERT's parameters are mostly in the hidden units; the MiniBERT has the fewest total parameters. The inference-speed bottleneck for the MetaLSTM is the sequential character-LSTM unrolling, and for BERT it is the large feed-forward layers and the attention computation, whose time complexity is quadratic in the sequence length. Table TABREF8 compares the model speeds. BERT is much slower than both MetaLSTM and MiniBERT on CPU; however, it is faster than MetaLSTM on GPU due to the parallel computation of the transformer. The MiniBERT is significantly faster than the other models on both GPU and CPU.

For model distillation BIBREF6, we extract sentences from Wikipedia in the languages on which the public multilingual BERT is pretrained. For each sentence, we use the open-source BERT wordpiece tokenizer BIBREF4 BIBREF1 and compute a cross-entropy loss for each wordpiece, $\ell_i = H\big(\mathrm{softmax}(z^{B}_i / T),\ \mathrm{softmax}(z^{S}_i / T)\big)$, where $H$ is the cross-entropy function, $\mathrm{softmax}$ is the softmax function, $z^{B}_i$ is the BERT model's logits for the current wordpiece, $z^{S}_i$ is the small BERT model's logits, and $T$ is a temperature hyperparameter explained in Section SECREF11. To train the distilled multilingual model, mMiniBERT, we first use the distillation loss above to train the student from scratch using the teacher's logits on unlabeled data. Afterwards, we fine-tune the student model on the labeled data the teacher is trained on.
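A minimal PyTorch sketch of this per-wordpiece distillation objective; applying the temperature to both the teacher and the student logits follows standard temperature-based distillation and is an assumption here, as are the tensor shapes in the usage example.

import torch
import torch.nn.functional as F

def distill_loss(teacher_logits, student_logits, temperature=3.0):
    """Cross-entropy between the teacher's softened distribution and the student's.
    teacher_logits, student_logits: (num_wordpieces, num_labels)."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # H(p_teacher, p_student), averaged over wordpieces
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Hypothetical usage with random logits for 8 wordpieces and 17 POS labels:
t = torch.randn(8, 17)
s = torch.randn(8, 17, requires_grad=True)
loss = distill_loss(t, s)
loss.backward()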
We use universal part-of-speech tagging and morphology data from the CoNLL 2018 Shared Task BIBREF7 BIBREF8. For comparison simplicity, we remove the languages that the multilingual BERT public checkpoint is not pretrained on. For segmentation, we use a baseline segmenter (UDPipe v2.2) provided by the shared task organizers to segment raw text. We train and tune the models on gold-segmented data and apply the segmenter on the raw text of the test data before applying our models. The part-of-speech tagging task has 17 labels for all languages. For morphology, we treat each morphological group as a class and take the union of all classes, giving an output space of 18,334 labels. For the MetaLSTM, we use the public repository's hyperparameters. Following devlin2019, we use a smaller learning rate of 3e-5 for fine-tuning and a larger learning rate of 1e-4 when training from scratch and during distillation. The training batch size is set to 16 for fine-tuning and 256 for distillation. For distillation, we try a range of temperatures and use the teacher-student accuracy for evaluation. We observe that BERT is very confident in its predictions, and using a large temperature to soften the distribution consistently yields the best result.

We compare per-language models trained on single-language treebanks with multilingual models in Table TABREF1 and Table TABREF14. In the experimental results, we use the prefix m to denote that a model is a single multilingual model. We compare MetaLSTM, BERT, and MiniBERT. mBERT performs the best among all multilingual models. The smallest and fastest model, mMiniBERT, performs comparably to mBERT and outperforms mMetaLSTM, a state-of-the-art model for this task. When compared with per-language models, the multilingual models have lower F1; DBLPjournalscorrabs190402099 shows similar results. The MetaLSTM, when trained in a multilingual fashion, has bigger drops than BERT in general; most of the MetaLSTM drop is due to the character-LSTM, which drops by more than 4 points F1. We pick languages with fewer than 500 training examples to investigate the performance on low-resource languages: Tamil (ta), Marathi (mr), Belarusian (be), Lithuanian (lt), Armenian (hy), and Kazakh (kk). Table TABREF15 shows the performance of the models. While DBLPjournalscorrabs190409077 shows effective zero-shot cross-lingual transfer from English to other high-resource languages, we show that cross-lingual transfer is effective even on low-resource languages when we train on all languages, as mBERT is significantly better than BERT when we have fewer than 50 examples. In these cases, the mMiniBERT distilled from the multilingual mBERT yields better results than training individual BERT models. The gains become less significant when we have more training data. The multilingual baseline mMetaLSTM does not do well on low-resource languages; on the contrary, mMiniBERT performs well and outperforms the state-of-the-art MetaLSTM on the POS tagging task and on four out of six languages for the morphology task. We use the Universal Dependencies Hindi-English code-mixed data set BIBREF9 to test the models' ability to label code-mixed data. This dataset is based on code-switching tweets of Hindi and English multilingual speakers. We use the Devanagari script provided by the data set as input tokens.
Dependency labeling guidelines codeswitched or foreignword tokens are labeled as X along with other tokens that cannot be labeled The trained model learns to partition the languages in a codemixed input by labeling tokens in one language with X and tokens in the other language with any of the other POS tags It turns out that the 2ndmost likely label is usually the correct label in this case we evaluate on this label when the 1best is X Table TABREF25 shows that all multilingual models handle codemixed data reasonably well without supervised codemixed traininig data We have described the benefits of multilingual models over models trained on a single language for a single task and have shown that it is possible to resolve a major concern of deploying large BERTbased models by distilling our multilingual model into one that maintains the quality wins with performance fast enough to run on a single CPU Our distilled model outperforms a multilingual version of a very strong baseline model and for most languages yields comparable or better performance to a large BERT model We use exactly the same hyperparameters as the public multilingual BERT for finetuning our models We train the partofspeech tagging task for 10 epochs and the morphology task for 50 epochs For distillation we use the following hyperparameters for all tasks learning rate 1e4 temperature 3 batch size 256 num epochs 24 We take the Wikipedia pretraining data as is and drop sentences with fewer than 10 characters We use the vocab and wordpiece model included with the cased public multilingual model on GitHub We use the BERT configuration of the public multilingual BERT with the following modifications for mMiniBERT Hidden size 256 Intermediate layer size 1024 Num attention heads 4 Layers 3 To understand the importance of distillation in training mMiniBERT we compare it to a model with the MiniBERT structure trained from scratch using only labeled multilingual data the teacher is trained on Table TABREF37 shows that distillation plays an important role in closing the accuracy gap between teacher and student We show perlanguage F1 results of each model in Table SECREF38 and Table SECREF38 For perlanguage models no models are trained for treebanks without tuning data and metrics of those languages are not reported All macroaveraged results reported exclude those languages lccccc treebankBERTMetaLSTMmBERT mMetaLSTM mMiniBERT afafribooms97629763974993169608 amatt32856316 arpadt904690559032899006 arpud715968967106 behse94819105950287599495 bgbtb99019877987296439819 caancora98849862987797579845 cscac9917994399398469848 cscltt87488725876787628753 csfictree9862986398259729718 cspdt99069907989998229861 cspud9713965397 daddt97599747971892369593 degsd94819417945391949382 depud88768742887 elgdt9797974979194879716 enewt9582954595292249419 engum96229502947992339424 enlines97229681957993969525 enpartut9611959950293299461 esancora9887987898179627978 esgsd937939896590618958 espud85878618571 etedt97279717970294329564 eubdt962961955191539415 faseraji97579717971795299692 fiftb96269612931587238979 fipud955593239501 fitdt968197029399158926 frgsd96629645962395379605 frpartut961896954394359493 frpud90779019064 frsequoia96779759970795919675 frspoken9755957896190079325 gaidt91929155908384168572 glctg9699972196592879584 gltreegal9349128919 hehtb82768249826980938193 hihdtb973197399719629643 hipud864885338568 hrset9779979497479624972 huszeged9651947195998559547 hyarmtdp8442866263828698 idgsd9306933793390819335 idpud63526356333 itisdt983398069827967978 
Table: POS tagging F1 of all models (per-treebank scores for BERT, MetaLSTM, mBERT, mMetaLSTM, and mMiniBERT).
Table: Morphology F1 of all models (per-treebank scores for BERT, MetaLSTM, mBERT, mMetaLSTM, and mMiniBERT).
</s>
<s>[INST] Dialogue Act Recognition via CRF-Attentive Structured Network [/INST] Dialogue act recognition (DAR) is an essential problem in modeling and detecting discourse structure. The goal of DAR is to attach a semantic label to each utterance in a conversation and recognize the speaker's intention, which can be regarded as a sequence labeling task. Many applications have benefited from automatic dialogue act recognition, such as dialogue systems, machine translation, automatic speech recognition, topic identification, and talking avatars BIBREF0 BIBREF1 BIBREF2. One of the primary applications of DAR is to support task-oriented discourse agent systems. Knowing the dialogue acts of past utterances eases the prediction of the current dialogue act state and thus helps narrow the range of utterance generation topics for the current turn. For instance, Greeting and Farewell acts are often followed by utterances of the same type, and an Answer act often responds to a preceding Question utterance. Thus, if we can correctly recognize the current dialogue act, we can more easily predict the act of the following utterance and generate a corresponding response. Table 1 shows a snippet of the kind of discourse structure in which we are interested. The essential problem of DAR lies in predicting an utterance's act by referring to contextual utterances and their act labels. Most existing models adopt handcrafted features and formulate DAR as a multi-classification problem. However, these methods, which rely on feature engineering and multi-classification algorithms, have two serious weaknesses. First, they are labor intensive and do not scale well across different datasets. Second, they discard the useful correlation information among contextual utterances: typical multi-classification algorithms such as SVM and Naive Bayes BIBREF3 BIBREF4 BIBREF5 cannot account for contextual dependencies and classify each DA label in isolation. It is evident that during a conversation the speaker's intent is influenced by the preceding utterance, as in the Greeting and Farewell examples above. To tackle these two problems, some works have turned to structured prediction algorithms combined with deep learning, such as DRLM-Conditional BIBREF6, LSTM-Softmax BIBREF0, and RCNN BIBREF7. However, most of them fail to exploit the empirical effectiveness of attention in the graphical structured network and rely completely on the hidden layers of the network, which may introduce structural bias. A further limitation is that, although these works claim to consider contextual correlations, they in fact view the whole conversation as a flat sequence and neglect the dual dependencies at the utterance level and the act level BIBREF8 BIBREF9 BIBREF10. Until now, the performance achieved in the DAR field has remained far behind human annotators' accuracy. In this paper, we approach DAR from the viewpoint of extending richer CRF-attentive structural dependencies along with a neural network, without abandoning end-to-end training. For simplicity, we call the framework CRF-ASN (CRF-Attentive Structured Network). Specifically, we propose hierarchical semantic inference integrated with a memory mechanism for utterance modeling. The memory mechanism enables the model to look beyond localized features and have access to the entire sequence. The hierarchical semantic modeling learns different levels of granularity, including the word level, utterance level, and conversation level. We then develop an internal structured attention network on the
linear-chain conditional random field (CRF) to specify structural dependencies in a soft manner. This approach generalizes soft-selection attention over the structured CRF dependencies and takes into account the contextual influence of neighboring utterances. Notably, the whole process is differentiable and can therefore be trained in an end-to-end manner. The main contributions of this paper are as follows. The rest of this paper is organized as follows. In Section 2 we introduce the problem of dialogue act recognition from the viewpoint of CRF-structured attention and propose the CRF-attentive structured network with hierarchical semantic inference and a memory mechanism. A variety of experimental results are presented in Section 3, where we give a comprehensive analysis of the results and conduct ablations to demonstrate the effectiveness of our model. We then provide a brief review of related work on the dialogue act recognition problem in Section 4. Finally, we offer concluding remarks in Section 5. In this section we study the problem of dialogue act recognition from the viewpoint of extending rich CRF-attentive structural dependencies. We first present hierarchical semantic inference with a memory mechanism at three levels: the word level, utterance level, and conversation level. We then develop graphical structured attention over the linear-chain conditional random field to fully exploit contextual dependencies. Before presenting the model, we first introduce some basic mathematical notation and terminology for dialogue act recognition. Formally, we assume the input is in the form of sequence pairs INLINEFORM0 with INLINEFORM1, where INLINEFORM2 is the input of the INLINEFORM3 th conversation in dataset INLINEFORM4 and INLINEFORM5 is the INLINEFORM6 th target dialogue act sequence. Each conversation INLINEFORM7 is composed of a sequence of utterances, denoted INLINEFORM8, with aligned act types INLINEFORM9. Each dialogue act type is assigned to an utterance INLINEFORM10, and each associated INLINEFORM11 denotes the possible dialogue act among INLINEFORM12 act types. In turn, each utterance consists of a sequence of words INLINEFORM13. Most previous models do not leverage the implicit and intrinsic dependencies between dialogue acts and utterances; they simply treat a conversation as a flat structure with an extremely long chain of words. However, such a construction suffers from the vanishing gradient problem, as extremely long word sequences become impractical to train with backpropagation. To alleviate this problem, we treat the conversation as a hierarchical structure composed of three levels of encoders: the first encodes each word in a fine-grained manner, the second operates at the utterance level, and the last encodes each utterance at the conversation level. Each encoder builds on the previous one, ensuring that its output captures dependencies across the conversation. Figure 1 illustrates this sequence structure with an example. Apart from hierarchical neural encoders, we also integrate an external memory to allow the model unrestricted access to the whole sequence rather than only the localized features of RNNs. Naturally, dialogue act recognition can be regarded as a sequence labeling task, in which dialogue acts can be assigned either by multi-classification methods or by structured prediction algorithms. In our formulation, we adopt a linear-chain conditional random field (CRF) together with hierarchical attentive encoders for structured prediction.
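To make the notation concrete, the following sketch shows one way to represent the data implied by this formulation; the class and field names are illustrative and are not taken from the paper.

```python
# A minimal sketch of the data structures implied by the formulation above (names are hypothetical).
from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    words: List[str]   # the word sequence of one utterance
    act: str           # gold dialogue act label, one of the K act types

@dataclass
class Conversation:
    utterances: List[Utterance]   # u_1 ... u_N with aligned act labels

def act_sequence(conv: Conversation) -> List[str]:
    """The target label sequence for one conversation."""
    return [u.act for u in conv.utterances]
```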
Instead of labeling each utterance in isolation, structured prediction models such as HMMs and CRFs can better capture the contextual dependencies among utterances. In our model, we define the structured attention model as an extended attention model that provides an alternative way to incorporate the machinery of structural inference directly into the neural network. Due to the hierarchical nature of conversations, our proposed model is constructed at multiple levels of granularity, i.e., the word level, utterance level, and conversation level. The representation of a conversation is composed from its utterances INLINEFORM0, and each INLINEFORM1 is obtained by combining the representations of its constituent words INLINEFORM2. Taking inspiration from memory networks and their so-called memory hops, we adopt memory-enhanced contextual representations in order to have unrestricted access to the whole sequence rather than the localized features of conventional recurrent neural networks. Figure 2 depicts the memory-enhanced hierarchical representation at the conversation level. As illustrated in Figure 2, the hierarchical semantic network can be divided into two parts: (1) a fine-grained embedding layer and (2) a memory-enhanced contextual representation layer. The second part can be further broken down into three main components: (a) the input memory INLINEFORM0, which takes in the output of the word embedding layer; (b) the contextual attention, which takes into consideration both the preceding and the following utterances; and (c) the output memory INLINEFORM1, which is obtained from the input memory connected with the attention mechanism. The weights are determined by measuring the similarity between the input memory and the current utterance input. Fine Grained Embedding. For a given conversation, each utterance INLINEFORM0 is encoded by a fine-grained embedding layer. We first utilize rich lexical factors and linguistic properties to enhance the word representation. For each word token INLINEFORM1 in each utterance, we initialize the word embedding using pretrained embeddings such as Word2vec or GloVe. Furthermore, in order to tackle the out-of-vocabulary (OOV) problem, we adopt a character-level word embedding via CNN and combine it with the pretrained word-level embeddings. We also extend the lexical factors with POS tags and NER tags to enhance utterance understanding. The four obtained factors are concatenated to form a rich lexical representation INLINEFORM2. Since we use a bidirectional GRU to encode the representation of each utterance, we concatenate the forward and backward GRU hidden representations at each time step. For each utterance INLINEFORM0, which consists of a sequence of words INLINEFORM1, the original semantic representation is as follows: INLINEFORM2. Here we use INLINEFORM0 and INLINEFORM1 to denote the word-level embedding function and the utterance-level encoder in our hierarchical model. After obtaining the original semantic representation of each utterance, we apply the memory-enhanced contextual layer to further explore the correlations between utterances.
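A minimal sketch of such a fine-grained utterance encoder is shown below, assuming a PyTorch implementation; all dimensions and names are illustrative, and a real model would load GloVe/Word2vec weights and the paper's POS/NER tag sets.

```python
import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    """Sketch of the fine-grained utterance encoder: pretrained word embeddings,
    a character-level CNN, and POS/NER tag embeddings are concatenated and fed
    to a bidirectional GRU; the final forward/backward states summarize u_i."""

    def __init__(self, vocab_size, n_chars, n_pos, n_ner,
                 word_dim=100, char_dim=100, tag_dim=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)   # initialized from GloVe in practice
        self.char_emb = nn.Embedding(n_chars, 16)
        self.char_cnn = nn.Conv1d(16, char_dim, kernel_size=3, padding=1)
        self.pos_emb = nn.Embedding(n_pos, tag_dim)
        self.ner_emb = nn.Embedding(n_ner, tag_dim)
        in_dim = word_dim + char_dim + 2 * tag_dim
        self.bigru = nn.GRU(in_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, words, chars, pos, ner):
        # words/pos/ner: (T,) ids per token; chars: (T, L) character ids per token
        w = self.word_emb(words)                                  # (T, word_dim)
        c = self.char_cnn(self.char_emb(chars).transpose(1, 2))   # (T, char_dim, L)
        c = c.max(dim=-1).values                                  # max-pool over characters
        x = torch.cat([w, c, self.pos_emb(pos), self.ner_emb(ner)], dim=-1)
        _, h_n = self.bigru(x.unsqueeze(0))                       # h_n: (2, 1, hidden)
        # concatenate the final forward and backward states as the utterance vector
        return torch.cat([h_n[0], h_n[1]], dim=-1).squeeze(0)     # (2 * hidden,)
```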
Memory Enhanced Contextual Representation. Every utterance in a conversation is encoded as INLINEFORM0, where INLINEFORM1 is the BiGRU encoding function that maps the input words into a vector INLINEFORM2. The original sequence of utterance representations is denoted INLINEFORM3. While this original semantic representation can serve as the input component of the memory network, in order to tackle its insensitivity to temporal information between memory cells, we inject a temporal signal into the memory using a contextual recurrent encoding INLINEFORM4, where INLINEFORM0, INLINEFORM1, INLINEFORM2 are learnable parameters. The new sequence INLINEFORM0 can be seen as a contextually integrated representation that takes into consideration both the preceding and the following utterances. The injected temporal signal further captures the contextual influence on the current input utterance. We can thus use the obtained INLINEFORM1 to form another representation INLINEFORM2 that attends more to contextual influence. For the current input utterance INLINEFORM0, memory networks require the input to be in the same space as the input memory. Here we adopt the popular attention mechanism in the memory by measuring the relevance between the current input utterance INLINEFORM1 and the new contextual representation INLINEFORM2. The relevance is measured with a softmax function: INLINEFORM3. Once the attention weights have been computed, the output memory is used to generate the final output of the memory layer as a weighted sum over the attention and the input utterances: INLINEFORM0. This output allows the model to have unrestricted access to elements in previous steps, as opposed to the single hidden state INLINEFORM0 of recurrent neural networks; we can thereby effectively detect long-range dependencies among the utterances in a conversation. To further support complex reasoning over multiple supporting facts from memory, we adopt a stacking operation in which the combination of the original utterance semantic representation INLINEFORM0 and the k-th hop output INLINEFORM1 becomes the input to the INLINEFORM2 th hop: INLINEFORM3, where INLINEFORM0 encodes not only the information at the current step INLINEFORM1 but also relevant knowledge from the contextual memory INLINEFORM2. Note that within the scope of this work, we limit the number of hops to 1 to reduce computational cost.
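The memory-enhanced contextual layer described above can be sketched as follows. This is an illustration under the stated single-hop setting, not the authors' code: it builds the input memory from the utterance vectors, injects temporal order with a BiGRU, attends from each utterance over the contextual memory, and adds the hop output back to the original representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryContextLayer(nn.Module):
    """Sketch of the memory-enhanced contextual layer (illustrative names/dimensions)."""

    def __init__(self, dim):
        super().__init__()
        # contextual recurrent encoding that injects temporal order into the memory
        self.context_gru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)

    def forward(self, utt_vecs):
        # utt_vecs: (N, dim) -- one vector per utterance u_1 ... u_N
        memory = utt_vecs.unsqueeze(0)                   # input memory, (1, N, dim)
        context, _ = self.context_gru(memory)            # temporally encoded memory, (1, N, dim)
        # attention of every utterance over the contextual memory (dot-product similarity)
        scores = torch.matmul(memory, context.transpose(1, 2))   # (1, N, N)
        attn = F.softmax(scores, dim=-1)
        output = torch.matmul(attn, memory)               # weighted sum over the input memory
        # one memory hop: combine the original representation with the hop output
        return utt_vecs + output.squeeze(0)               # (N, dim)
```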
Traditional attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. In the DAR problem, however, we need to further explore the structural dependencies among utterances and dialogue acts. Utterances in a conversation do not exist independently: a later utterance may be the answer responding to an earlier question, or a chunk of utterances may share the same act type. Here we consider generalizing selection to chunk-level selective attention and propose structured attention to model richer dependencies by incorporating structural distributions within the network. Such structured attention can be interpreted as soft selection that considers all possible structures over the utterance input. In this paper we formulate DAR as a sequence labeling problem, so it is natural to assign a label to each element in the sequence via a linear-chain CRF, which enables us to model dependencies among labels. However, we do not directly apply the original linear-chain CRF to the learned utterances. Although dependencies among utterances have been captured by the preceding hierarchical semantic network, we still need to explore the dialogue act dependencies at the label level. For dialogue act sequence labeling, greedily predicting the dialogue act at each time step may not yield the optimal solution; it is better to examine the correlations at both the utterance level and the dialogue act level in order to jointly decode the best chain of dialogue acts. Formally, let INLINEFORM0 represent a sequence of utterance inputs and let INLINEFORM1 be the corresponding dialogue act sequence. The variables INLINEFORM2 are discrete latent act variables INLINEFORM3 with sample space INLINEFORM4 that encode the desired selection among these inputs. The aim of structured attention is to produce a sequence-aware INLINEFORM5 INLINEFORM6 based on the utterances INLINEFORM7 and the dialogue act sequence INLINEFORM8. We assume the attentive distribution INLINEFORM9, where we condition INLINEFORM10 on the input utterances INLINEFORM11 and the dialogue act sequence INLINEFORM12. We treat the utterances in the conversation as an undirected graph structure with INLINEFORM13 vertices. The CRF is parameterized with clique potentials INLINEFORM14 over the subset of INLINEFORM15 given by clique INLINEFORM16. Under this definition, the attention probability is defined as INLINEFORM17. For symmetry, we use the softmax in a general sense, i.e., INLINEFORM18, where INLINEFORM19 is the implied recognition function. Here INLINEFORM20 comes from the memory-enhanced deep model over the utterances INLINEFORM21 and the corresponding dialogue acts INLINEFORM22. The INLINEFORM0 INLINEFORM1 over the utterances and dialogue acts is defined as the expectation INLINEFORM2, where we assume the annotation function INLINEFORM0 factors into INLINEFORM1. The annotation function is defined to simply return the selected hidden state. The INLINEFORM2 INLINEFORM3 can be interpreted as a dialogue-act-aware attentive conversation representation obtained by taking the expectation of the annotation function with respect to a latent variable INLINEFORM4, where INLINEFORM5 is parameterized as a function of the utterances INLINEFORM6 and dialogue acts INLINEFORM7. The expectation is a linear combination of the input representations and captures how much attention is focused on each utterance according to the dialogue act sequence. We model the distribution of structural dependencies over the latent INLINEFORM0 with a linear-chain CRF with n states: INLINEFORM1, where INLINEFORM0 is the pairwise potential for INLINEFORM1 and INLINEFORM2. Notice that the utterances INLINEFORM3 and the dialogue act sequence INLINEFORM4 are both obtained from the learned representations of the preceding layers. The marginal distribution INLINEFORM5 can be calculated efficiently in linear time via the forward-backward algorithm. These marginals further allow us to implicitly sum over the linear-chain conditional random field. We refer to this type of attention layer as a INLINEFORM6 INLINEFORM7 INLINEFORM8, where we can explicitly inspect the undirected graphical CRF structure to find which utterances are in the same chunk and which are in isolation. We define the node potentials with a unary CRF setting INLINEFORM0, where for each utterance we summarize the possible dialogue acts to perform sequential reasoning. Given the potentials, we compute the structural marginals INLINEFORM0 using the forward-backward algorithm, which are then used to compute the final probability of the predicted sequence of dialogue acts as INLINEFORM1.
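The forward-backward computation that produces these marginals can be sketched as follows. This is a generic log-space implementation for a linear-chain CRF with illustrative tensor names, not the paper's exact parameterization.

```python
import torch

def crf_node_marginals(unary, pairwise):
    """Log-space forward-backward for a linear-chain CRF (a generic sketch).

    unary:    (N, K) node potentials for N utterances and K dialogue acts.
    pairwise: (K, K) transition potentials.
    Returns the marginal distribution p(y_i = k) at every position, which the
    structured attention layer uses as soft selection weights.
    """
    N, K = unary.shape
    alpha = unary.new_zeros(N, K)   # forward log-messages
    beta = unary.new_zeros(N, K)    # backward log-messages
    alpha[0] = unary[0]
    for i in range(1, N):
        # alpha[i, k] = unary[i, k] + logsumexp_j(alpha[i-1, j] + pairwise[j, k])
        alpha[i] = unary[i] + torch.logsumexp(alpha[i - 1].unsqueeze(1) + pairwise, dim=0)
    beta[N - 1] = 0.0
    for i in range(N - 2, -1, -1):
        # beta[i, j] = logsumexp_k(pairwise[j, k] + unary[i+1, k] + beta[i+1, k])
        beta[i] = torch.logsumexp(pairwise + (unary[i + 1] + beta[i + 1]).unsqueeze(0), dim=1)
    log_z = torch.logsumexp(alpha[N - 1], dim=0)        # log partition function
    return torch.exp(alpha + beta - log_z)              # (N, K) marginals; rows sum to 1
```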
We adopt maximum likelihood estimation to learn the CRF-attentive structured parameters. Given the training set INLINEFORM0 with INLINEFORM1 conversation pairs, the log likelihood can be written as INLINEFORM2, where INLINEFORM0 denotes the set of parameters of the hierarchical neural layers: the word embedding layer, the memory-enhanced utterance modeling layer, and the CRF-attentive structured layer. We define the objective function of the training process as DISPLAYFORM0, where INLINEFORM0 is a hyperparameter that trades off the training loss and the regularization term. Using SGD optimization with the diagonal variant of AdaGrad, at time step t the parameter INLINEFORM1 is updated as follows: DISPLAYFORM0, where INLINEFORM0 is the initial learning rate and INLINEFORM1 is the subgradient at time t. Notice that one of our contributions is to apply CRF structural attention as the final layer of the deep model, so the whole model can be trained in an end-to-end manner. For testing, we use the standard Viterbi algorithm to obtain the optimal act sequence via dynamic programming; the testing procedure can be written as INLINEFORM0. The main procedure is summarized in Algorithm 1 (Viterbi algorithm for CRF-ASN), which takes as input the observation space, the state space, the observation sequence, and the transition and emission probabilities, and returns the most likely hidden state sequence.
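A compact version of this decoding step is sketched below. It is the standard Viterbi dynamic program over unary and pairwise scores; the tensor names are illustrative.

```python
import torch

def viterbi_decode(unary, pairwise):
    """Standard Viterbi decoding for a linear-chain CRF (a sketch of the testing procedure).

    unary:    (N, K) scores per utterance and dialogue act.
    pairwise: (K, K) transition scores between acts.
    Returns the highest-scoring act sequence as a list of N act indices.
    """
    N, K = unary.shape
    score = unary[0]            # best score of any path ending in each act at step 0
    backptr = []
    for i in range(1, N):
        # candidate[j, k] = score[j] + pairwise[j, k]: extend a path ending in act j with act k
        candidate = score.unsqueeze(1) + pairwise
        backptr.append(candidate.argmax(dim=0))          # best previous act for each act k
        score = candidate.max(dim=0).values + unary[i]
    # trace back the best path from the final position
    path = [int(score.argmax())]
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    path.reverse()
    return path
```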
In this section, we conduct several experiments on two public DA datasets, SwDA and MRDA, and show the effectiveness of our approach, CRF-ASN, for dialogue act recognition. We evaluate the performance of our method on two benchmark DA datasets, the Switchboard Dialogue Act Corpus (SwDA) and the ICSI Meeting Recorder Dialogue Act Corpus (MRDA), which have been widely used for dialogue act recognition or dialogue act classification in several prior studies. SwDA: the Switchboard Dialogue Act Corpus is a large hand-labeled dataset of 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. Each conversation involved two randomly selected strangers who had been asked to talk informally about one of several self-selected general-interest topics. For each utterance, with the help of a variety of automatic and semi-automatic tools, the tag set distinguishes 42 mutually exclusive utterance types following the DAMSL taxonomy. The most frequent DA types include STATEMENT, BACKCHANNEL/ACKNOWLEDGE, OPINION, ABANDONED/UNINTERPRETABLE, and AGREEMENT/ACCEPT; we list the percentages of the top five utterance types in the overall corpus in Table 2. MRDA: the ICSI Meeting Recorder Dialogue Act Corpus consists of hand-annotated dialogue act, adjacency pair, and hotspot labels for the 75 meetings in the ICSI meeting corpus. The MRDA scheme provides several class maps and corresponding scripts for grouping related tags into a smaller number of DAs. In this work we use the most widely used class map, which groups all tags into 5 DAs: Disruption (D) indicates that the current dialogue act is interrupted; BackChannel (B) covers utterances that are not made directly by a speaker as a response and do not function in a way that elicits a response either; FloorGrabber (F) covers dialogue acts for grabbing or maintaining the floor; Question (Q) is for eliciting listener feedback; and finally, unless an utterance is completely indecipherable or can be further described by a general tag, its default status is Statement (S). We list the percentages of the five general dialogue acts in Table 3. From Table 2 and Table 3 we can see that both datasets are highly imbalanced in terms of label distributions: the STATEMENT act occupies the largest proportion in both datasets, followed by the BACKCHANNEL act, which to some extent reflects the speakers' speech style. We next describe the data preparation procedure used to obtain clean datasets. For both datasets we performed preprocessing steps to filter out noise and some of the informal nature of the utterances: we first strip exclamation marks and commas, and then convert all characters to lowercase. Notice that for SwDA only training and testing sets are provided; in order to smooth training and tune the parameters, we split the original training set into two parts, a larger part for training and a smaller part used as the validation set. We list the detailed statistics of the two datasets in Table 4. We mainly evaluate the performance of our proposed CRF-ASN method using the widely used evaluation criterion for dialogue act recognition, Accuracy. Accuracy is the normalized criterion for assessing the quality of the predicted dialogue acts on the test utterance set INLINEFORM0. Given a test conversation INLINEFORM1 with ground-truth dialogue acts INLINEFORM2, we denote the dialogue acts predicted by our CRF-ASN method by INLINEFORM3. The evaluation criterion is then INLINEFORM4. We preprocess each utterance using the nltk library BIBREF11 and use the popular pretrained GloVe word embeddings with 100-dimensional vectors BIBREF12. The size of the character-level embedding is also set to 100 dimensions and is obtained with CNN filters following Kim BIBREF13. The Gated Recurrent Unit BIBREF14, a variant of the LSTM BIBREF15, is employed throughout our model. We adopt the AdaDelta BIBREF16 optimizer for training with an initial learning rate of 0.005. We also apply dropout BIBREF17 between layers with a dropout rate of 0.2. For memory-enhanced reasoning, we set the number of hops to 1 to capture preliminary contextual dependencies among utterances; we do not use more hops, as increasing the number of GRU layers reduced the accuracy of the model. Early stopping is used on the validation set with a patience of 5 epochs. Conversations with the same number of utterances were grouped into minibatches, and each utterance in a minibatch was padded to the maximum length for that batch; the maximum batch size allowed was 48. During training we maintain moving averages of all weights with an exponential decay rate of 0.999 BIBREF18. The whole training process takes approximately 14 hours on a single 1080Ti GPU. All hyperparameters were selected by tuning one hyperparameter at a time while keeping the others fixed.
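As a concrete illustration of the accuracy criterion, the following sketch computes utterance-level accuracy over a set of test conversations; the helper name and the example labels are hypothetical.

```python
def dar_accuracy(gold_conversations, pred_conversations):
    """Utterance-level accuracy over a test set (a sketch of the criterion described above).

    Both arguments are lists of conversations, each given as a list of act labels.
    """
    correct = total = 0
    for gold, pred in zip(gold_conversations, pred_conversations):
        for g, p in zip(gold, pred):
            correct += int(g == p)
            total += 1
    return correct / max(total, 1)

# Example with two short, hypothetical conversations:
gold = [["Statement", "Question", "Answer"], ["Greeting", "Greeting"]]
pred = [["Statement", "Question", "Statement"], ["Greeting", "Greeting"]]
print(dar_accuracy(gold, pred))   # 0.8
```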
We compare our proposed method with several state-of-the-art methods for dialogue act recognition, as follows. BiLSTM-CRF BIBREF19 builds a hierarchical bidirectional LSTM as the base unit and a conditional random field as the top layer for the dialogue act recognition task. DRLM-Conditional BIBREF20 combines positive aspects of neural network architectures with probabilistic graphical models, pairing a recurrent neural network language model with a latent variable model over shallow discourse structure. LSTM-Softmax BIBREF0 applies a deep LSTM structure to classify dialogue acts via a softmax operation; the authors report that the word embeddings, dropout, weight decay, and number of LSTM layers all have a large effect on the final performance. RCNN BIBREF8 composes a sentence model and a discourse model to extend beyond the single sentence; the authors propose a hierarchical CNN for the sentence model and an RNN over the contextual discourse. CNN BIBREF21 incorporates the preceding short texts to classify dialogue acts; the authors demonstrate that adding sequential information improves the quality of the predictions. HMM BIBREF5 treats the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. CRF is a simple baseline that applies text encoding and CRF-based structured prediction to the DAR problem. SVM is a simple baseline that applies text encoding and a multi-classification algorithm to the DAR problem. Among these, the first five approaches (BiLSTM-CRF, DRLM-Conditional, LSTM-Softmax, RCNN, CNN) all adopt deep neural network models in order to better capture the utterances' semantic representations, while the last three (HMM, CRF, SVM) employ only simple feature selection for text processing. About half of the baselines, including BiLSTM-CRF, DRLM-Conditional, HMM, and CRF, use graphical structured prediction, while the others (RCNN, CNN, LSTM-Softmax, SVM) adopt traditional multi-classification algorithms. Table 5 and Table 6 show the experimental accuracy of the methods on the SwDA and MRDA datasets, respectively. The hyperparameters and parameters that achieve the best performance on the validation set are chosen for the test evaluation. The experiments reveal several interesting points. First, our proposed model CRF-ASN clearly outperforms the state-of-the-art baselines on both SwDA and MRDA: numerically, our model improves DAR accuracy over BiLSTM-CRF by 2.1 and 0.8 points on SwDA and MRDA, respectively. It is remarkable that CRF-ASN comes close to human annotator performance on SwDA, which strongly supports the superiority of our model. Second, the deep neural networks outperform the feature-based models: the three non-deep models perform worse than the five deep methods, suggesting that the performance of dialogue act recognition can be improved significantly with discriminative deep neural networks, whether convolutional or recurrent. Third, apart from deep learning tactics, the problem formulation is also critical for DAR. Structured prediction approaches (e.g., CRF-ASN, BiLSTM-CRF) obtain better results than multi-classification ones (e.g., LSTM-Softmax). Moreover, with the same text encoding, the CRF-based model achieves much better results than the SVM-based method, which further demonstrates the superiority of the structured prediction formulation. We also notice that CRF outperforms HMM when applied to the DAR task. The major differences between our proposed CRF-ASN model and the strong BiLSTM-CRF baseline lie in two aspects: first, we encode the utterances in a more fine-grained manner and use the memory-enhanced mechanism to compute contextual dependencies; second, we employ an adapted structured attention network on the CRF layer rather than directly applying the original CRF to the utterances. These two modifications are essential and improve the performance significantly.
We evaluate the individual contribution of each proposed module in our model through thorough ablation experiments on the SwDA dataset, reported in Table 7. To make the comparison fair, we modify only one module at a time and keep the other components in the same settings. Replacing the proposed structured CRF-attention layer with a simple CRF shows that the structured CRF-attention layer yields a major improvement in accuracy, approximately 2.1 absolute points. Further replacing the structured prediction formulation with SVM multi-classification makes the results drop dramatically, which illustrates the benefit of modeling structural dependencies among utterances. Replacing the fine-grained word representation INLINEFORM0 with a simple GloVe vector suggests that the fine-grained word embedding is useful for representing the text. We also restrict the context state INLINEFORM1 to attend only to its neighboring utterances; the result is not satisfying, which indicates that basic text understanding is critical for the semantic representations. Finally, we remove the memory network and directly apply the CRF layer to the utterance layer, and we also run a comparison that adds the original utterance representation to the memory-enhanced output; these two results show that the designed hierarchical memory-enhanced components are helpful for utterance understanding and for modeling contextual influence. In Figure 3 we visualize the output edge marginals produced by the CRF-ASN model for one conversation. In this instance, the actual dialogue act recognition procedure is displayed as INLINEFORM0. At test time, when the model is uncertain, it selects the most attentive path to maximize the probability of the true dialogue acts. From the marginal edges we can see that the path INLINEFORM1 receives more attentive weight than the path INLINEFORM2 in predicting the dialogue act label, so we ultimately select the right way to recognize the dialogue act. Figure 4 shows the confusion heatmap of our proposed CRF-ASN model on the SwDA dataset, where each element denotes the rate at which the predicted label matches the true label. On the diagonal, the (sd, sd) and (b, b) pairs achieve the best matching scores, while (qy^d, qy^d) is much worse than the other pairs. This can be explained by the fact that sd (Statement) and b (Backchannel/Acknowledge) are clearly self-identifying, while qy^d (Declarative Yes-No-Question) is more easily misrecognized. In particular, the (qy^d, qy) pair, i.e., Declarative Yes-No-Question versus Yes-No-Question, is indeed hard to distinguish, since the two dialogue types are very similar to each other. We also notice that, due to bias in the ground truth, there are cases where we predict the dialogue act correctly while the ground-truth label is wrong: classifying so many fine-grained dialogue act labels is not easy even for human annotators, and human subjectivity plays an important role in recognizing dialogue acts.
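A confusion heatmap like Figure 4 can be derived from a row-normalized confusion matrix; the sketch below shows one way to compute it, with a few illustrative SwDA tags standing in for the full tag set.

```python
import numpy as np

def confusion_matrix(gold, pred, labels):
    """Row-normalized confusion matrix for a heatmap like Fig. 4 (a sketch).

    Entry [i, j] is the fraction of utterances with gold act i predicted as act j.
    """
    index = {lab: k for k, lab in enumerate(labels)}
    counts = np.zeros((len(labels), len(labels)))
    for g, p in zip(gold, pred):
        counts[index[g], index[p]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)   # rows with no examples stay zero

labels = ["sd", "b", "sv", "qy", "qy^d"]      # a few SwDA act tags for illustration
gold = ["sd", "sd", "b", "qy^d", "qy^d"]
pred = ["sd", "sd", "b", "qy", "qy^d"]
print(confusion_matrix(gold, pred, labels))
```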
In this section we briefly review related work on dialogue act recognition and attention networks. The main task of dialogue act recognition is to assign an act label to each utterance in a conversation, which can be framed as a supervised problem since each utterance has a corresponding act label. Most existing work on dialogue act recognition falls into two groups. Regarding DAR as a multi-classification problem: Reithinger et al. BIBREF22 address dialogue act classification using a statistically based language model. Webb et al. BIBREF23 apply diverse intra-utterance features, including word n-gram cue phrases, to understand the utterance and perform the classification. Geertzen et al. BIBREF24 propose a multidimensional approach to distinguish and annotate units in dialogue act segmentation and classification. Grau et al. BIBREF3 focus on dialogue act classification using a Bayesian approach. Serafin et al. BIBREF25 employ Latent Semantic Analysis (LSA), both proper and augmented, for dialogue act classification. Chen et al. BIBREF26 conduct an empirical investigation of sparse log-linear models for improved dialogue act classification. Milajevs et al. BIBREF27 investigate a series of compositional distributional semantic models for dialogue act classification. Regarding DAR as a sequence labeling problem: Stolcke et al. BIBREF5 treat the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Tavafi et al. BIBREF28 study the effectiveness of supervised learning algorithms (SVM-HMM) for DA modeling across a comprehensive set of conversations. Similarly to the SVM-HMM, Surendran et al. BIBREF29 use a combination of linear support vector machines and hidden Markov models for dialogue act tagging on the HCRC MapTask corpus. Lendvai et al. BIBREF30 explore two sequence learners, a memory-based tagger and conditional random fields, over turn-internal DA chunks. Boyer et al. BIBREF31 also applied HMMs to discover internal dialogue strategies inherent in the structure of sequenced dialogue acts. Galley et al. BIBREF32 use a skip-chain conditional random field to model non-local pragmatic dependencies between paired utterances. Zimmermann et al. BIBREF33 investigate the use of conditional random fields for joint segmentation and classification of dialogue acts, exploiting both word and prosodic features. Recently, approaches based on deep learning have improved many state-of-the-art techniques in NLP, including DAR accuracy on open-domain conversations BIBREF7 BIBREF34 BIBREF6 BIBREF35 BIBREF21. Kalchbrenner et al. BIBREF7 used a mixture of CNNs and RNNs: CNNs were used to extract local features from each utterance and RNNs were used to create a general view of the whole dialogue. Khanpour et al. BIBREF0 design a deep neural network model that benefits from pretrained word embeddings combined with a variation of the RNN structure for the DA classification task. Ji et al. BIBREF6 also investigated the performance of standard RNNs and CNNs on DA classification and obtained cutting-edge results on the MRDA corpus using a CNN. Lee et al. BIBREF21 propose a model based on CNNs and RNNs that incorporates preceding short texts as context to classify current DAs. Zhou et al. BIBREF34 combine heterogeneous information with conditional random fields for Chinese dialogue act recognition. Kumar et al. BIBREF35 build a hierarchical encoder with a CRF to learn multiple levels of utterance and act dependencies. Unlike these previous studies, we formulate the problem from the viewpoint of integrating contextual dependencies at both the utterance level and the act label level: we not only consider fine-grained multi-level semantic representations but also integrate a structured attention network to further capture the structural dependencies in the CRF layer. The attention mechanism has become an essential component of text understanding in recent years. Since the first work by Bahdanau et al. BIBREF36, which adopted the attention mechanism for neural machine translation, attention-based neural networks have become a major trend in diverse areas of text research, such as machine
comprehension BIBREF37 BIBREF38 BIBREF39 BIBREF40, machine translation BIBREF41 BIBREF42, abstractive summarization BIBREF43 BIBREF44, text classification BIBREF45 BIBREF46 BIBREF47, and so on. The principle of the attention mechanism is to select the most pertinent pieces of information, rather than using all available information (a large part of which is irrelevant), to compute the neural response. In our work we propose the CRF-attentive structured network in order to encode internal utterance inference together with dialogue acts. Structured attention is a more general attention mechanism that takes account of graphical dependencies and allows attention to extend beyond the standard soft-selection approach. The work most similar to ours is that of Kim et al. BIBREF48, who experiment with two different classes of structured attention networks, subsequence selection and syntactic selection. However, the objectives of those two networks are to segment structural dependencies, which is quite different from our DAR task: in DAR we care more about the influence of dialogue acts on the overall conversation structure, so those forms of structured attention may not be suitable for our problem. In this paper, we formulated the problem of dialogue act recognition from the viewpoint of capturing rich hierarchical utterance representations and generalizing richer CRF-attentive graphical structural dependencies without abandoning end-to-end training. We proposed the CRF-Attentive Structured Network (CRF-ASN) for the problem and implemented the model in two steps. We first encode rich semantic representations at the utterance level by incorporating hierarchical granularity and a memory-enhanced inference mechanism; the learned utterance representations capture long-term dependencies across the conversation. We then adopt the internal structured attention network to compute the influence of dialogue acts and to specify structural dependencies in a soft manner. This approach enables soft-selection attention over the structural CRF dependencies and takes account of the contextual influence of neighboring utterances. We demonstrated the efficacy of our method on the well-known public datasets SwDA and MRDA, and extensive experiments show that our model achieves better performance than several state-of-the-art solutions to the problem. </s>