i_t = \sigma(\mathbf{W}_{iX} X + \mathbf{W}_{ic} c_{t-1} + \mathbf{b}_i)
f_t = \sigma(\mathbf{W}_{fX} X + \mathbf{W}_{fc} c_{t-1} + \mathbf{b}_f)
o_t = \sigma(\mathbf{W}_{oX} X + \mathbf{W}_{oc} c_{t-1} + \mathbf{b}_o)
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(\mathbf{W}_{cX} X + \mathbf{b}_c)
h_t = o_t \odot \tanh(c_t)   (Eq. 3)

where i_t, f_t, o_t are the input, forget, and output gates, respectively. The sigmoid function \sigma is a soft gate function controlling the amount of information flow; the W's and b's are model parameters to learn.

A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary-length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed-length vector can be problematic. A neural attention model BIBREF12, BIBREF13 has recently been proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors and adaptively chooses a subset of those vectors when decoding.

In our cQA tasks, the pair of objects is either question-question or question-comment, and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs: parallel LSTMs encode the two objects independently, and their outputs are concatenated as input to a feed-forward neural network (FNN) with a softmax output layer for classification. The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship than in the object representations themselves. Therefore, we consider a serialized LSTM-encoder model, shown on the right side of Figure 1, that is similar to the model in BIBREF2 but also allows an augmented feature input to the FNN classifier.

Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of the pair. By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of the first object according to both objects. Let h_N be the last output of the second LSTM and M = (h_1, h_2, \cdots, h_L) be the sequence of output vectors of the first object. The weighted representation of the first object is

h' = \sum_{i=1}^{L} \alpha_i h_i   (Eq. 7)

The weight is computed by

\alpha_i = \dfrac{\exp(a(h_i, h_N))}{\sum_{j=1}^{L} \exp(a(h_j, h_N))}   (Eq. 8)

where a(\cdot) is the importance model, which produces a higher score for (h_i, h_N) if h_i is useful for determining the object pair's relationship. We parametrize this model using another FNN. Note that in our framework we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier consists of h_N, h', as well as the augmented features.
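To make the attention step of Eq. 7-8 concrete, the following is a minimal PyTorch sketch (not the authors' code): an importance FNN a(h_i, h_N) scores each output of the first LSTM against the last output of the second LSTM, the scores are softmax-normalized into \alpha, and the weighted sum h' is concatenated with h_N and any augmented features before the FNN classifier. The layer sizes, the two-layer scorer, and the IR ranking score used as an augmented feature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImportanceAttention(nn.Module):
    """Scores each output h_i of the first LSTM against the last output h_N
    of the second LSTM and returns the weighted representation h' (Eq. 7-8)."""

    def __init__(self, hidden_size):
        super().__init__()
        # The importance model a(h_i, h_N): a small FNN over the concatenated pair.
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, first_outputs, h_n):
        # first_outputs: (L, hidden) output vectors of the first LSTM (the matrix M)
        # h_n: (hidden,) last output of the second LSTM
        L = first_outputs.size(0)
        pairs = torch.cat([first_outputs, h_n.expand(L, -1)], dim=-1)   # (L, 2*hidden)
        scores = self.scorer(pairs).squeeze(-1)                         # a(h_i, h_N)
        alpha = F.softmax(scores, dim=0)                                # Eq. 8
        h_prime = (alpha.unsqueeze(-1) * first_outputs).sum(dim=0)      # Eq. 7
        return h_prime, alpha

# Example: the classifier input is [h_N, h', augmented features] as described above.
hidden = 128
attn = ImportanceAttention(hidden)
M = torch.randn(20, hidden)        # outputs of the first LSTM (L = 20)
h_N = torch.randn(hidden)          # last output of the second LSTM
ir_score = torch.tensor([0.7])     # hypothetical augmented feature (IR ranking score)
h_prime, alpha = attn(M, h_N)
classifier_input = torch.cat([h_N, h_prime, ir_score])
```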
For task C, in addition to an original question (oriQ) and an external comment (relC), the question on which relC commented is also given (relQ). To incorporate this extra information, we consider a multitask learning framework that jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC). Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of the three encoders and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers, each intended to predict the relationship of one particular pair. For the overall loss function, we combine the three separate loss functions using a heuristic weight vector \beta that allocates a higher weight to the main task (oriQ/relC relationship prediction), as follows:

\mathcal{L} = \beta_1 \mathcal{L}_1 + \beta_2 \mathcal{L}_2 + \beta_3 \mathcal{L}_3   (Eq. 11)

By doing so, we hypothesize that the related tasks can improve the main task by leveraging the commonality among all tasks.

We evaluate our approach on all three cQA tasks, using the cQA datasets provided by the SemEval 2016 task. The cQA data is organized as follows: there are 267 original questions, each question has 10 related questions, and each related question has 10 comments. Therefore, for task A there is a total of 26,700 question-comment pairs; for task B there are 2,670 question-question pairs; and for task C there are 26,700 question-comment pairs. The test dataset includes 50 questions, 500 related questions, and 5,000 comments, which do not overlap with the training set. To evaluate performance, we use mean average precision (MAP) and F1 score.

Table 2 shows the initial results using the RNN encoder for the different tasks. We observe that the attention model always achieves better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score; for task B it is even worse than the random baseline. We believe the reason is that task B has only 2,670 pairs for training, which is very limited for training a reasonable neural network. For task C, we believe the problem is highly imbalanced data: since the related comments did not directly comment on the original question, more than 90% of the comments are labeled as irrelevant to the original question. The low F1, with high precision and low recall, means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.

One way to improve models trained on limited data is to use external data to pre-train the neural network. We therefore considered two different datasets for this purpose. Cross-domain: the Stanford Natural Language Inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise-hypothesis pairs. Unfortunately, the pairs are for a different task; the relationship between premise and hypothesis may be similar to the relation between questions and comments, but may also be different. In-domain: since task A seems to have reasonable performance and its network is also well trained, we can use it directly to initialize task B. To utilize the auxiliary data, we first trained the model on each auxiliary dataset (SNLI or task A) and then removed the softmax layer. After that, we retrained the network on the target data with a randomly initialized softmax layer.

For task A, SNLI does not improve MAP or F1; it actually slightly hurts performance. We surmise that this is probably because the domain is different. Further investigation is needed; for example, we could transfer only the parameters of the embedding layers. For task B, SNLI yields a slight improvement on MAP (+0.2), and task A gives a further +1.2 on top of that; no improvement was observed on F1. For task C, pre-training with task A is also better than using SNLI: task A is 1 point better than the baseline, while SNLI is almost the same. In summary, the in-domain pre-training seems better, but overall the improvement is less than we expected, especially for task B, which only has very limited target data. We will not draw a conclusion here, since more investigation is needed.
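The pre-training recipe described above (train on SNLI or task A, drop the softmax layer, then retrain on the target task with a freshly initialized softmax) can be sketched as follows. This is a schematic, not the authors' code: `encoder`, `train_fn`, and the data loaders are assumed placeholders, and the class counts (3 for SNLI, 2 for relevant/irrelevant) follow the tasks described above.

```python
import torch.nn as nn

class PairClassifier(nn.Module):
    """A shared encoder (stand-in for the serialized LSTM-encoder)
    followed by a task-specific softmax output layer."""

    def __init__(self, encoder, hidden_size, num_classes):
        super().__init__()
        self.encoder = encoder                                # shared representation layers
        self.output = nn.Linear(hidden_size, num_classes)     # task-specific softmax layer

    def forward(self, pair):
        return self.output(self.encoder(pair))

def pretrain_then_finetune(encoder, hidden_size, aux_loader, target_loader, train_fn):
    # 1) Train on the auxiliary data (e.g., SNLI with 3 classes, or task A pairs).
    aux_model = PairClassifier(encoder, hidden_size, num_classes=3)
    train_fn(aux_model, aux_loader)

    # 2) Keep the trained encoder, discard the auxiliary softmax layer,
    #    and attach a randomly initialized softmax for the target task.
    target_model = PairClassifier(aux_model.encoder, hidden_size, num_classes=2)

    # 3) Retrain on the target data (task B or C).
    train_fn(target_model, target_loader)
    return target_model
```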
As mentioned in Section "Modeling Question-External Comments", we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set the weight to 0.8 for the main task (task C) and 0.1 for each of the auxiliary tasks. The MAP score did not improve, but F1 increased to 0.1617. We believe this is because the other tasks have more balanced labels, which improves the shared parameters for task C.

There are many sources of external question-answer pairs that could be used in our tasks, for example WebQuestions, introduced by the authors of the SEMPRE system BIBREF18, and the SimpleQuestions dataset. All of them provide positive examples for our task, and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit these obvious negative examples. We believe this is because such negative examples are non-informative for our task and simply introduce noise.

Since the external data seems to hurt performance, we instead use in-domain pairs to enhance tasks B and C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, we add a positive sample (rel1, rel2, 1); if one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of samples for task B increases from 2,670 to 11,810. By applying this method, the MAP score increased slightly from 0.5723 to 0.5789, but the F1 score improved from 0.4334 to 0.5860. For task C, we used task A's data directly. The results are very similar, with a slight improvement on MAP but a large improvement on F1, from 0.1449 to 0.2064.

To further enhance the system, we incorporate a one-hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. Comparing the models with and without augmented features, we see large improvements for tasks B and C. The F1 score for task A degrades slightly, but MAP improves; this might be because task A already had a substantial amount of training data.

Table 4 gives the final comparison between the different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that when there is enough training data, our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also achieves results comparable to the feature-rich system. If we do a simple system combination (averaging the rank scores of our system and the IR system), the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
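For reference, the \beta-weighted joint objective of Eq. 11, with the 0.8/0.1/0.1 weights used in the multitask experiment at the start of this section, reduces to a few lines when each pair has its own softmax layer. The logits and labels below are random stand-ins; only the weighting scheme is taken from the text.

```python
import torch
import torch.nn as nn

# Hypothetical logits produced by the three softmax output layers of Figure 3,
# one per pair: (oriQ, relQ), (oriQ, relC), (relQ, relC).
batch = 16
pairs = ("oriQ-relQ", "oriQ-relC", "relQ-relC")
logits = {p: torch.randn(batch, 2, requires_grad=True) for p in pairs}
labels = {p: torch.randint(0, 2, (batch,)) for p in pairs}

# Heuristic weights beta from Eq. 11; the main task (oriQ-relC) gets 0.8.
beta = {"oriQ-relQ": 0.1, "oriQ-relC": 0.8, "relQ-relC": 0.1}

ce = nn.CrossEntropyLoss()
total_loss = sum(beta[p] * ce(logits[p], labels[p]) for p in pairs)   # L = sum_k beta_k * L_k
total_loss.backward()
```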
In addition to the quantitative analysis, it is natural to qualitatively evaluate the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the task A test set whose sentence lengths are more moderate, for demonstration. These examples are shown in Figure 5 and categorized into short, long, and noisy sentences for discussion. A darker blue patch indicates a larger weight relative to the other words in the same sentence.

Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are "snorkeling two days ago off the coast of dukhan" and "the doha international airport". We can observe that our model successfully learns to focus on the most representative part of the question for classifying the relationship, which is "place for snorkeling" for the first example and "place can visited in qatar" for the second example.

In Figure 5 we also investigate two examples with longer questions, each containing 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example "puppy dog for mall" and "hectic driving in doha / car insurance / quite costly". Additionally, words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.

Due to the open nature of cQA forums, some content is noisy. Figure 5 shows an example with excessive use of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols, thereby excluding the non-informative content.

In this paper, we demonstrated that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to an FNN classifier, and we pre-trained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data. Our model is complementary to an IR-based system that uses vast amounts of external resources but is trained for general purposes; by combining the two systems, it exceeds both the feature-rich and the IR-based systems in all three tasks. Moreover, our approach is language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval 2016 cQA task, and the results are competitive with a hand-tuned strong baseline from SemEval 2015. Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing the data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model with word-by-word and history-aware attention mechanisms, instead of attending only when reading the last word. </s>
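As a complement to the qualitative analysis in the preceding section, a simple way to reproduce Figure 5-style plots is to render each sentence's attention weights \alpha (Eq. 8) as a single-row heatmap, with darker cells for larger weights. The snippet below is an illustrative matplotlib sketch with made-up tokens and weights, not the authors' plotting code.

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_attention(tokens, weights, path="attention.png"):
    """Render one sentence as a single-row heatmap: darker cell = larger attention weight."""
    weights = np.asarray(weights, dtype=float)[None, :]            # shape (1, num_tokens)
    fig, ax = plt.subplots(figsize=(max(6, 0.6 * len(tokens)), 1.5))
    ax.imshow(weights, cmap="Blues", aspect="auto", vmin=0.0, vmax=weights.max())
    ax.set_xticks(range(len(tokens)))
    ax.set_xticklabels(tokens, rotation=45, ha="right")
    ax.set_yticks([])
    fig.tight_layout()
    fig.savefig(path)

# Hypothetical example: in practice the weights come from alpha in Eq. 8.
plot_attention(["best", "place", "for", "snorkeling", "in", "qatar", "?"],
               [0.05, 0.30, 0.02, 0.45, 0.03, 0.12, 0.03])
```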
<s>[INST] Attentional Encoder Network for Targeted Sentiment Classification [/INST] Targeted sentiment classification is a finegrained sentiment analysis task which aims at determining the sentiment polarities eg negative neutral or positive of a sentence over opinion targets that explicitly appear in the sentence For example given a sentence I hated their service but their food was great the sentiment polarities for the target service and food are negative and positive respectively A target is usually an entity or an entity aspect In recent years neural network models are designed to automatically learn useful lowdimensional representations from targets and contexts and obtain promising results BIBREF0 BIBREF1 However these neural network models are still in infancy to deal with the finegrained targeted sentiment classification task Attention mechanism which has been successfully used in machine translation BIBREF2 is incorporated to enforce the model to pay more attention to context words with closer semantic relations with the target There are already some studies use attention to generate targetspecific sentence representations BIBREF3 BIBREF4 BIBREF5 or to transform sentence representations according to target words BIBREF6 However these studies depend on complex recurrent neural networks RNNs as sequence encoder to compute hidden semantics of texts The first problem with previous works is that the modeling of text relies on RNNs RNNs such as LSTM are very expressive but they are hard to parallelize and backpropagation through time BPTT requires large amounts of memory and computation Moreover essentially every training algorithm of RNN is the truncated BPTT which affects the models ability to capture dependencies over longer time scales BIBREF7 Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information this usually requires a large amount of training data Another problem that previous studies ignore is the label unreliability issue since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning As far as we know we are the first to raise the label unreliability issue in the targeted sentiment classification task This paper propose an attention based model to solve the problems above Specifically our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words To deal with the label unreliability issue we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels We also apply pretrained BERT BIBREF8 to this task and show our model enhances the performance of basic BERT model Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative of the best RNN based models The main contributions of this work are presented as follows The research approach of the targeted sentiment classification task including traditional machine learning methods and neural networks methods Traditional machine learning methods including rulebased methods BIBREF9 and statisticbased methods BIBREF10 mainly focus on extracting a set of features like sentiment lexicons features and bagofwords features to train a sentiment classifier BIBREF11 The performance of these methods highly depends on the effectiveness of the feature engineering works which are labor intensive In recent years neural network 
methods are getting more and more attention as they do not need handcrafted features and can encode sentences with lowdimensional word vectors where rich semantic information stained In order to incorporate target words into a model Tang et al tang2016effective propose TDLSTM to extend LSTM by using two singledirectional LSTM to model the left context and right context of the target word respectively Tang et al tang2016aspect design MemNet which consists of a multihop attention mechanism with an external memory to capture the importance of each context word concerning the given target Multiple attention is paid to the memory represented by word embeddings to build higher semantic information Wang et al wang2016attention propose ATAELSTM which concatenates target embeddings with word representations and let targets participate in computing attention weights Chen et al chen2017recurrent propose RAM which adopts multipleattention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units GRUs Ma et al ma2017interactive propose IAN which learns the representations of the target and context with two attention networks interactively Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 where INLINEFORM2 is a subsequence of INLINEFORM3 The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network AEN which mainly consists of an embedding layer an attentional encoder layer a targetspecific attention layer and an output layer Embedding layer has two types GloVe embedding and BERT embedding Accordingly the models are named AENGloVe and AENBERT Let INLINEFORM0 to be the pretrained GloVe BIBREF12 embedding matrix where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 which is a column in the embedding matrix INLINEFORM5 BERT embedding uses the pretrained BERT to generate word vectors of sequence In order to facilitate the training and finetuning of BERT model we transform the given context and target to CLS context SEP and CLS target SEP respectively The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings This layer consists of two submodules the MultiHead Attention MHA and the Pointwise Convolution Transformation PCT MultiHead Attention MHA is the attention that can perform multiple attention function in parallel Different from Transformer BIBREF13 we use IntraMHA for introspective context words modeling and InterMHA for contextperceptive target words modeling which is more lightweight and target is modeled according to a given context An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 DISPLAYFORM0 where INLINEFORM0 are learnable weights MHA can learn nhead different scores in parallel child spaces and is very powerful for alignments The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 namely DISPLAYFORM0 where INLINEFORM0 denotes vector concatenation INLINEFORM1 INLINEFORM2 is the output of the INLINEFORM3 
th head attention and INLINEFORM4 IntraMHA or multihead selfattention is a special situation for typical attention mechanism that INLINEFORM0 Given a context embedding INLINEFORM1 we can get the introspective context representation INLINEFORM2 by DISPLAYFORM0 The learned context representation INLINEFORM0 is aware of longterm dependencies InterMHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 we can get the contextperceptive target representation INLINEFORM4 by DISPLAYFORM0 After this interactive procedure each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 Then we get the contextperceptive target words modeling INLINEFORM2 A Pointwise Convolution T ransformation PCT can transform contextual information gathered by the MHA Pointwise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input Formally given a input sequence INLINEFORM0 PCT is defined as DISPLAYFORM0 where INLINEFORM0 stands for the ELU activation INLINEFORM1 is the convolution operator INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels Given INLINEFORM0 and INLINEFORM1 PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by DISPLAYFORM0 After we obtain the introspective context representation INLINEFORM0 and the contextperceptive target representation INLINEFORM1 we employ another MHA to obtain the targetspecific context representation INLINEFORM2 by DISPLAYFORM0 The multihead attention function here also has its independent parameters We get the final representations of the previous outputs by average pooling concatenate them as the final comprehensive representation INLINEFORM0 and use a full connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes DISPLAYFORM0 where INLINEFORM0 is the predicted sentiment polarity distribution INLINEFORM1 and INLINEFORM2 are learnable parameters Since neutral sentiment is a very fuzzy sentimental state training samples which labeled neutral are unreliable We employ a Label Smoothing Regularization LSR term in the loss function which penalizes low entropy output distributions BIBREF14 LSR can reduce overfitting by preventing a network from assigning the full probability to each training example during training replaces the 0 and 1 targets for a classifier with smoothed values like 01 or 09 For a training sample INLINEFORM0 with the original groundtruth label distribution INLINEFORM1 we replace INLINEFORM2 with DISPLAYFORM0 where INLINEFORM0 is the prior distribution over labels and INLINEFORM1 is the smoothing parameter In this paper we set the prior label distribution to be uniform INLINEFORM2 LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the networks predicted distribution INLINEFORM1 Formally LSR term is defined as DISPLAYFORM0 The objective function loss function to be optimized is the crossentropy loss with INLINEFORM0 and INLINEFORM1 regularization which is defined as DISPLAYFORM0 where INLINEFORM0 is the ground truth represented as a onehot vector INLINEFORM1 is the predicted sentiment distribution vector given by the output layer INLINEFORM2 is the coefficient for INLINEFORM3 
regularization term and INLINEFORM4 is the parameter set We conduct experiments on three datasets SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews and ACL 14 Twitter dataset gathered by Dong et al dong2014adaptive These datasets are labeled with three sentiment polarities positive neutral and negative Table TABREF31 shows the number of training and test instances in each category Word embeddings in AENGloVe do not get updated in the learning process but we finetune pretrained BERT in AENBERT Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pretrained BERT Dimension of hidden states INLINEFORM1 is set to 300 The weights of our model are initialized with Glorot initialization BIBREF16 During training we set label smoothing parameter INLINEFORM2 to 02 BIBREF14 the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 01 Adam optimizer BIBREF17 is applied to update all the parameters We adopt the Accuracy and MacroF1 metrics to evaluate the performance of the model In order to comprehensively evaluate and analysis the performance of AENGloVe we list 7 baseline models and design 4 ablations of AENGloVe We also design a basic BERTbased model to evaluate the performance of AENBERT NonRNN based baselines INLINEFORM0 Featurebased SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering INLINEFORM0 RecNN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root and then learns the sentence representation toward target via semantic composition using Recursive NNs INLINEFORM0 MemNet BIBREF19 uses multihops of attention layers on the context word embeddings for sentence representation to explicitly captures the importance of each context word RNN based baselines INLINEFORM0 TDLSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively The left and right targetdependent representations are concatenated for predicting the sentiment polarity of the target INLINEFORM0 ATAELSTM BIBREF3 strengthens the effect of target embeddings which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively which generates the representations for targets and contexts with respect to each other INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation AENGloVe ablations INLINEFORM0 AENGloVe wo PCT ablates PCT module INLINEFORM0 AENGloVe wo MHA ablates MHA module INLINEFORM0 AENGloVe wo LSR ablates label smoothing regularization INLINEFORM0 AENGloVeBiLSTM replaces the attentional encoder layer with two bidirectional LSTM Basic BERTbased model INLINEFORM0 BERTSPC feeds sequence CLS context SEP target SEP into the basic BERT model for sentence pair classification task Table TABREF34 shows the performance comparison of AEN with other models BERTSPC and AENBERT obtain substantial accuracy improvements which shows the power of pretrained BERT on smalldata task The overall performance of AENBERT is better than BERTSPC which suggests that it is important to design a downstream network customized to a specific task As the prior knowledge in the 
pretrained BERT is not specific to any particular domain further finetuning on the specific task is necessary for releasing the true power of BERT The overall performance of TDLSTM is not good since it only makes a rough treatment of the target words ATAELSTM IAN and RAM are attention based models they stably exceed the TDLSTM method on Restaurant and Laptop datasets RAM is better than other RNN based models but it does not perform well on Twitter dataset which might because bidirectional LSTM is not good at modeling small and ungrammatical text Featurebased SVM is still a competitive baseline but relying on manuallydesigned features RecNN gets the worst performances among all neural network baselines as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments Like AEN MemNet also eschews recurrence but its overall performance is not good since it does not model the hidden semantic of embeddings and the result of the last attention is essentially a linear combination of word embeddings As shown in Table TABREF34 the performances of AENGloVe ablations are incomparable with AENGloVe in both accuracy and macroF1 measure This result shows that all of these discarded components are crucial for a good performance Comparing the results of AENGloVe and AENGloVe wo LSR we observe that the accuracy of AENGloVe wo LSR drops significantly on all three datasets We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment The overall performance of AENGloVe and AENGloVeBiLSTM is relatively close AENGloVe performs better on the Restaurant dataset More importantly AENGloVe has fewer parameters and is easier to parallelize To figure out whether the proposed AENGloVe is a lightweight alternative of recurrent models we study the model size of each model on the Restaurant dataset Statistical results are reported in Table TABREF37 We implement all the compared models base on the same source code infrastructure use the same hyperparameters and run them on the same GPU RNNbased and BERTbased models indeed have larger model size ATAELSTM IAN RAM and AENGloVeBiLSTM are all attention based RNN models memory optimization for these models will be more difficult as the encoded hidden states must be kept simultaneously in memory in order to perform attention mechanisms MemNet has the lowest model size as it only has one shared attention layer and two linear layers it does not calculate hidden states of word embeddings AENGloVes lightweight level ranks second since it takes some more parameters than MemNet in modeling hidden states of sequences As a comparison the model size of AENGloVeBiLSTM is more than twice that of AENGloVe but does not bring any performance improvements In this work we propose an attentional encoder network for the targeted sentiment classification task which employs attention based encoders for the modeling between context and target We raise the the label unreliability issue add a label smoothing regularization to encourage the model to be less confident with fuzzy labels We also apply pretrained BERT to this task and obtain new stateoftheart results Experiments and analysis demonstrate the effectiveness and lightweight of the proposed model </s>
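A minimal PyTorch sketch of the label smoothing regularization described in the preceding section: the one-hot target is mixed with a uniform prior using smoothing parameter \epsilon (set to 0.2 in the experiments above), which is equivalent to adding a KL-to-uniform term to the cross-entropy loss. This illustrates LSR in general and is not the authors' implementation; the batch and class sizes are placeholders.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, epsilon=0.2):
    """Cross entropy against a smoothed label distribution:
    q'(k) = (1 - epsilon) * one_hot(k) + epsilon * u(k), with a uniform prior u."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)  # standard CE term
    uniform = -log_probs.mean(dim=-1)        # expectation of -log p under the uniform prior
    return ((1.0 - epsilon) * nll + epsilon * uniform).mean()

# Hypothetical usage on a batch of 3-class (negative / neutral / positive) predictions.
logits = torch.randn(8, 3, requires_grad=True)
labels = torch.randint(0, 3, (8,))
loss = label_smoothing_loss(logits, labels, epsilon=0.2)
loss.backward()
```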
<s>[INST] ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples [/INST] Opinion mining BIBREF0 is a huge field that covers many NLP tasks ranging from sentiment analysis BIBREF1 aspect extraction BIBREF2 and opinion summarization BIBREF3 among others Despite the vast literature on opinion mining the task on suggestion mining has given little attention Suggestion mining BIBREF4 is the task of collecting and categorizing suggestions about a certain product This is important because while opinions indirectly give hints on how to improve a product eg analyzing reviews suggestions are direct improvement requests eg tips advice recommendations from people who have used the product To this end BIBREF5 organized a shared task specifically on suggestion mining called SemEval 2019 Task 9 Suggestion Mining from Online Reviews and Forums The shared task is composed of two subtasks Subtask A and B In Subtask A systems are tasked to predict whether a sentence of a certain domain ie electronics entails a suggestion or not given a training data of the same domain In Subtask B systems are tasked to do suggestion prediction of a sentence from another domain ie hotels Organizers observed four main challenges a sparse occurrences of suggestions b figurative expressions c different domains and d complex sentences While previous attempts BIBREF6 BIBREF4 BIBREF7 made use of humanengineered features to solve this problem the goal of the shared task is to leverage the advancements seen on neural networks by providing a larger dataset to be used on dataintensive models to achieve better performance This paper describes our system JESSI Joint Encoders for Stable Suggestion Inference JESSI is built as a combination of two neuralbased encoders using multiple pretrained word embeddings including BERT BIBREF8 a pretrained deep bidirectional transformer that is recently reported to perform exceptionally well across several tasks The main intuition behind JESSI comes from our finding that although BERT gives exceptional performance gains when applied to indomain samples it becomes unstable when applied to outofdomain samples even when using a domain adversarial training BIBREF9 module This problem is mitigated using two tricks 1 jointly training BERT with a CNNbased encoder and 2 using an RNNbased encoder on top of BERT before feeding to the classifier JESSI is trained using only the datasets given on the shared task without using any additional external data Despite this JESSI performs second on Subtask A with an F1 score of 7778 among 33 other team submissions It also performs well on Subtask B with an F1 score of 7959 We present our model JESSI which stands for Joint Encoders for Stable Suggestion Inference shown in Figure FIGREF4 Given a sentence INLINEFORM0 JESSI returns a binary suggestion label INLINEFORM1 JESSI consists of four important components 1 A BERTbased encoder that leverages general knowledge acquired from a large pretrained language model 2 A CNNbased encoder that learns taskspecific sentence representations 3 an MLP classifier that predicts the label given the joint encodings and 4 a domain adversarial training module that prevents the model to distinguish between the two domains In this section we show our results and experiments We denote JESSIA as our model for Subtask A ie BERT INLINEFORM0 CNNCNN INLINEFORM1 Att and JESSIB as our model for Subtask B ie BERT INLINEFORM2 BiSRUCNN INLINEFORM3 AttDomAdv The performance of the models is measured and compared using the 
F1score We presented JESSI Joint Encoders for Stable Suggestion Inference our system for the SemEval 2019 Task 9 Suggestion Mining from Online Reviews and Forums JESSI builds upon jointly combined encoders borrowing pretrained knowledge from a language model BERT and a translation model CoVe We found that BERT alone performs bad and unstably when tested on outofdomain samples We mitigate the problem by appending an RNNbased sentence encoder above BERT and jointly combining a CNNbased encoder Results from the shared task show that JESSI performs competitively among participating models obtaining second place on Subtask A with an FScore of 7778 It also performs well on Subtask B with an FScore of 7959 even without using any additional external data This research was supported by the MSIT Ministry of Science ICT Korea under National Program for Excellence in SW 2015000910 and Artificial Intelligence Contact Center Solution 2018000605 supervised by the IITPInstitute for Information Communications Technology Planning Evaluation </s>
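A rough sketch of the joint-encoder idea behind JESSI: a BERT-derived sentence vector and a CNN-based sentence vector are concatenated and fed to an MLP classifier. For brevity the CNN here reads BERT's hidden states, and the attention pooling, BiSRU encoder, and domain-adversarial module from the paper are omitted, so this should be read as an assumption-laden illustration (HuggingFace `transformers` assumed) rather than the actual JESSI architecture.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class JointEncoders(nn.Module):
    """Simplified joint encoders: a BERT [CLS] vector plus a CNN sentence vector,
    concatenated and classified by an MLP."""

    def __init__(self, hidden=768, num_filters=100, kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k, padding=k // 2) for k in kernel_sizes]
        )
        self.classifier = nn.Sequential(
            nn.Linear(hidden + num_filters * len(kernel_sizes), 300),
            nn.ReLU(),
            nn.Linear(300, num_classes),
        )

    def forward(self, input_ids, attention_mask):
        states = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        bert_vec = states[:, 0]                        # [CLS] vector from the BERT encoder
        x = states.transpose(1, 2)                     # (batch, hidden, seq_len) for Conv1d
        cnn_vec = torch.cat([conv(x).max(dim=2).values for conv in self.convs], dim=1)
        return self.classifier(torch.cat([bert_vec, cnn_vec], dim=1))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Please add a dark mode to the editor."], return_tensors="pt", padding=True)
logits = JointEncoders()(batch["input_ids"], batch["attention_mask"])
```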
<s>[INST] DENS: A Dataset for Multi-class Emotion Analysis [/INST] Humans experience a variety of complex emotions in daily life These emotions are heavily reflected in our language in both spoken and written forms Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1 BIBREF2 These datasets are often limited in length eg by the number of words in tweets purpose eg product reviews or emotional spectrum eg binary classification Character dialogues and narratives in storytelling usually carry strong emotions A memorable story is often one in which the emotional journey of the characters resonates with the reader Indeed emotion is one of the most important aspects of narratives In order to characterize narrative emotions properly we must move beyond binary constraints eg good or bad happy or sad In this paper we introduce the Dataset for Emotions of Narrative Sequences DENS for emotion analysis consisting of passages from longform fictional narratives from both classic literature and modern stories in English The data samples consist of selfcontained passages that span several sentences and a variety of subjects Each sample is annotated by using one of 9 classes and an indicator for annotator agreement Using the categorical basic emotion model BIBREF3 BIBREF4 BIBREF5 studied creating lexicons from tweets for use in emotion analysis Recently BIBREF1 BIBREF6 and BIBREF2 proposed sharedtasks for multiclass emotion analysis based on tweets Fewer works have been reported on understanding emotions in narratives Emotional Arc BIBREF7 is one recent advance in this direction The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg For labelled datasets on narratives BIBREF8 provided a sentencelevel annotated corpus of childrens stories and BIBREF9 provided phraselevel annotations on selected Project Gutenberg titles To the best of our knowledge the dataset in this work is the first to provide multiclass emotion labels on passages selected from both Project Gutenberg and modern narratives The dataset is available upon request for noncommercial research only purposes In this section we describe the process used to collect and annotate the dataset The dataset is annotated based on a modified Plutchiks wheel of emotions The original Plutchiks wheel consists of 8 primary emotions Joy Sadness Anger Fear Anticipation Surprise Trust Disgust In addition more complex emotions can be formed by combing two basic emotions For example Love is defined as a combination of Joy and Trust Fig 1 The intensity of an emotion is also captured in Plutchiks wheel For example the primary emotion of Anger can vary between Annoyance mild and Rage intense We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories Hence we modified our annotation scheme by removing Trust and adding Love We also added the Neutral category to denote passages that do not exhibit any emotional content The final annotation categories for the dataset are Joy Sadness Anger Fear Anticipation Surprise Love Disgust Neutral We selected both classic and modern narratives in English for this dataset The modern narratives were sampled based on 
popularity from Wattpad We parsed selected narratives into passages where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens In longform narratives many nonconversational passages are intended for transition or scene introduction and may not carry any emotion We divided the eligible passages into two parts and one part was pruned using selected emotionrich but ambiguous lexicons such as cry punch kiss etc Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages See Appendix SECREF25 for the lexicons used MTurk was set up using the standard sentiment template and instructed the crowd annotators to pick the bestmajor emotion embodied in the passage We further provided instructions to clarify the intensity of an emotion such as RageAnnoyance is a form of Anger SerenityEcstasy is a form of Joy and Love includes RomanticFamilyFriendship along with sample passages We required all annotators have a master MTurk qualification Each passage was labelled by 3 unique annotators Only passages with a majority agreement between annotators were accepted as valid This is equivalent to a Fleisss kappa score of greater than 04 For passages without majority agreement between annotators we consolidated their labels using inhouse data annotators who are experts in narrative content A passage is accepted as valid if the inhouse annotators label matched any one of the MTurk annotators labels The remaining passages are discarded We provide the fraction of annotator agreement for each label in the dataset Though passages may lose some emotional context when read independently of the complete narrative we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions During the annotation process several annotators had suggested for us to include additional emotions such as confused pain and jealousy which are common to narratives As they were not part of the original Plutchiks wheel we decided to not include them An interesting future direction is to study the relationship between emotions such as pain versus sadness or confused versus surprise and improve the emotion model for narratives The dataset contains a total of 9710 passages with an average of 624 sentences per passage 1616 words per sentence and an average length of 86 words The vocabulary size is 28K when lowercased It contains over 1600 unique titles across multiple categories including 88 titles 1520 passages from Project Gutenberg All of the modern narratives were written after the year 2000 with notable amount of themes in comingofage strongfemalelead and LGBTQ The genre distribution is listed in Table TABREF8 In the final dataset 210 of the data has consensus between all annotators 735 has majority agreement and 548 has labels assigned after consultation with inhouse annotators The distribution of data points over labels with top lexicons lowercased normalized is shown in Table TABREF9 Note that the Disgust category is very small and should be discarded Furthermore we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well Table TABREF10 shows a few examples labelled data from classic titles More examples can be found in Table TABREF26 in the Appendix SECREF27 We performed benchmark experiments on the dataset using several different algorithms In all experiments we have discarded the data labelled with Surprise and Disgust We 
preprocessed the data by using the SpaCy pipeline We masked out named entities with entitytype specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification Benchmark results are shown in Table TABREF17 The dataset is approximately balanced after discarding the Surprise and Disgust classes We report the average microF1 scores with 5fold cross validation for each technique We provide a brief overview of each benchmark experiment below Among all of the benchmarks Bidirectional Encoder Representations from Transformers BERT BIBREF11 achieved the best performance with a 0604 microF1 score Overall we observed that deeplearning based techniques performed better than lexical based methods This suggests that a method which attends to context and themes could do well on the dataset We computed bagofwordsbased benchmarks using the following methods Classification with TFIDF Linear SVM TFIDF SVM Classification with Depeche Emotion lexicons BIBREF12 Linear SVM Depeche SVM Classification with NRC Emotion lexicons BIBREF13 BIBREF14 Linear SVM NRC SVM Combination of TFIDF and NRC Emotion lexicons TFNRC SVM We also used simple classification models with learned embeddings We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier For this benchmark we considered a Hierarchical RNN following BIBREF16 We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents The tokens of a sentence were processed independently of other sentence tokens For each direction in the tokenlevel BiLSTM the last outputs were concatenated and fed into the sentencelevel BiLSTM as inputs The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer We initialized tokens with publicly available embeddings trained with GloVe BIBREF18 Sentence boundaries were provided by SpaCy Dropout was applied to the dense hidden layers during training One challenge with RNNbased solutions for text classification is finding the best way to combine wordlevel representations into higherlevel representations Selfattention BIBREF19 BIBREF20 BIBREF21 has been adapted to text classification providing improved interpretability and performance We used BIBREF20 as the basis of this benchmark The benchmark used a layered Bidirectional RNN 60 units with GRU cells and a dense layer Both selfattention layers were 60 units in size and crossentropy was used as the cost function Note that we have omitted the orthogonal regularizer term since this dataset is relatively small compared to the traditional datasets used for training such a model We did not observe any significant performance gain while using the regularizer term in our experiments Deep Contextualized Word Representations ELMo BIBREF22 have shown recent success in a number of NLP tasks The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words We used the pretrained ELMo model v2 available on Tensorhub for this benchmark We fed the word embeddings of ELMo as input into a one layer Bidirectional RNN 16 units with GRU cells with dropout and a dense layer Crossentropy was used as the cost function Bidirectional Encoder Representations from Transformers BERT BIBREF11 has achieved stateoftheart results on several NLP tasks including sentence classification We used the finetuning procedure outlined in the original work to 
adapt the pretrained uncased BERTtextrm scriptsize LARGE to a multiclass passage classification task This technique achieved the best result among our benchmarks with an average microF1 score of 604 We introduce DENS a dataset for multiclass emotion analysis from longform narratives in English We provide a number of benchmark results based on models ranging from bagofword models to methods based on pretrained language models ELMo and BERT Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis The results also demonstrate that attentionbased models could significantly improve performance on classification tasks such as emotion analysis Interesting future directions for this work include 1 incorporating commonsense knowledge into emotion analysis to capture semantic context and 2 using fewshot learning to bootstrap and improve performance of underrepresented emotions Finally as narrative passages often involve interactions between multiple emotions one avenue for future datasets could be to focus on the multiemotion complexities of human language and their contextual interactions Table TABREF26 shows sample passages from classic titles with corresponding labels </s>
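One of the simplest benchmarks above, TF-IDF features with a linear SVM evaluated by cross-validated micro-F1, can be reproduced with scikit-learn along the following lines. The passages and labels are invented stand-ins; the real experiment uses the DENS passages (with Surprise and Disgust discarded) and 5-fold cross-validation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy stand-in data: two passages per emotion class so that 2-fold CV runs;
# on the full dataset, cv=5 matches the benchmark setup described above.
passages = [
    "She wept quietly beside the window.", "Tears ran down his face as he read the letter.",
    "He could not wait for the morning to come.", "She counted the hours until the ship arrived.",
    "The room erupted in laughter and song.", "They danced in the square until midnight.",
    "A cold dread settled over the camp.", "Every shadow seemed to hide something watching.",
]
labels = ["sadness", "sadness", "anticipation", "anticipation", "joy", "joy", "fear", "fear"]

model = make_pipeline(TfidfVectorizer(lowercase=True, ngram_range=(1, 2)), LinearSVC())
scores = cross_val_score(model, passages, labels, cv=2, scoring="f1_micro")
print(scores.mean())
```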
<s>[INST] Multitask Learning with CTC and Segmental CRF for Speech Recognition [/INST] Stateoftheart speech recognition accuracy has significantly improved over the past few years since the application of deep neural networks BIBREF0 BIBREF1 Recently it has been shown that with the application of both neural network acoustic model and language model an automatic speech recognizer can approach humanlevel accuracy on the Switchboard conversational speech recognition benchmark using around 2000 hours of transcribed data BIBREF2 While progress is mainly driven by well engineered neural network architectures and a large amount of training data the hidden Markov model HMM that has been the backbone for speech recognition for decades is still playing a central role Though tremendously successful for the problem of speech recognition the HMMbased pipeline factorizes the whole system into several components and building these components separately may be less computationally efficient when developing a largescale system from thousands to hundred of thousands of examples BIBREF3 Recently along with hybrid HMMNN frameworks for speech recognition there has been increasing interest in endtoend training approaches The key idea is to directly map the input acoustic frames to output characters or words without the intermediate alignment to contextdependent phones used by HMMs In particular three architectures have been proposed for the goal of endtoend learning connectionist temporal classification CTC BIBREF4 BIBREF5 BIBREF6 BIBREF7 sequencetosequence with attention model BIBREF8 BIBREF9 BIBREF10 and neural network segmental conditional random field SCRF BIBREF11 BIBREF12 These endtoend models simplify the pipeline of speech recognition significantly They do not require intermediate alignment or segmentation like HMMs instead the alignment or segmentation is marginalized out during training for CTC and SCRF or inferred by the attention mechanism In terms of the recognition accuracy however the endtoend models usually lag behind their HMMbased counterparts Though CTC has been shown to outperform HMM systems BIBREF13 the improvement is based on the use of contextdependent phone targets and a very large amount of training data Therefore it has almost the same system complexity as HMM acoustic models When the training data is less abundant it has been shown that the accuracy of CTC systems degrades significantly BIBREF14 However endtoend models have the flexibility to be combined to mitigate their individual weaknesses For instance multitask learning with attention models has been investigated for machine translation BIBREF15 and Mandarin speech recognition using joint CharacterPinyin training BIBREF16 In BIBREF17 Kim et al proposed a multitask learning approach to train a joint attention model and a CTC model using a shared encoder They showed that the CTC auxiliary task can help the attention model to overcome the misalignment problem in the initial few epochs and speed up the convergence of the attention model Another nice property of the multitask learning approach is that the joint model can still be trained endtoend Inspired by this work we study endtoend training of a joint CTC and SCRF model using an interpolated loss function The key difference of our study from BIBREF17 is that the two loss functions of the CTC and attention models are locally normalized for each output token and they are both trained using the cross entropy criterion However the SCRF loss function is normalized at the 
sequence level, which is similar to the sequence discriminative training objective function for HMMs. From this perspective, the interpolation of the CTC and SCRF loss functions is analogous to the sequence discriminative training of HMMs with CE regularization to overcome overfitting, where a sequence-level loss is also interpolated with a frame-level loss, e.g., BIBREF18. Similar to the observations in BIBREF17, we demonstrate that the joint training approach improves the recognition accuracies of both the CTC and SCRF acoustic models. Further, we also show that CTC can be used to pre-train the neural network feature extractor to speed up the convergence of the joint model. Experiments were performed on the TIMIT database.

SCRF is a variant of the linear-chain CRF model in which each output token corresponds to a segment of input tokens instead of a single input instance. In the context of speech recognition, given a sequence of input vectors of T frames X = (x_1, \cdots, x_T) and its corresponding sequence of output labels y = (y_1, \cdots, y_J), the zeroth-order linear-chain CRF defines the sequence-level conditional probability as

P(y \mid X) = \frac{1}{Z(X)} \prod_{t=1}^{T} \exp f(y_t, x_t),

where Z(X) denotes the normalization term and T = J. Extension to higher-order models is straightforward, but it is usually computationally much more expensive. The model defined above requires the lengths of X and y to be equal, which makes it inappropriate for speech recognition, because the lengths of the input and output sequences are not equal. For the case where T \ge J, as in speech recognition, SCRF defines the sequence-level conditional probability with the auxiliary segment labels E = (e_1, \cdots, e_J) as

P(y, E \mid X) = \frac{1}{Z(X)} \prod_{j=1}^{J} \exp f(y_j, e_j, \bar{x}_j),

where e_j = \langle s_j, n_j \rangle is a tuple of the beginning (s_j) and end (n_j) time tags for the segment of y_j; y_j \in \mathcal{Y}, where \mathcal{Y} denotes the vocabulary set; and \bar{x}_j is the embedding vector of the segment corresponding to the token y_j. In this case, Z(X) sums over all the possible (y, E) pairs, i.e.,

Z(X) = \sum_{y, E} \prod_{j=1}^{J} \exp f(y_j, e_j, \bar{x}_j).   (Eq. 1)

Similar to other CRFs, the function f(\cdot) is defined as

f(y_j, e_j, \bar{x}_j) = \mathbf{w}^\top \Phi(y_j, e_j, \bar{x}_j),   (Eq. 2)

where \Phi(\cdot) denotes the feature function and \mathbf{w} is the weight vector. Most conventional approaches for SCRF-based acoustic models use a manually defined feature function \Phi(\cdot), where the features and segment boundary information are provided by an auxiliary system BIBREF19, BIBREF20. In BIBREF21, BIBREF12, we proposed an end-to-end training approach for SCRFs, in which \Phi(\cdot) was defined with neural networks and the segment-level features were learned by RNNs. The model was referred to as the segmental RNN (SRNN), and it will be used as the implementation of the SCRF acoustic model for multitask learning in this study.

SRNN uses an RNN to learn segment-level acoustic embeddings. Given the input sequence X = (x_1, \cdots, x_T), we need to compute the embedding vector \bar{x}_j in Eq. 2 corresponding to the segment e_j = \langle s_j, n_j \rangle. Since the segment boundaries are known, it is straightforward to employ an RNN to map the segment into a vector, as

h_{s_j} = \mathrm{RNN}(h_0, x_{s_j})
h_{s_j+1} = \mathrm{RNN}(h_{s_j}, x_{s_j+1})
\vdots
h_{n_j} = \mathrm{RNN}(h_{n_j-1}, x_{n_j}),

where h_0 denotes the initial hidden state, which is initialized to zero, and \mathrm{RNN}(\cdot) denotes the nonlinear recurrence operation used in an RNN, which takes the previous hidden state and the feature vector at the current timestep as inputs and produces an updated hidden state vector.
Given the recurrent hidden states, the embedding vector can simply be defined as \bar{x}_j = h_{n_j}, as in our previous work BIBREF12. However, the drawback of this implementation is the large memory cost, as we need to store the array of hidden states (h_{s_j}, \cdots, h_{n_j}) for all the possible segments \langle s_j, n_j \rangle. If we denote by H the dimension of an RNN hidden state, the memory cost is on the order of O(T^2 H), where T is the length of X. This is especially problematic for the joint model, as the CTC model requires additional memory space. In this work, we adopt another approach that requires much less memory. In this approach, we use an RNN to read the whole input sequence as

h_1 = \mathrm{RNN}(h_0, x_1)
h_2 = \mathrm{RNN}(h_1, x_2)
\vdots
h_T = \mathrm{RNN}(h_{T-1}, x_T),

and we define the embedding vector for the segment e_j = \langle s_j, n_j \rangle as the concatenation \bar{x}_j = [h_{s_j}; h_{n_j}]. In this case, we only provide the context information for the feature function \Phi(\cdot) to extract segmental features. We refer to this approach as context-aware embedding. Since we only need to read the input sequence once, the memory requirement is on the order of O(TH), which is much smaller. The cost, however, is a slight degradation in recognition accuracy. This model is illustrated in Figure 1.

The feature function \Phi(\cdot) also requires a vector representation of the label y_j. This embedding vector can be obtained using a linear embedding matrix, following common practice for RNN language models. More specifically, y_j is first represented as a one-hot vector v_j, which is then mapped into a continuous space by a linear embedding matrix M as

u_j = M v_j.   (Eq. 4)

Given the acoustic embedding \bar{x}_j and the label embedding u_j, the feature function \Phi(\cdot) can be represented as

\Phi(y_j, e_j, \bar{x}_j) = \sigma(W_1 u_j + W_2 \bar{x}_j + b),

where \sigma denotes a nonlinear activation function (e.g., sigmoid or tanh); W_1, W_2, and b are weight matrices and a bias vector. This corresponds to one layer of nonlinear transformation; in fact, it is straightforward to stack multiple nonlinear layers in this feature function.

For speech recognition, the segmentation labels E are usually unknown in the training set. In this case, we cannot train the model directly by maximizing the conditional probability P(y, E \mid X) defined above. However, the problem can be addressed by marginalizing out the segmentation variable as

\mathcal{L}_{scrf} = -\log P(y \mid X) = -\log \sum_{E} P(y, E \mid X) = -\log \frac{\sum_{E} \prod_{j} \exp f(y_j, e_j, \bar{x}_j)}{Z(X)} = -\log Z(X, y) + \log Z(X),

where Z(X, y) denotes the summation over all possible segmentations when only y is observed. To simplify notation, the objective function \mathcal{L}_{scrf} is defined here with only one training utterance. However, the number of possible segmentations is exponential in the length of X, which makes the naive computation of both Z(X, y) and Z(X) impractical. To address this problem, a dynamic programming algorithm can be applied, which reduces the computational complexity to O(T^2 \cdot |\mathcal{Y}|) BIBREF22. The computational cost can be further reduced by limiting the maximum length of the possible segments. The reader is referred to BIBREF12 for further details, including the decoding algorithm.
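To make the segmental scoring concrete, here is a small PyTorch sketch of the one-layer feature function \Phi and the linear scorer \mathbf{w}^\top \Phi over a context-aware segment embedding formed from the boundary hidden states. The 64-dimensional label embedding and scorer follow the experimental setup below, but the module structure, the tanh nonlinearity, and the exact form of the segment embedding are simplifying assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SegmentalScorer(nn.Module):
    """One-layer feature function Phi and linear scorer w^T Phi for a segment,
    using a context-aware embedding [h_{s_j}; h_{n_j}] and a label embedding u_j."""

    def __init__(self, num_labels, label_dim=64, hidden_dim=128, feat_dim=64):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, label_dim)   # u_j = M v_j (Eq. 4)
        self.W1 = nn.Linear(label_dim, feat_dim, bias=False)
        self.W2 = nn.Linear(2 * hidden_dim, feat_dim)            # bias vector b lives here
        self.w = nn.Linear(feat_dim, 1, bias=False)              # w^T Phi(...)

    def forward(self, hidden_states, start, end, label_id):
        # hidden_states: (T, hidden_dim) outputs of the RNN read over the whole utterance.
        x_bar = torch.cat([hidden_states[start], hidden_states[end]], dim=-1)
        u = self.label_embed(torch.tensor(label_id))
        phi = torch.tanh(self.W1(u) + self.W2(x_bar))            # Phi = sigma(W1 u_j + W2 x_bar_j + b)
        return self.w(phi).squeeze(-1)                           # f(y_j, e_j, x_bar_j)

# Toy usage with assumed dimensions: score the segment spanning frames 5..11 for label 3.
T, hidden_dim = 40, 128
hidden_states = torch.randn(T, hidden_dim)
scorer = SegmentalScorer(num_labels=48)
score = scorer(hidden_states, start=5, end=11, label_id=3)
```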
CTC also directly computes the conditional probability $P(y \mid X)$, with the key difference from the SCRF being that it normalizes the probability distribution at the frame level. To address the length mismatch between the input and output sequences, CTC allows repetitions of output labels and introduces a special blank token ($\varnothing$), which represents the probability of not emitting any label at a particular time step. The conditional probability is then obtained by summing the probabilities of all the paths that correspond to $y$ after merging the repeated labels and removing the blank tokens, i.e.,
$$P(y \mid X) = \sum_{\pi \in \Psi(y)} P(\pi \mid X),$$
where $\Psi(y)$ denotes the set of all possible paths that correspond to $y$ after repetitions of labels and insertions of the blank token. Since the length of $\pi$ is the same as that of $X$, the probability $P(\pi \mid X)$ is approximated under an independence assumption as
$$P(\pi \mid X) \approx \prod_{t=1}^{T} P(\pi_t \mid x_t),$$
where $\pi_t$ ranges over $\mathcal{Y} \cup \lbrace \varnothing \rbrace$ and $P(\pi_t \mid x_t)$ can be computed using the softmax function. The training criterion for CTC is to maximize the conditional probability of the ground-truth labels, which is equivalent to minimizing the negative log-likelihood
$$\mathcal{L}_{\mathit{ctc}} = -\log P(y \mid X),$$
which can be reformulated as the CE criterion. More details regarding the computation of the loss and the back-propagation algorithm used to train CTC models can be found in BIBREF23. Training the two models jointly is trivial: we simply interpolate the CTC and SCRF loss functions as
$$\mathcal{L} = \lambda \, \mathcal{L}_{\mathit{ctc}} + (1 - \lambda) \, \mathcal{L}_{\mathit{scrf}},$$
where $\lambda \in [0, 1]$ is the interpolation weight. The two models share the same neural network for feature extraction. In this work, we focus on RNNs with long short-term memory (LSTM) BIBREF24 units for feature extraction; other types of neural architecture, e.g., convolutional neural networks (CNNs) or combinations of CNNs and RNNs, may be considered in future work. Our experiments were performed on the TIMIT database, and both the SRNN and CTC models were implemented using the DyNet toolkit BIBREF25. We followed the standard protocol of the TIMIT dataset, and our experiments were based on the Kaldi recipe BIBREF26. We used the core test set as our evaluation set, which has 192 utterances. Our models were trained with 48 phonemes, and their predictions were converted to 39 phonemes before scoring. The dimension of $\mathbf{u}_j$ was fixed to 64, and the dimension of $\mathbf{w}$ in Eq. 2 is also 64. We set the initial SGD learning rate to 0.1 and exponentially decayed it by a factor of 0.75 whenever the validation error stopped decreasing. We also subsampled the acoustic sequence by a factor of 4 using the hierarchical RNN as in BIBREF12. Our models were trained with dropout regularization BIBREF27, using an implementation specific to recurrent networks BIBREF28; the dropout rate was 0.2 unless specified otherwise. Our models were randomly initialized with the same random seed. Table 1 shows the baseline results of the SRNN and CTC models using two different kinds of features. The FBANK features are 120-dimensional with delta and delta-delta coefficients, and the fMLLR features are 40-dimensional and were obtained from a Kaldi baseline system. We used 3-layer bidirectional LSTMs for feature extraction, and we used the greedy best-path decoding algorithm for both models. Our SRNN and CTC models achieved comparable phone error rates (PER) for both kinds of features. However, for the CTC system, Graves et al. BIBREF29 obtained a better result than ours (18.6 vs. 19.9) using a neural network of about the same size (3 hidden layers of bidirectional LSTMs with 250 hidden units). Apart from the implementation difference of using different code bases, Graves et al. BIBREF29 applied prefix decoding with beam search, which may have lower search error than our best-path decoding algorithm. Table 2 shows the results of multitask learning for CTC and SRNN using the interpolated loss given above. We only show results using LSTMs with 250-dimensional hidden states. The interpolation weight was set to 0.5; in our experiments, tuning the interpolation weight did not further improve the recognition accuracy. From Table 2 we can see
that multitask learning improves recognition accuracies of both SRNN and CTC acoustic models which may due to the regularization effect of the joint training loss The improvement for FBANK features is much larger than fMLLR features In particular with multitask learning the recognition accuracy of our CTC system with best path decoding is comparable to the results obtained by Graves et al BIBREF29 with beam search decoding One of the major drawbacks of SCRF models is their high computational cost In our experiments the CTC model is around 34 times faster than the SRNN model that uses the same RNN encoder The joint model by multitask learning is slightly more expensive than the standalone SRNN model To cut down the computational cost we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model This is analogous to sequence training of HMM acoustic models where the network is usually pretrained by the framelevel CE criterion Figure 2 shows the convergence curves of the joint model with and without CTC pretraining and we see pretraining indeed improves the convergence speed of the joint model We investigated multitask learning with CTC and SCRF for speech recognition in this paper Using an RNN encoder for feature extraction both CTC and SCRF can be trained endtoend and the two models can be trained together by interpolating the two loss functions From experiments on the TIMIT dataset the multitask learning approach improved the recognition accuracies of both CTC and SCRF acoustic models We also showed that CTC can be used to pretrain the RNN encoder speeding up the training of the joint model In the future we will study the multitask learning approach for largerscale speech recognition tasks where the CTC pretraining approach may be more helpful to overcome the problem of high computational cost We thank the NVIDIA Corporation for the donation of a Titan X GPU </s>
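As a supplementary illustration of the joint training scheme described in this paper, here is a minimal PyTorch sketch of the interpolated loss $\mathcal{L} = \lambda\mathcal{L}_{ctc} + (1-\lambda)\mathcal{L}_{scrf}$ with a shared encoder. This is not the authors' DyNet implementation: the SCRF term is left as a stub (its dynamic-programming computation is beyond the scope of this sketch), and all names, shapes, and the choice of encoder are assumptions.

```python
import torch
import torch.nn as nn

class JointCTCSCRF(nn.Module):
    """Sketch of the interpolated multi-task objective with a shared feature extractor."""

    def __init__(self, encoder, ctc_head, scrf_head, interp=0.5):
        super().__init__()
        self.encoder = encoder      # shared feature extractor, e.g. a bidirectional LSTM stack
        self.ctc_head = ctc_head    # projects encoder states to |Y|+1 logits (with blank)
        self.scrf_head = scrf_head  # stub computing the marginalized SCRF loss
        self.interp = interp        # lambda in [0, 1]
        self.ctc_loss = nn.CTCLoss(blank=0)

    def forward(self, x, x_lens, y, y_lens):
        h = self.encoder(x)                                # assumed shape (T, B, H)
        log_probs = self.ctc_head(h).log_softmax(dim=-1)   # (T, B, |Y|+1)
        l_ctc = self.ctc_loss(log_probs, y, x_lens, y_lens)
        l_scrf = self.scrf_head(h, y, x_lens, y_lens)      # stub: -log Z(X, y) / Z(X)
        return self.interp * l_ctc + (1.0 - self.interp) * l_scrf
```

With `interp=0.5` this matches the setting reported in Table 2; CTC pretraining of the shared encoder can be realized by first optimizing `l_ctc` alone before switching to the interpolated loss.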
<s>[INST] Filling Gender&Number Gaps in Neural Machine Translation with Black-box Context Injection [/INST] A common way for marking information about gender number and case in language is morphology or the structure of a given word in the language However different languages mark such information in different ways for example in some languages gender may be marked on the head word of a syntactic dependency relation while in other languages it is marked on the dependent on both or on none of them BIBREF0 This morphological diversity creates a challenge for machine translation as there are ambiguous cases where more than one correct translation exists for the same source sentence For example while the English sentence I love language is ambiguous with respect to the gender of the speaker Hebrew marks verbs for the gender of their subject and does not allow genderneutral translation This allows two possible Hebrew translations one in a masculine and the other in a feminine form As a consequence a sentencelevel translator either human or machine must commit to the gender of the speaker adding information that is not present in the source Without additional context this choice must be done arbitrarily by relying on language conventions world knowledge or statistical stereotypical knowledge Indeed the English sentence I work as a doctor is translated into Hebrew by Google Translate using the masculine verb form oved indicating a male speaker while I work as a nurse is translated with the feminine form ovedet indicating a female speaker verified on March 2019 While this is still an issue there have been recent efforts to reduce it for specific language pairs We present a simple blackbox method to influence the interpretation chosen by an NMT system in these ambiguous cases More concretely we construct predefined textual hints about the gender and number of the speaker and the audience the interlocutors which we concatenate to a given input sentence that we would like to translate accordingly We then show that a blackbox NMT system makes the desired morphological decisions according to the given hint even when no other evidence is available on the source side While adding those hints results in additional text on the target side we show that it is simple to remove leaving only the desired translation Our method is appealing as it only requires simple preandpost processing of the inputs and outputs without considering the system internals or requiring specific annotated data and training procedure as in previous work BIBREF1 We show that in spite of its simplicity it is effective in resolving many of the ambiguities and improves the translation quality in up to 23 BLEU when given the correct hints which may be inferred from text metadata or other sources Finally we perform a finegrained syntactic analysis of the translations generated using our method which shows its effectiveness Different languages use different morphological features marking different properties on different elements For example English marks for number case aspect tense person and degree of comparison However English does not mark gender on nouns and verbs Even when a certain property is marked languages differ in the form and location of the marking BIBREF0 For example marking can occur on the head of a syntactic dependency construction on its argument on both requiring agreement or on none of them Translation systems must generate correct targetlanguage morphology as part of the translation process This requires knowledge of 
both the sourceside and targetside morphology Current stateoftheart translation systems do capture many aspects of natural language including morphology when a relevant context is available BIBREF2 BIBREF3 but resort to guessing based on the trainingdata statistics when it is not Complications arise when different languages convey different kinds of information in their morphological systems In such cases a translation system may be required to remove information available in the source sentence or to add information not available in it where the latter can be especially tricky Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of firstperson sentences in order to produce the desired targetside morphology when the information is not available in the source sentence The approach we take in the current work is that of blackbox injection in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system without having access to its internals or its training procedure as proposed by vanmassenhovehardmeierway2018EMNLP We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context We conjecture that there are enough sentenceinternal pronominal coreference chains appearing in the training data of largescale NMT systems such that stateoftheart NMT systems can and do track sentenceinternal coreference We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience More concretely a sentence such as I love you is ambiguous with respect to the gender of the speaker and the gender and number of the audience However sentences such as I love you she told him are unambiguous given the coreference groups I she and you him which determine I to be feminine singular and you to be masculine singular We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as She told him or She told them that relying on the NMT systems coreference tracking abilities to trigger the correctly marked translation and then remove the redundant translated prefix from the generated target sentence We observed that using a parataxis construction ie she said to him almost exclusively results in targetside parataxis as well in 998 of our examples making it easy to identify and strip the translated version from the target side Moreover because the parataxis construction is grammatically isolated from the rest of the sentence it can be stripped without requiring additional changes or modification to the rest of the sentence ensuring grammaticality To demonstrate our method in a blackbox setting we focus our experiments on Googles machine translation system GMT accessed through its Cloud API To test the method on realworld sentences we consider a monologue from the standup comedy show Sarah Silverman A Speck of Dust The monologue consists of 1244 English sentences all by a female speaker conveyed to a plural genderneutral audience Our parallel corpora consists of the 1244 English sentences from the transcript and their corresponding Hebrew translations based on the Hebrew subtitles We translate the monologue one sentence at a time through the Google Cloud API Eyeballing the results suggest that most of the translations use the incorrect but default masculine and singular 
forms for the speaker and the audience respectively We expect that by adding the relevant condition of female speaking to an audience we will get better translations affecting both the gender of the speaker and the number of the audience To verify this we experiment with translating the sentences with the following variations No PrefixThe baseline translation as returned by the GMT system He saidSignaling a male speaker We expect to further skew the system towards masculine forms She saidSignaling a female speaker and unknown audience As this matches the actual speakers gender we expect an improvement in translation of firstperson pronouns and verbs with firstperson pronouns as subjects I said to themSignaling an unknown speaker and plural audience He said to themMasculine speaker and plural audience She said to themFemale speaker and plural audiencethe complete correct condition We expect the best translation accuracy on this setup Heshe said to himherHere we set an incorrect singular gendermarked audience to investigate our ability to control the audience morphology We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations We use the multibleuperl script from the Moses toolkit BIBREF6 Table shows BLEU scores for the different prefixes The numbers match our expectations Generally providing an incorrect speaker andor audience information decreases the BLEU scores while providing the correct information substantially improves it we see an increase of up to 23 BLEU over the baseline We note the BLEU score improves in all cases even when given the wrong gender of either the speaker or the audience We hypothesise this improvement stems from the addition of the word said which hints the model to generate a more spoken language which matches the tested scenario Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually The one outlier is providing She for the speaker and her for the audience While this is not the correct scenario we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence The BLEU score is an indication of how close the automated translation is to the reference translation but does not tell us what exactly changed concerning the gender and number properties we attempt to control We perform a finergrained analysis focusing on the relation between the injected speaker and audience information and the morphological realizations of the corresponding elements We parse the translations and the references using a Hebrew dependency parser In addition to the parse structure the parser also performs morphological analysis and tagging of the individual tokens We then perform the following analysis Speakers Gender Effects We search for firstperson singular pronouns with subject case ani unmarked for gender corresponding to the English I and consider the gender of its governing verb or adjectives in copular constructions such as I am nice The possible genders are masculine feminine and both where the latter indicates a case where the nonediacriticized written form admits both a masculine and a feminine reading We expect the gender to match the ones requested in the prefix Interlocutors Gender and Number Effects We search for secondperson pronouns and consider their gender and number For pronouns in subject position we also consider the gender and number of their governing verbs or adjectives in copular 
constructions For a singular audience we expect the gender and number to match the requested ones For a plural audience we expect the masculineplural forms Results Speaker Figure FIGREF3 shows the result for controlling the morphological properties of the speaker he she I said It shows the proportion of genderinflected verbs for the various conditions and the reference We see that the baseline system severely underpredicts the feminine form of verbs as compared to the reference The He said conditions further decreases the number of feminine verbs while the I said conditions bring it back to the baseline level Finally the She said prefixes substantially increase the number of femininemarked verbs bringing the proportion much closer to that of the reference though still underpredicting some of the feminine cases Results Audience The chart in Figure FIGREF3 shows the results for controlling the number of the audience to them vs nothing It shows the proportion of singular vs plural secondperson pronouns on the various conditions It shows a similar trend the baseline system severely underpredicts the plural forms with respect to the reference translation while adding the to them condition brings the proportion much closer to that of the reference Closely related to our work vanmassenhovehardmeierway2018EMNLP proposed a method and an EnglishFrench test set to evaluate genderaware translation based on the Europarl corpus BIBREF7 We evaluate our method using Google Translate and the given prefixes on their test set to see whether it is applicable to another language pair and domain Table shows the results of our approach vs their published results and the Google Translate baseline As may be expected Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models Using our method improves the BLEU score even further To test our methods outputs on multiple languages we run our preand postprocessing steps with Google Translate using examples we sourced from native speakers of different languages For every example we have an English sentence and two translations in the corresponding language one in masculine and one in feminine form Not all examples are using the same source English sentence as different languages mark different information Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for while for INLINEFORM1 languages both translations are masculine and for 1 language both are feminine E171101 showed that given input with author traits like gender it is possible to retain those traits in Statistical Machine Translation SMT models W174727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages burlothal01618387 presented a new protocol for evaluating the morphological competence of MT systems indicating that current translation systems only manage to capture some morphological phenomena correctly Regarding the application of constraints in NMT N161005 presented a method for controlling the politeness level in the generated output DBLPjournalscorrFiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism subjectiveobjective sentiment and others W174811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations Most closely to our work 
vanmassenhovehardmeierway2018EMNLP also addressed the missing gender information by training proprietary models with a genderindicatingprefix We differ from this work by treating the problem in a blackbox manner and by addressing additional information like the number of the speaker and the gender and number of the audience We highlight the problem of translating between languages with different morphological systems in which the target translation must contain gender and number information that is not available in the source We propose a method for injecting such information into a pretrained NMT model in a blackbox setting We demonstrate the effectiveness of this method by showing an improvement of 23 BLEU in an EnglishtoHebrew translation setting where the speaker and audience gender can be inferred We also perform a finegrained syntactic analysis that shows how our method enables to control the morphological realization of first and secondperson pronouns together with verbs and adjectives related to them In future work we would like to explore automatic generation of the injected context or the use of crosssentence context to infer the injected information </s>
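To make the black-box injection wrapper concrete, the sketch below illustrates the pre- and post-processing around an arbitrary translation API. It is our own illustration, not the authors' code: `translate` stands in for any black-box NMT system (the paper used the Google Cloud Translation API), the prefix strings are examples, and locating the target-side prefix by translating the hint on its own is one possible way to strip it.

```python
# Hedged sketch of the black-box gender/number injection described above.
# `translate` is any callable str -> str wrapping a black-box NMT system.

PREFIXES = {
    ("she", "them"): "She said to them: ",
    ("he", "them"): "He said to them: ",
    ("she", "him"): "She said to him: ",
    ("he", "her"): "He said to her: ",
    ("i", "them"): "I said to them: ",
}

def translate_with_hint(sentence: str, speaker: str, audience: str, translate) -> str:
    prefix = PREFIXES[(speaker, audience)]
    full_output = translate(prefix + sentence)      # black-box NMT call on the hinted input
    prefix_output = translate(prefix).strip()       # one way to identify the target-side hint
    if full_output.startswith(prefix_output):
        # The parataxis construction is grammatically isolated, so removing it
        # should leave a grammatical target sentence behind.
        return full_output[len(prefix_output):].lstrip(" ,:")
    return full_output  # fall back: return the raw output for manual inspection
```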
<s>[INST] Exploring End-to-End Techniques for Low-Resource Speech Recognition [/INST] Although development of the first speech recognition systems began half a century ago there has been a significant increase of the accuracy of ASR systems and number of their applications for the recent ten years even for lowresource languages BIBREF0 BIBREF1 This is mainly due to widespread applying of deep learning and very effective performance of neural networks in hybrid recognition systems DNNHMM However for last few years there has been a trend to change traditional ASR training paradigm Endtoend training systems gradually displace complex multistage learning process including training of GMMs BIBREF2 clustering of allophones states aligning of speech to clustered senones training neural networks with crossentropy loss followed by retraining with sequencediscriminative criterion The new approach implies training the system in one global step working only with acoustic data and reference texts and significantly simplifies or even completely excludes in some cases the decoding process It also avoids the problem of outofvocabulary words OOV because endtoend system trained with parts of the words as targets can construct new words itself using graphemes or subword units while traditional DNNHMM systems are limited with language model vocabulary The whole variety of endtoend systems can be divided into 3 main categories Connectionist Temporal Classification CTC BIBREF3 Sequencetosequence models with attention mechanism BIBREF4 RNNTransducers BIBREF5 Connectionist Temporal Classification CTC approach uses loss functions that utilize all possible alignments between reference text and audio data Targets for CTCbased system can be phonemes graphemes syllables and other subword units and even whole words However a lot more data is usually required to train such systems well compared to traditional hybrid systems Sequencetosequence models are used to map entire input sequences to output sequences without any assumptions about their alignment The most popular architecture for sequencetosequence models is encoderdecoder model with attention Encoder and decoder are usually constructed using recurrent neural networks basic attention mechanism calculates energy weights that emphasize importance of encoder vectors for decoding on this step and then sums all these vectors with energy weights Encoderdecoder models with attention mechanism show results close to traditional DNNHMM systems and in some cases surpass them but for a number of reasons their usage is still rather limited First of all this is related to the fact that such systems show best results when the duration of real utterances is close to the duration of utterances from training data However when the duration difference increases the performance degrades significantly BIBREF4 Moreover the entire utterance must be preprocessed by encoder before start of decoders work This is the reason why it is hard to apply the approach to recognize long recordings or streaming audio Segmenting long recordings into shorter utterances solves the duration issue but leads to a context break and eventually negatively affects recognition accuracy Secondly the computational complexity of encoderdecoder models is high because of recurrent networks usage so these models are rather slow and hard to parallelize The idea of RNNTransducer is an extension of CTC and provides the ability to model inner dependencies separately and jointly between elements of both input audio frames 
and output phonemes and other subword units sequences Despite of mathematical elegance such systems are very complicated and hard to implement so they are still rarely used although several impressive results were obtained using this technique CTCbased approach is easier to implement better scaled and has many degrees of freedom which allows to significantly improve baseline systems and achieve results close to stateoftheart Moreover CTCbased systems are well compatible with traditional WFSTdecoders and can be easily integrated with conventional ASR systems Besides as already mentioned CTCsystems are rather sensitive to the amount of training data so it is very relevant to study how to build effective CTCbased recognition system using a small amount of training samples It is especially actual for lowresource languages where we have only a few dozen hours of speech Building ASR system for lowresource languages is one of the aims of international Babel program funded by the Intelligence Advanced Research Projects Activity IARPA Within the program extensive research was carried out resulting in creation of a number of modern ASR systems for lowresource languages Recently endtoend approaches were applied to this task showing expectedly worse results than traditional systems although the difference is rather small In this paper we explore a number of ways to improve endtoend CTCbased systems in lowresource scenarios using the Turkish language dataset from the IARPA Babel collection In the next section we describe in more details different versions of CTCsystems and their application for lowresource speech recognition Section 3 describes the experiments and their results Section 4 summarizes the results and discusses possible ways for further work Development of CTCbased systems originates from the paper BIBREF3 where CTC loss was introduced This loss is a total probability of labels sequence given observation sequence which takes into account all possible alignments induced by a given words sequence Although a number of possible alignments increases exponentially with sequences lengths there is an efficient algorithm to compute CTC loss based on dynamic programming principle known as ForwardBackward algorithm This algorithm operates with posterior probabilities of any output sequence element observation given the time frame and CTC loss is differentiable with respect to these probabilities Therefore if an acoustic model is based on the neural network which estimates these posteriors its training may be performed with a conventional error backpropagation gradient descent BIBREF6 Training of ASR system based on such a model does not require an explicit alignment of input utterance to the elements of output sequence and thus may be performed in endtoend fashion It is also important that CTC loss accumulates the information about the whole output sequence and hence its optimization is in some sense an alternative to the traditional finetuning of neural network acoustic models by means of sequencediscriminative criteria such as sMBR BIBREF7 etc The implementation of CTC is conventionally based on RNNLSTM networks including bidirectional ones as acoustic models since they are known to model long context effectively The important component of CTC is a special blank symbol which fills in gaps between meaningful elements of output sequence to equalize its length to the number of frames in the input sequence It corresponds to a separate output neuron and blank symbols are deleted from the recognized 
sequence to obtain the final result In BIBREF8 a modification of CTC loss was proposed referred as Auto SeGmentation criterion ASG loss which does not use blank symbols Instead of using blank a simple transition probability model for an output symbols is introduced This leads to a significant simplification and speedup of computations Moreover the improved recognition results compared to basic CTC loss were obtained DeepSpeech BIBREF9 developed by Baidu Inc was one of the first systems that demonstrated an effectiveness of CTCbased speech recognition in LVCSR tasks Being trained on 2300 hours of English Conversational Telephone Speech data it demonstrated stateoftheart results on Hub500 evaluation set Research in this direction continued and resulted in DeepSpeech2 architecture BIBREF10 composed of both convolutional and recurrent layers This system demonstrates improved accuracy of recognition of both English and Mandarin speech Another successful example of applying CTC to LVCSR tasks is EESEN system BIBREF11 It integrates an RNNbased model trained with CTC criterion to the conventional WFSTbased decoder from the Kaldi toolkit BIBREF12 The paper BIBREF13 shows that endtoend systems may be successfully built from convolutional layers only instead of recurrent ones It was demonstrated that using Gated Convolutional Units GLUCNNs and training with ASGloss leads to the stateoftheart results on the LibriSpeech database 960 hours of training data Recently a new modification of DeepSpeech2 architecture was proposed in BIBREF14 Several lower convolutional layers were replaced with a deep residual network with depthwise separable convolutions This modification along with using strong regularization and data augmentation techniques leads to the results close to DeepSpeech2 in spite of significantly lower amount of data used for training Indeed one of the models was trained with only 80 hours of speech data which were augmented with noisy and speedperturbed versions of original data These results suggest that CTC can be successfully applied for the training of ASR systems for lowresource languages in particular for those included in Babel research program the amount of training data for them is normally 40 to 80 hours of speech Currently Babel corpus contains data for more than 20 languages and for most of them quite good traditional ASR system were built BIBREF15 BIBREF16 BIBREF17 In order to improve speech recognition accuracy for a given language data from other languages is widely used as well It can be used to train multilingual system via multitask learning or to obtain highlevel multilingual representations usually bottleneck features extracted from a pretrained multilingual network One of the first attempts to build ASR system for lowresource BABEL languages using CTCbased endtoend training was made recently BIBREF18 Despite the obtained results are somewhat worse compared to the stateoftheart traditional systems they still demonstrate that CTCbased approach is viable for building lowresource ASR systems The aim of our work is to investigate some ways to improve the obtained results For all experiments we used conversational speech from IARPA Babel Turkish Language Pack LDC2016S10 This corpus contains about 80 hours of transcribed speech for training and 10 hours for development The dataset is rather small compared to widely used benchmarks for conversational speech English Switchboard corpus 300 hours LDC97S62 and Fisher dataset 2000 hours LDC2004S13 and LDC2005S13 As targets we use 32 
symbols: 29 lowercase characters of the Turkish alphabet BIBREF19, the apostrophe, space, and a special blank character that means "no output". Thus, we do not use any prior linguistic knowledge, and we also avoid the OOV problem, as the system can construct new words directly. All models are trained with the CTC loss. Input features are 40 mel-scaled log filterbank energies (FBanks), computed every 10 ms with a 25 ms window, concatenated with deltas and delta-deltas (120 features per vector). We also tried to use spectrograms and experimented with different normalization techniques. For decoding we used character-based beam search BIBREF20 with a 3-gram language model built with the SRILM package BIBREF21, finding the sequence of characters $c$ that maximizes the following objective BIBREF9:
$$Q(c) = \log P(c \mid x) + \alpha \log P_{LM}(c) + \beta \, \mathrm{wc}(c),$$
where $\alpha$ is the language model weight, $\beta$ is the word insertion penalty, and $\mathrm{wc}(c)$ is the number of words in $c$. For all experiments we used fixed values of $\alpha$ and $\beta$, and performed decoding with beam widths of 100 and 2000, which is not very large compared to the 7000 or more active hypotheses used in traditional WFST decoders (e.g., many Kaldi recipes decode with an even larger number of active hypotheses). To compare with other published results BIBREF18, BIBREF22, we used the Sclite scoring package BIBREF23 to measure the results of decoding with beam width 2000; it takes into account incomplete words and spoken noise in the reference texts and does not penalize the model if it recognizes these pieces incorrectly. We also report the word error rate (WER) of a simple argmax decoder, which takes the label with the maximum output at each time step and then applies the CTC decoding rule (collapse repeated labels and remove blanks). We explored the behavior of different neural network architectures in the case when rather little training data is available. We used multi-layer bidirectional LSTM networks, tried a fully convolutional architecture similar to Wav2Letter BIBREF8, and explored the DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14. The convolutional model consists of 11 convolutional layers with batch normalization after each layer. The DeepSpeech-like architecture consists of a 5-layer residual network with depthwise separable convolutions, followed by a 4-layer bidirectional Gated Recurrent Unit (GRU), as described in BIBREF14. Our baseline bidirectional LSTM is a 6-layer network with 320 hidden units per direction, as in BIBREF18. We also tried a bLSTM that labels every second frame (20 ms), concatenating every first output of the first layer with the second and taking this as the input to the second layer of the model. The performance of our baseline models is shown in Table TABREF6. It is known that the CTC loss is very unstable for long utterances BIBREF3, and shorter utterances are more useful for this task. Some techniques were developed to help the model converge faster, e.g., SortaGrad BIBREF10, which uses shorter segments at the beginning of training. To compute the CTC loss we use all possible alignments between the audio features and the reference text, but only some of the alignments make sense. Traditional DNN-HMM systems also use iterative training: first finding the best alignment and then training the neural network to approximate this alignment. Therefore, we propose the following algorithm to use segmentation during training: compute the CTC alignment, i.e., find the sequence of targets with minimal loss that can be mapped to the real targets by collapsing repeated characters and removing blanks; perform greedy decoding (argmax on each step); find well-recognized words, subject to a threshold that is a hyperparameter (a segment should start and end with a space, and a word is considered well-recognized when its argmax decoding is equal to the computed alignment);
if the word is wellrecognized divide the utterance into 5 segments left segment before space left space the word right space and right segment compute CTCloss for all this segments separately and do backpropagation as usual The results of training with this criterion are shown in Table TABREF13 The proposed criterion doesnt lead to consistent improvement while decoding with large beam width 2000 but shows significant improvement when decoding with smaller beam 100 We plan to further explore utilizing alignment information during training We explored different normalization techniques FBanks with cepstral mean normalization CMN perform better than raw FBanks We found using variance with mean normalization CMVN unnecessary for the task Using deltas and deltadeltas improves model so we used them in other experiments Models trained with spectrogram features converge slower and to worse minimum but the difference when using CMN is not very big compared to FBanks Experiments with varying number of hidden units of 6layer bLSTM models are presented in Table TABREF17 Models with 512 and 768 hidden units are worse than with 320 but model with 1024 hidden units is significantly better than others We also observed that model with 6 layers performs better than others To train our best model we chose the best network from our experiments 6layer bLSTM with 1024 hidden units trained it with Adam optimizer and finetuned with SGD with momentum using exponential learning rate decay The best model trained with speed and volume perturbation BIBREF24 achieved 458 WER which is the best published endtoend result on Babel Turkish dataset using indomain data For comparison WER of model trained using indomain data in BIBREF18 is 531 using 4 additional languages including English Switchboard dataset 487 It is also not far from Kaldi DNNHMM system BIBREF22 with 438 WER In this paper we explored different endtoend architectures in lowresource ASR task using Babel Turkish dataset We considered different ways to improve performance and proposed promising CTCloss modification that uses segmentation during training Our final system achieved 458 WER using indomain data only which is the best published result for Turkish endtoend systems Our work also shows than welltuned endtoend system can achieve results very close to traditional DNNHMM systems even for lowresource languages In future work we plan to further investigate different loss modifications GramCTC ASG and try to use RNNTransducers and multitask learning This work was financially supported by the Ministry of Education and Science of the Russian Federation Contract 14575210132 IDRFMEFI57517X0132 </s>
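As a concrete illustration of the decoding described in this paper, here is a minimal Python sketch of greedy (argmax) CTC decoding and of the score used to rank beam-search hypotheses. It is not the authors' code: the blank index, the language-model interface, and the default $\alpha$ and $\beta$ values are illustrative assumptions (the paper's actual values are not reproduced here).

```python
import numpy as np

BLANK = 0  # index of the CTC blank symbol (assumption)

def greedy_ctc_decode(log_probs, id_to_char):
    """Argmax decoding: pick the best label per frame, collapse repeats, drop blanks."""
    best = np.argmax(log_probs, axis=-1)          # (T,) best label per time step
    chars, prev = [], None
    for label in best:
        if label != prev and label != BLANK:
            chars.append(id_to_char[int(label)])
        prev = label
    return "".join(chars)

def hypothesis_score(acoustic_logprob, text, lm, alpha=1.0, beta=1.0):
    """Score used to rank beam-search hypotheses:
    Q(c) = log P(c|x) + alpha * log P_LM(c) + beta * word_count(c).
    `lm.log_prob` is a stand-in for any language model, e.g. a 3-gram LM."""
    word_count = len(text.split())
    return acoustic_logprob + alpha * lm.log_prob(text) + beta * word_count
```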
<s>[INST] Tag-based Multi-Span Extraction in Reading Comprehension [/INST] The task of reading comprehension where systems must understand a single passage of text well enough to answer arbitrary questions about it has seen significant progress in the last few years With models reaching human performance on the popular SQuAD dataset BIBREF0 and with much of the most popular reading comprehension datasets having been solved BIBREF1 BIBREF2 a new dataset DROP BIBREF3 was recently published DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets in a hope to push the field towards a more comprehensive analysis of paragraphs of text In addition to questions whose answers are a single continuous span from the paragraph text questions of a type already included in SQuAD DROP introduced additional types of questions Among these new types were questions that require simple numerical reasoning ie questions whose answer is the result of a simple arithmetic expression containing numbers from the passage and questions whose answers consist of several spans taken from the paragraph or the question itself what we will denote as multispan questions Of all the existing models that tried to tackle DROP only one model BIBREF4 directly targeted multispan questions in a manner that wasnt just a byproduct of the models overall performance In this paper we propose a new method for tackling multispan questions Our method takes a different path from that of the aforementioned model It does not try to generalize the existing approach for tackling singlespan questions but instead attempts to attack this issue with a new tagbased approach Numericallyaware QANet NAQANet BIBREF3 was the model released with DROP It uses QANET BIBREF5 at the time the bestperforming published model on SQuAD 11 BIBREF0 without data augmentation or pretraining as the encoder On top of QANET NAQANet adds four different output layers which we refer to as heads Each of these heads is designed to tackle a specific question type from DROP where these types where identified by DROPs authors postcreation of the dataset These four heads are 1 Passage span head designed for producing answers that consist of a single span from the passage This head deals with the type of questions already introduced in SQuAD 2 Question span head for answers that consist of a single span from the question 3 Arithmetic head for answers that require adding or subtracting numbers from the passage 4 Count head for answers that require counting and sorting entities from the text In addition to determine which head should be used to predict an answer a 4way categorical variable as per the number of heads is trained We denote this categorical variable as the head predictor Numericallyaware BERT NABERT BIBREF6 introduced two main improvements over NAQANET The first was to replace the QANET encoder with BERT This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics The second improvement was to the arithmetic head consisting of the addition of standard numbers and templates Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head regardless of their occurrence in the passage Templates were an attempt to enrich the heads arithmetic capabilities by adding the ability of doing simple multiplications and divisions between up to three numbers MTMSN BIBREF4 is the first and only model so far that specifically tried to tackle the 
multispan questions of DROP Their approach consisted of two parts The first was to train a dedicated categorical variable to predict the number of spans to extract The second was to generalize the singlespan head method of extracting a span by utilizing the nonmaximum suppression NMS algorithm BIBREF7 to find the most probable set of nonoverlapping spans The number of spans to extract was determined by the aforementioned categorical variable Additionally MTMSN introduced two new other non spanrelated components The first was a new negation head meant to deal with questions deemed as requiring logical negation eg How many percent were not German The second was improving the arithmetic head by using beam search to rerank candidate arithmetic expressions Problem statement Given a pair xPxQ of a passage and a question respectively both comprised of tokens from a vocabulary V we wish to predict an answer y The answer could be either a collection of spans from the input or a number supposedly arrived to by performing arithmetic reasoning on the input We want to estimate pyxPxQ The basic structure of our model is shared with NABERT which in turn is shared with that of NAQANET the model initially released with DROP Consequently meticulously presenting every part of our model would very likely prove redundant As a reasonable compromise we will introduce the shared parts with more brevity and will go into greater detail when presenting our contributions Assume there are K answer heads in the model and their weights denoted by theta For each pair xPxQ we assume a latent categorical random variable zin leftlbrace 1ldots Krightrbrace such that the probability of an answer y is where each component of the mixture corresponds to an output head such that Note that a head is not always capable of producing the correct answer ytextgold for each type of question in which case pleftytextgold vert z xPxQtheta right0 For example the arithmetic head whose output is always a single number cannot possibly produce a correct answer for a multispan question For a multispan question with an answer composed of l spans denote ytextgoldtextit MSleftlbrace ytextgold1 ldots ytextgoldl rightrbrace NAQANET and NABERT had no head capable of outputting correct answers for multispan questions Instead of ignoring them in training both models settled on using semicorrect answers each ytextgold in ytextgoldtextit MS was considered to be a correct answer only in training By deliberately encouraging the model to provide partial answers for multispan questions they were able to improve the corresponding F1 score As our model does have a head with the ability to answer multispan questions correctly we didnt provide the aforementioned semicorrect answers to any of the other heads Otherwise we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multispan questions Before going over the answer heads two additional components should be introduced the summary vectors and the head predictor Summary vectors The summary vectors are two fixedsize learned representations of the question and the passage which serve as an input for some of the heads To create the summary vectors first define mathbf T as BERTs output on a xPxQ input Then let mathbf TP and mathbf TQ be subsequences of T that correspond to xP and xQ respectively Finally let us also define Bdim as the dimension of the tokens in mathbf T eg 768 for BERTbase and have mathbf WP in mathbb 
Rtexttt Bdim and mathbf WQ in mathbb Rtexttt Bdim as learned linear layers Then the summary vectors are computed as Head predictor A learned categorical variable with its number of outcomes equal to the number of answer heads in the model Used to assign probabilities for using each of the heads in prediction where FFN is a twolayer feedforward network with RELU activation Passage span Define textbf WS in mathbb Rtexttt Bdim and textbf WE in mathbb Rtexttt Bdim as learned vectors Then the probabilities of the start and end positions of a passage span are computed as Question span The probabilities of the start and end positions of a question span are computed as where textbf etextbf TQotimes textbf hP repeats textbf hP for each component of textbf TQ Count Counting is treated as a multiclass prediction problem with the numbers 09 as possible labels The label probabilities are computed as Arithmetic As in NAQNET this head obtains all of the numbers from the passage and assigns a plus minus or zero ignore for each number As BERT uses wordpiece tokenization some numbers are broken up into multiple tokens Following NABERT we chose to represent each number by its first wordpiece That is if textbf Ni is the set of tokens corresponding to the itextth number we define a number representation as textbf hiN textbf Ni0 The selection of the sign for each number is a multiclass prediction problem with options lbrace 0 rbrace and the probabilities for the signs are given by As for NABERTs two additional arithmetic features we decided on using only the standard numbers as the benefits from using templates were deemed inconclusive Note that unlike the singlespan heads which are related to our introduction of a multispan head the arithmetic and count heads were not intended to play a significant role in our work We didnt aim to improve results on these types of questions perhaps only as a byproduct of improving the general reading comprehension ability of our model A subset of questions that wasnt directly dealt with by the base models NAQANET NABERT is questions that have an answer which is composed of multiple noncontinuous spans We suggest a head that will be able to deal with both singlespan and multispan questions To model an answer which is a collection of spans the multispan head uses the mathtt BIO tagging format BIBREF8 mathtt B is used to mark the beginning of a span mathtt I is used to mark the inside of a span and mathtt O is used to mark tokens not included in a span In this way we get a sequence of chunks that can be decoded to a final answer a collection of spans As words are broken up by the wordpiece tokenization for BERT we decided on only considering the representation of the first subtoken of the word to tag following the NER task from BIBREF2 For the ith token of an input the probability to be assigned a texttag in leftlbrace mathtt Bmathtt Imathtt O rightrbrace is computed as To train our model we try to maximize the loglikelihood of the correct answer pytextgoldxPxQtheta as defined in Section SECREF2 If no head is capable of predicting the gold answer the sample is skipped We enumerate over every answer head zin leftlbrace textit PS textit QS textit C textit A textit MSrightrbrace Passage Span Question Span Count Arithmetic MultiSpan to compute each of the objectives addends Note that we are in a weakly supervised setup the answer type is not given and neither is the correct arithmetic expression required for deriving some answers Therefore it is possible that ytextgold could be 
derived by more than one way even from the same head with no indication of which is the correct one We use the weakly supervised training method used in NABERT and NAQANET Based on BIBREF9 for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood For a datapoint lefty xP xQ right let chi z be the set of all possible ways to get y for answer head zin leftlbrace textit PS textit QS textit C textit A textit MSrightrbrace Then as in NABERT we have Finally for the arithmetic head let mu be the set of all the standard numbers and the numbers from the passage and let mathbf chi textit A be the set of correct sign assignments to these numbers Then we have Denote by chi textit MS the set of correct tag sequences If the concatenation of a question and a passage is m tokens long then denote a correct tag sequence as lefttexttag1ldots texttagmright We approximate the likelihood of a tag sequence by assuming independence between the sequences positions and multiplying the likelihoods of all the correct tags in the sequence Then we have Since a given multispan answer is a collection of spans it is required to obtain its matching tag sequences in order to compute the training objective In what we consider to be a correct tag sequence each answer span will be marked at least once Due to the weakly supervised setup we consider all the questionpassage spans that match the answer spans as being correct To illustrate consider the following simple example Given the text X Y Z Z and the correct multispan answer Y Z there are three correct tag sequences mathtt OBBBquad mathtt OBBOquad mathtt OBOB The number of correct tag sequences can be expressed by where s is the number of spans in the answer and i is the number of times the itextth span appears in the text For questions with a reasonable amount of correct tag sequences we generate all of them before the training starts However there is a small group of questions for which the amount of such sequences is between 10000 and 100000000 too many to generate and train on In such cases inspired by BIBREF9 instead of just using an arbitrary subset of the correct sequences we use beam search to generate the topk predictions of the training model and then filter out the incorrect sequences Compared to using an arbitrary subset using these sequences causes the optimization to be done with respect to answers more compatible with the model If no correct tag sequences were predicted within the topk we use the tag sequence that has all of the answer spans marked Based on the outputs textbf pitexttagi we would like to predict the most likely sequence given the mathtt BIO constraints Denote textit validSeqs as the set of all mathtt BIO sequences of length m that are valid according to the rules specified in Section SECREF5 The mathtt BIO tag sequence to predict is then We considered the following approaches A natural candidate for getting the most likely sequence is Viterbi decoding BIBREF10 with transition probabilities learned by a mathtt BIO constrained Conditional Random Field CRF BIBREF11 However further inspection of our sequences properties reveals that such a computational effort is probably not necessary as explained in following paragraphs Due to our use of mathtt BIO tags and their constraints observe that past tag predictions only affect future tag predictions from the last mathtt B prediction and as long as the best tag to predict is mathtt I Considering the frequency and length of the correct spans in the 
question and the passage effectively theres no effect of past sequences positions on future ones other than a very few positions ahead Together with the fact that at each prediction step there are no more than 3 tags to consider it means using beam search to get the most likely sequence is very reasonable and even allows nearoptimal results with small beam width values Notice that greedy tagging does not enforce the mathtt BIO constraints However since the multispan heads training objective adheres to the mathtt BIO constraints via being given the correct tag sequences we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well In case there are violations their amendment is required postprediction Albeit faster greedy tagging resulted in a small performance hit as seen in Table TABREF26 We tokenize the passage question and all answer texts using the BERT uncased wordpiece tokenizer from huggingface The tokenization resulting from each xPxQ input pair is truncated at 512 tokens so it can be fed to BERT as an input However before tokenizing the dataset texts we perform additional preprocessing as listed below The raw dataset included almost a thousand of HTML entities that did not get parsed properly eg 160 instead of a simple space In addition we fixed some quirks that were introduced by the original Wikipedia parsing method For example when encountering a reference to an external source that included a specific page from that reference the original parser ended up introducing a redundant PAGE NUMBER into the parsed text Although we previously stated that we arent focusing on improving arithmetic performance while analyzing the training process we encountered two arithmeticrelated issues that could be resolved rather quickly a precision issue and a number extraction issue Regarding precision we noticed that while either generating expressions for the arithmetic head or using the arithmetic head to predict a numeric answer the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations For example 58 66 123999 instead of 124 This issue has caused a significant performance hit of about 15 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places assuming that no answer requires a greater precision Regarding number extraction we noticed that some numeric entities required in order to produce a correct answer werent being extracted from the passage Examples include ordinals 121st 189th and some per units 15807km2 105095month The training dataset contains multispan questions with answers that are clearly incorrect with examples shown in Table TABREF22 In order to mitigate this we applied an answercleaning technique using a pretrained Named Entity Recognition NER model BIBREF12 in the following manner 1 Predefine question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions 2 For a given answer of a filtered question remove any span that does not contain at least one token of the expected type where the types are determined by applying the NER model on the passage For example if a question starts with who scored we expect that any valid span will include a person entity mathtt PER By applying such rules we discovered that at least 3 of the multispan questions in the training dataset included incorrect spans As our analysis of prefixes wasnt exhaustive we believe that this method could yield 
further gains Table TABREF22 shows a few of our cleaning method results where we perfectly clean the first two questions and partially clean a third question The starting point for our implementation was the NABERT model which in turn was based on allenais NAQANET Our implementation can be found on GitHub All three models utilize the allennlp framework The pretrained BERT models were supplied by huggingface For our base model we used bertbaseuncased For our large models we used the standard bertlargeuncasedwholewordmasking and the squad finetuned bertlargeuncased wholewordmaskingfinetunedsquad Due to limited computational resources we did not perform any hyperparameter searching We preferred to focus our efforts on the ablation studies in hope to gain further insights on the effect of the components that we ourselves introduced For ease of performance comparison we followed NABERTs training settings we used the BERT Adam optimizer from huggingface with default settings and a learning rate of 1e5 The only difference was that we used a batch size of 12 We trained our base model for 20 epochs For the large models we used a batch size of 3 with a learning rate of 5e6 and trained for 5 epochs except for the model without the singlespan heads that was trained with a batch size of 2 for 7 epochs F1 was used as our validation metric All models were trained on a single GPU with 1216GB of memory Table TABREF24 shows the results on DROPs development set Compared to our base models our large models exhibit a substantial improvement across all metrics We can see that our base model surpasses the NABERT baseline in every metric The major improvement in multispan performance was expected as our multispan head was introduced specifically to tackle this type of questions For the other types most of the improvement came from better preprocessing A more detailed discussion could be found in Section SECREF36 Notice that different BERTlarge models were used so the comparison is less direct Overall our large models exhibits similar results to those of MTMSNlarge For multispan questions we achieve a significantly better performance While a breakdown of metrics was only available for MTMSNlarge notice that even when comparing these metrics to our base model we still achieve a 122 absolute improvement in EM and a 23 improvement in F1 All that while keeping in mind we compare a base model to a large model for reference note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1 Our best model largesquad exhibits a huge improvement of 297 in EM and 151 in F1 compared to MTMSNlarge When comparing singlespan performance our best model exhibits slightly better results but it should be noted that it retains the singlespan heads from NABERT while in MTMSN they have one head to predict both singlespan and multispan answers For a fairer comparison we trained our model with the singlespan heads removed where our multispan head remained the only head aimed for handling span questions With this nosinglespanheads setting while our multispan performance even improved a bit our singlespan performance suffered a slight drop ending up trailing by 08 in EM and 06 in F1 compared to MTMSN Therefore it could prove beneficial to try and analyze the reasons behind each models ours and MTMSN relative advantages and perhaps try to combine them into a more holistic approach of tackling span questions Table TABREF25 shows the results on DROPs test set with our model being the best overall as of the time of writing and 
not just on multispan questions In order to analyze the effect of each of our changes we conduct ablation studies on the development set depicted in Table TABREF26 Not using the simple preprocessing from Section SECREF17 resulted in a 25 point decrease in both EM and F1 The numeric questions were the most affected with their performance dropping by 35 points Given that number questions make up about 61 of the dataset we can deduce that our improved number handling is responsible for about a 21 point gain while the rest could be attributed to the improved Wikipedia parsing Although NER span cleaning Section SECREF23 affected only 3 of the multispan questions it provided a solid improvement of 54 EM in multispan questions and 15 EM in singlespan questions The singlespan improvement is probably due to the combination of better multispan head learning as a result of fixing multispan questions and the fact that the multispan head can answer singlespan questions as well Not using the singlespan heads results in a slight drop in multispan performance and a noticeable drop in singlespan performance However when performing the same comparison between our large models see Table TABREF24 this performance gap becomes significantly smaller As expected not using the multispan head causes the multispan performance to plummet Note that for this ablation test the singlespan heads were permitted to train on multispan questions Compared to using greedy decoding in the prediction of multispan questions using beam search results in a small improvement We used a beam width of 5 and didnt perform extensive tuning of the beam width In this work we introduced a new approach for tackling multispan questions in reading comprehension datasets This approach is based on individually tagging each token with a categorical tag relying on the tokens contextual representation to bridge the information gap resulting from the tokens being tagged individually First we show that integrating this new approach into an existing model NABERT does not hinder performance on other question types while substantially improving the results on multispan questions Later we compare our results to the current stateoftheart on multispan questions We show that our model has a clear advantage in handling multispan questions with a 297 absolute improvement in EM and a 151 absolute improvement in F1 Furthermore we show that our model slightly eclipses the current stateoftheart results on the entire DROP dataset Finally we present some ablation studies analyzing the benefit gained from individual components of our model We believe that combining our tagbased approach for handling multispan questions with current successful techniques for handling singlespan questions could prove beneficial in finding better more holistic ways of tackling span questions in general Currently for each individual span we optimize the average likelihood over all its possible tag sequences see Section SECREF9 A different approach could be not taking each possible tag sequence into account but only the most likely one This could provide the model more flexibility during training and the ability to focus on the more correct tag sequences As mentioned in Section SECREF5 we only considered the representation of the first wordpiece subtoken in our model It would be interesting to see how different approaches to utilize the other subtokens representations in the tagging task affect the results
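As a small illustration of the span extraction step described earlier, the following is a minimal sketch of turning the multispan head's per-token tags into answer spans, including the post-prediction amendment of BIO violations that greedy tagging can produce. The function names, the whitespace joining, and the toy example are illustrative assumptions of this sketch; the actual implementation operates on BERT wordpieces and handles detokenization.

```python
def repair_bio(tags):
    # Amend BIO violations left by greedy tagging: an "I" that does not
    # continue a preceding "B"/"I" is promoted to "B".
    fixed, prev = [], "O"
    for t in tags:
        if t == "I" and prev == "O":
            t = "B"
        fixed.append(t)
        prev = t
    return fixed


def tags_to_spans(tokens, tags):
    # Collect the answer spans encoded by a BIO tag sequence.
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:  # "O" (or a stray "I" with no open span) closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans


tokens = ["Smith", "scored", "twice", "and", "Jones", "once"]
tags = ["B", "O", "O", "O", "I", "O"]  # greedy output violating BIO
print(tags_to_spans(tokens, repair_bio(tags)))  # ['Smith', 'Jones']
```
</s>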
<s>[INST] Transfer Learning Between Related Tasks Using Expected Label Proportions [/INST] Data annotation is a key bottleneck in many data driven algorithms Specifically deep learning models which became a prominent tool in many data driven tasks in recent years require large datasets to work well However many tasks require manual annotations which are relatively hard to obtain at scale An attractive alternative is lightly supervised learning BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 in which the objective function is supplemented by a set of domainspecific softconstraints over the models predictions on unlabeled data For example in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset Label regularization is special case of expectation regularization XR BIBREF0 in which the model is trained to fit the conditional probabilities of labels given features In this work we consider the case of correlated tasks in the sense that knowing the labels for task A provides information on the expected label composition of task B We demonstrate the approach using sentencelevel and aspectlevel sentiment analysis which we use as a running example knowing that a sentence has positive sentiment label task A we can expect that most aspects within this sentence task B will also have positive label While this expectation may be noisy on the individual example level it holds well in aggregate given a set of positivelylabeled sentences we can robustly estimate the proportion of positivelylabeled aspects within this set For example in a random set of positive sentences we expect to find 90 positive aspects while in a set of negative sentences we expect to find 70 negative aspects These proportions can be easily either guessed or estimated from a small set We propose a novel application of the XR framework for transfer learning in this setup We present an algorithm Sec SECREF12 that given a corpus labeled for task A sentencelevel sentiment learns a classifier for performing task B aspectlevel sentiment instead without a direct supervision signal for task B We note that the label information for task A is only used at training time Furthermore due to the stochastic nature of the estimation the task A labels need not be fully accurate allowing us to make use of noisy predictions which are assigned by an automatic classifier Sections SECREF12 and SECREF4 In other words given a mediumsized sentiment corpus with sentencelevel labels and a large collection of unannotated text from the same distribution we can train an accurate aspectlevel sentiment classifier The XR loss allows us to use task A labels for training task B predictors This ability seamlessly integrates into other semisupervised schemes we can use the XR loss on top of a pretrained model to finetune the pretrained representation to the target task and we can also take the model trained using XR loss and plentiful data and finetune it to the target task using the available smallscale annotated data In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pretrained Bertbased model BIBREF9 Finally to make the XR framework applicable to largescale deeplearning setups we propose a stochastic batched approximation procedure Section SECREF19 Source code is available at httpsgithubcomMatanBNXRTransfer An effective way to supplement small annotated datasets is to use lightly supervised learning in which the objective 
function is supplemented by a set of domainspecific softconstraints over the models predictions on unlabeled data Previous work in lightlysupervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 BIBREF3 BIBREF10 BIBREF0 BIBREF11 BIBREF12 BIBREF7 BIBREF13 BIBREF14 BIBREF15 BIBREF16 BIBREF8 or prior knowledge of features label associations BIBREF1 BIBREF17 BIBREF18 BIBREF19 BIBREF20 In the context of NLP BIBREF17 suggested to use distributional similarities of words to train sequence models for partofspeech tagging and a classified ads information extraction task BIBREF19 used background lexical information in terms of wordclass associations to train a sentiment classifier BIBREF21 BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner Expectation Regularization XR BIBREF0 is a lightly supervised learning method in which the model is trained to fit the conditional probabilities of labels given features In the context of NLP XR was used by BIBREF20 to train twitteruser attribute prediction using hundreds of noisy distributional expectations based on census demographics Here we suggest using XR to train a target task aspectlevel sentiment based on the output of a related sourcetask classifier sentencelevel sentiment The main idea of XR is moving from a fully supervised situation in which each datapoint INLINEFORM0 has an associated label INLINEFORM1 to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set Formally let INLINEFORM0 be a set of data points INLINEFORM1 be a set of INLINEFORM2 class labels INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 and let INLINEFORM6 be the label distribution of set INLINEFORM7 For example INLINEFORM8 would indicate that 70 of data points in INLINEFORM9 are expected to have class 0 20 are expected to have class 1 and 10 are expected to have class 2 Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event the conditional probability of INLINEFORM16 given INLINEFORM17 A typically objective when training on fully labeled data of INLINEFORM0 pairs is to maximize likelihood of labeled data using the cross entropy loss INLINEFORM1 Instead in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 for all INLINEFORM4 As counting the number of predicted class labels over a set INLINEFORM0 leads to a nondifferentiable objective BIBREF0 suggest to relax it and use instead the models posterior distribution INLINEFORM1 over the set DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 Then we would like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close BIBREF0 suggest to use KLdivergence for this KLdivergence is composed of two parts INLINEFORM6 INLINEFORM7 Since INLINEFORM0 is constant we only need to minimize INLINEFORM1 therefore the loss function becomes DISPLAYFORM0 Notice that computing INLINEFORM0 requires summation over INLINEFORM1 for the entire set INLINEFORM2 which can be prohibitive We 
present batched approximation Section SECREF19 to overcome this BIBREF0 find that XR might find a degenerate solution For example in a three class classification task where INLINEFORM0 it might find a solution such that INLINEFORM1 for every instance as a result every instance will be classified the same To avoid this BIBREF0 suggest to penalize flat distributions by using a temperature coefficient T likewise DISPLAYFORM0 Where z is a feature vector and W and b are the linear classifier parameters In the aspectbased sentiment classification ABSC task we are given a sentence and an aspect and need to determine the sentiment that is expressed towards the aspect For example the sentence Excellent food although the interior could use some help has two aspects food and interior a positive sentiment is expressed about the food but a negative sentiment is expressed about the interior A sentence INLINEFORM0 may contain 0 or more aspects INLINEFORM1 where each aspect corresponds to a subsequence of the original sentence and has an associated sentiment label Neg Pos or Neu Concretely we follow the task definition in the SemEval2015 and SemEval2016 shared tasks BIBREF23 BIBREF24 in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects While sentencelevel sentiment labels are relatively easy to obtain aspectlevel annotation are much more scarce as demonstrated in the small datasets of the SemEval shared tasks t Inputs A dataset INLINEFORM0 batch size INLINEFORM1 differentiable classifier INLINEFORM2 H not converged INLINEFORM3 random INLINEFORM4 INLINEFORM5 randomchoice INLINEFORM6 INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 Compute loss INLINEFORM12 eq 4 Compute gradients and update INLINEFORM13 INLINEFORM14 Stochastic Batched XR Consider two classification tasks over a shared input space a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 which are related through a conditional distribution INLINEFORM6 In other words a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 we expect to see a target label distribution of INLINEFORM11 Given a large unlabeled dataset INLINEFORM0 a small labeled dataset for the target task INLINEFORM1 classifier INLINEFORM2 or sufficient training data to train one for the source task we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task This can be achieved using the following procedure Apply INLINEFORM0 to INLINEFORM1 resulting in a noisy sourceside labels INLINEFORM2 for the target task Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2 where INLINEFORM0 is a counting function over INLINEFORM1 Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 INLINEFORM7 Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss In words by using XR training we use the expected label proportions over the target task given predicted labels of the source task to train a targetclass classifier BIBREF0 and following work take the base classifier INLINEFORM0 to be a logistic regression classifier for which they manually derive gradients for the XR loss 
and train with LBFGs BIBREF25 However nothing precludes us from using an arbitrary neural network instead as long as it culminates in a softmax layer One complicating factor is that the computation of INLINEFORM0 in equation EQREF5 requires a summation over INLINEFORM1 for the entire set INLINEFORM2 which in our setup may contain hundreds of thousands of examples making gradient computation and optimization impractical We instead proposed a stochastic batched approximation in which instead of requiring that the full constraint set INLINEFORM3 will match the expected label posterior distribution we require that sufficiently large random subsets of it will match the distribution At each training step we compute the loss and update the gradient with respect to a different random subset Specifically in each training step we sample a random pair INLINEFORM4 sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 and compute the local XR loss of set INLINEFORM8 DISPLAYFORM0 where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations EQREF5 2 The stochastic batched XR training algorithm is given in Algorithm SECREF12 For large enough INLINEFORM3 the expected label distribution of the subset is the same as that of the complete set We demonstrate the procedure given above by training Aspectbased Sentiment Classifier ABSC using sentencelevel sentiment signals We observe that while the sentencelevel sentiment does not determine the sentiment of individual aspects a positive sentence may contain negative remarks about some aspects it is very predictive of the proportion of sentiment labels of the fragments within a sentence Positively labeled sentences are likely to have more positive aspects and fewer negative ones and viceversa for negativelylabeled sentences While these proportions may vary on the individual sentence level we expect them to be stable when aggregating fragments from several sentences when considering a large enough sample of fragments that all come from positively labeled sentences we expect the different samples to have roughly similar label proportions to each other This situation is idealy suited for performing XR training as described in section SECREF12 The application to ABSC is almost straightforward but is complicated a bit by the decomposition of sentences into fragments each sentence level decision now corresponds to multiple fragmentlevel decisions Thus we apply the sentencelevel task A classifier INLINEFORM0 on the aspectlevel corpus INLINEFORM1 by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments resulting in fragmentlevel labeling Similarly when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level but the sets INLINEFORM4 are composed of fragments not sentences INLINEFORM5 We then apply algorithm SECREF12 as is at each step of training we sample a source label INLINEFORM0 PosNegNeu INLINEFORM1 sample INLINEFORM2 fragments from INLINEFORM3 and use the XR loss to fit the expected fragmentlabel proportions over these INLINEFORM4 fragments to INLINEFORM5 Figure FIGREF21 illustrates the procedure We model the ABSC problem by associating each sentenceaspect pair with a sentencefragment and constructing a neural classifier from fragments to sentiment labels We heuristically decompose a sentence into fragments We use the same BiLSTM based neural architecture for both sentence classification and fragment 
classification We now describe the procedure we use to associate a sentence fragment with each sentenceaspect pairs The shared tasks data associates each aspect with a pivotphrase INLINEFORM0 where pivot phrase INLINEFORM1 is defined as a predetermined sequence of words that is contained within the sentence For a sentence INLINEFORM2 a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions The node governs the desired pivot phrase INLINEFORM0 The node governs either a verb VB VBD VBN VBG VBP VBZ or an adjective JJ JJR JJS which is different than any INLINEFORM0 The node governs a minimal number of pivot phrases from INLINEFORM0 ideally only INLINEFORM1 We then select the highest node in the tree that satisfies all conditions The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 The decomposition procedure is demonstrated in Figure FIGREF22 When aspectlevel information is given we take the pivotphrases to be the requested aspects When aspectlevel information is not available we take each noun in the sentence to be a pivotphrase Our classification model is a simple 1layer BiLSTM encoder a concatenation of the last states of a forward and a backward running LSTMs followed by a linearpredictor The encoder is fed either a complete sentence or a sentence fragment Table TABREF44 compares these baselines to three XR conditions The first condition BiLSTMXRDev performs XR training on the automaticallylabeled sentencelevel dataset The only access it has to aspectlevel annotation is for estimating the proportions of labels for each sentencelevel label which is done based on the validation set of SemEval2015 ie 20 of the train set The XR setting is very effective without using any intask data this model already surpasses all other models both supervised and semisupervised except for the BIBREF35 BIBREF34 models which achieve higher F1 scores We note that in contrast to XR the competing models have complete access to the supervised aspectbased labels The second condition BiLSTMXR is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspectbased training set the classifier still does not have direct access to the labels beyond the aggregate proportion information This improves results further showing the importance of accurately estimating the proportions Finally in BiLSTMXRFinetuning we follow the XR training with fully supervised finetuning on the small labeled dataset using the attentionbased model of BIBREF35 This achieves the best results and surpasses also the semisupervised BIBREF35 baseline on accuracy and matching it on F1 We report significance tests for the robustness of the method under random parameter initialization Our reported numbers are averaged over five random initialization Since the datasets are unbalanced wrt the label distribution we report both accuracy and macroF1 The XR training is also more stable than the other semisupervised baselines achieving substantially lower standard deviations across different runs In each experiment in this section we estimate the proportions using the SemEval2015 train set How does the XR training scale with the amount of unlabeled data Figure FIGREF54 a shows the macroF1 scores on the entire SemEval2016 dataset with different unlabeled corpus sizes measured in number of sentences An unannotated corpus of INLINEFORM0 sentences is 
sufficient to surpass the results of the INLINEFORM1 sentencelevel trained classifier and more unannotated data further improves the results Our method requires a sentence level classifier INLINEFORM0 to label both the targettask corpus and the unlabeled corpus How does the quality of this classifier affect the overall XR training We vary the amount of supervision used to train INLINEFORM1 from 0 sentences assigning the same label to all sentences to 100 1000 5000 and 10000 sentences We again measure macroF1 on the entire SemEval 2016 corpus The results in Figure FIGREF54 b show that when using the prior distributions of aspects 0 the model struggles to learn from this signal it learns mostly to predict the majority class and hence reaches very low F1 scores of 3528 The more data given to the sentence level classifier the better the potential results will be when training with our method using the classifier labels with classifiers trained on 100 1000 5000 and 10000 labeled sentences we get F1 scores of 5381 5884 6181 and 6558 respectively Improvements in the source task classifiers quality clearly contribute to the target task accuracy The Stochastic Batched XR algorithm Algorithm SECREF12 samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation How does the size of INLINEFORM1 affect the results We use INLINEFORM2 fragments in our main experiments but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 using batches that are composed of fragments of 5 25 100 450 1000 and 4500 sentences The results are shown in Figure FIGREF54 c Setting INLINEFORM6 results in low scores Setting INLINEFORM7 yields a better F1 score but with high variance across runs For INLINEFORM8 fragments the results begin to stabilize we also see a slight decrease in F1 scores with larger batch sizes We attribute this drop despite having better estimation of the gradients to the general trend of larger batch sizes being harder to train with stochastic gradient methods The XR training can also be performed over pretrained representations We experiment with two pretraining methods 1 pretraining by training the BiLSTM model to predict the noisy sentencelevel predictions 2 using the pretrained Bert representation BIBREF9 For 1 we compare the effect of pretraining on unlabeled corpora of sizes of INLINEFORM0 INLINEFORM1 and INLINEFORM2 sentences Results in Figure FIGREF54 d show that this form of pretraining is effective for smaller unlabeled corpora but evens out for larger ones For the Bert experiments we experiment with the Bertbase model with INLINEFORM1 sets 30 epochs for XR training or sentence level finetuning and 15 epochs for aspect based finetuning on each training method we evaluated the model on the dev set after each epoch and the best model was chosen We compare the following setups Bert INLINEFORM0 Aspect Based Finetuning pretrained bert model finetuned to the aspect based task Bert INLINEFORM0 A pretrained bert model finetuned to the sentence level task on the INLINEFORM1 sentences and tested by predicting fragmentlevel sentiment Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning pretrained bert model finetuned to the sentence level task and finetuned again to the aspect based one Bert INLINEFORM0 XR pretrained bert model followed by XR training using our method Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning
pretrained bert followed by XR training and then finetuned to the aspect level task The results are presented in Table TABREF55 As before aspectbased finetuning is beneficial for both SemEval16 and SemEval15 Training a BiLSTM with XR surpasses pretrained bert models and using XR training on top of the pretrained Bert models substantially increases the results even further We presented a transfer learning method based on expectation regularization XR and demonstrated its effectiveness for training aspectbased sentiment classifiers using sentencelevel supervision The method achieves stateoftheart results for the task and is also effective for improving on top of a strong pretrained Bert model The proposed method provides an additional dataefficient tool in the modeling arsenal which can be applied on its own or together with another training method in situations where there is a conditional relation between the labels of a source task for which we have supervision and a target task for which we dont While we demonstrated the approach on the sentiment domain the required conditional dependence between task labels is present in many situations Other possible applications of the method include training language identification of tweets given geolocation supervision knowing the geographical region gives a prior on languages spoken training predictors for renal failure from textual medical records given a classifier for diabetes there is a strong correlation between the two conditions training a political affiliation classifier from social media tweets based on agegroup classifiers zipcode information or socialstatus classifiers there are known correlations between all of these and political affiliation training hatespeech detection based on emotion detection and so on The work was supported in part by The Israeli Science Foundation grant number 155515
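To make the stochastic batched XR training described above concrete, here is a minimal PyTorch sketch; it is not the authors' released code. The names batched_xr_loss, xr_training, fragment_sets and proportions are illustrative, the fragments are assumed to be pre-encoded fixed-size tensors, and applying the temperature by dividing the logits is one common realization of the flat-distribution penalty rather than necessarily the exact form used in the paper.

```python
import random
import torch
import torch.nn.functional as F


def batched_xr_loss(model, fragments, target_proportions, temperature=1.0):
    # One stochastic-batched XR term: make the model's aggregate label
    # posterior over a sampled fragment set match the expected proportions.
    # target_proportions: tensor of shape (num_classes,), e.g. P(aspect label | source label).
    logits = model(fragments)                                 # (batch, num_classes)
    q = F.softmax(logits / temperature, dim=-1).mean(dim=0)   # aggregate posterior over the set
    # KL(p || q) minus the constant entropy of p reduces to the cross entropy below.
    return -(target_proportions * torch.log(q + 1e-12)).sum()


def xr_training(model, optimizer, fragment_sets, proportions, k=450, steps=1000):
    # fragment_sets[j]: list of pre-encoded fragment tensors whose sentence got source label j.
    # proportions[j]:   expected target-label distribution for that source label.
    labels = list(fragment_sets)
    for _ in range(steps):
        j = random.choice(labels)                   # sample a source label
        batch = random.sample(fragment_sets[j], k)  # sample k fragments from U_j
        loss = batched_xr_loss(model, torch.stack(batch), proportions[j])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

A larger k gives a better estimate of the aggregate posterior at the cost of memory, in line with the batch-size analysis reported above.
</s>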
<s>[INST] The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection [/INST] While producing a sentence humans combine various types of knowledge to produce fluent outputvarious shades of meaning are expressed through word selection and tone while the language is made to conform to underlying structural rules via syntax and morphology Native speakers are often quick to identify disfluency even if the meaning of a sentence is mostly clear Automatic systems must also consider these constraints when constructing or processing language Strong enough language models can often reconstruct common syntactic structures but are insufficient to properly model morphology Many languages implement large inflectional paradigms that mark both function and content words with a varying levels of morphosyntactic information For instance Romanian verb forms inflect for person number tense mood and voice meanwhile Archi verbs can take on thousands of forms BIBREF0 Such complex paradigms produce large inventories of words all of which must be producible by a realistic system even though a large percentage of them will never be observed over billions of lines of linguistic input Compounding the issue good inflectional systems often require large amounts of supervised training data which is infeasible in many of the worlds languages This years shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks The first task asks participants to create morphological inflectors for a large number of underresourced languages encouraging systems that use highlyresourced related languages as a crosslingual training signal The second task welcomes submissions that invert this operation in light of contextual information Given an unannotated sentence lemmatize each word and tag them with a morphosyntactic description Both of these tasks extend upon previous morphological competitions and the best submitted systems now represent the state of the art in their respective tasks Annotated resources for the worlds languages are not distributed equallysome languages simply have more as they have more native speakers willing and able to annotate more data We explore how to transfer knowledge from highresource languages that are genetically related to lowresource languages The first task iterates on last years main task morphological inflection BIBREF1 Instead of giving some number of training examples in the language of interest we provided only a limited number in that language To accompany it we provided a larger number of examples in either a related or unrelated language Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input The goal thus is to perform morphological inflection in the lowresource language having hopefully exploited some similarity to the highresource language Models which perform well here can aid downstream tasks like machine translation in lowresource settings All datasets were resampled from UniMorph which makes them distinct from past years The mode of the task is inspired by BIBREF2 who finetune a model pretrained on a highresource language to perform well on a lowresource language We do not though require that models be trained by finetuning Joint modeling or any number of methods may be explored instead The model will have access to typelevel data in a lowresource target language plus 
a highresource source language We give an example here of Asturian as the target language with Spanish as the source language We score the output of each system in terms of its predictions exactmatch accuracy and the average Levenshtein distance between the predictions and their corresponding true forms Although inflection of words in a contextagnostic manner is a useful evaluation of the morphological quality of a system people do not learn morphology in isolation In 2018 the second task of the CoNLLSIGMORPHON Shared Task BIBREF1 required submitting systems to complete an inflectional cloze task BIBREF3 given only the sentential context and the desired lemma an example of the problem is given in the following lines A successful system would predict the plural form dogs Likewise a Spanish word form ayuda may be a feminine noun or a thirdperson verb form which must be disambiguated by context This years task extends the second task from last year Rather than inflect a single word in context the task is to provide a complete morphological tagging of a sentence for each word a successful system will need to lemmatize and tag it with a morphsyntactic description MSD width Context is criticaldepending on the sentence identical word forms realize a large number of potential inflectional categories which will in turn influence lemmatization decisions If the sentence were instead The barking dogs kept us up all night barking is now an adjective and its lemma is also barking We presented data in 100 language pairs spanning 79 unique languages Data for all but four languages Basque Kurmanji Murrinhpatha and Sorani are extracted from English Wiktionary a large multilingual crowdsourced dictionary with morphological paradigms for many lemmata 20 of the 100 language pairs are either distantly related or unrelated this allows speculation into the relative importance of data quantity and linguistic relatedness For each language the basic data consists of triples of the form lemma feature bundle inflected form as in tabsub1data The first feature in the bundle always specifies the core part of speech eg verb For each language pair separate files contain the high and lowresource training examples All features in the bundle are coded according to the UniMorph Schema a crosslinguistically consistent universal morphological feature set BIBREF8 BIBREF9 For each of the Wiktionary languages Wiktionary provides a number of tables each of which specifies the full inflectional paradigm for a particular lemma As in the previous iteration tables were extracted using a template annotation procedure described in BIBREF10 From each languages collection of paradigms we sampled the training development and test sets as in 2018 Crucially while the data were sampled in the same fashion the datasets are distinct from those used for the 2018 shared task Our first step was to construct probability distributions over the lemma feature bundle inflected form triples in our full dataset For each triple we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language To distribute the counts of an observed form over all the triples that have this token as its form we follow the method used in the previous shared task BIBREF1 training a neural network on unambiguous forms to estimate the distribution over all even ambiguous forms We then sampled 12000 triples without replacement from this distribution The first 100 were taken as training data for lowresource settings The first 10000 were used 
as highresource training sets As these sets are nested the highestcount triples tend to appear in the smaller training sets The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each The final shuffling was performed to ensure that the development set is similar to the test set By contrast the development and test sets tend to contain lowercount triples than the training set We further adopted some changes to increase compatibility Namely we corrected some annotation errors created while scraping Wiktionary for the 2018 task and we standardized Romanian tcedilla and tcomma to tcomma The same was done with scedilla and scomma Our data for task 2 come from the Universal Dependencies treebanks BIBREF11 which provides predefined training development and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships Unlike the 2018 cloze task which used UD data we require no manual data preparation and are able to leverage all 107 monolingual treebanks As is typical data are presented in CoNLLU format although we modify the morphological feature and lemma fields The morphological annotations for the 2019 shared task were converted to the UniMorph schema BIBREF10 according to BIBREF12 who provide a deterministic mapping that increases agreement across languages This also moves the part of speech into the bundle of morphological features We do not attempt to individually correct any errors in the UD source material Further some languages received additional preprocessing In the Finnish data we removed morpheme boundaries that were present in the lemmata eg puhekieli mapsto puhekieli spokenlanguage Russian lemmata in the GSD treebank were presented in all uppercase to match the 2018 shared task we lowercased these In development and test data all fields except for form and index within the sentence were struck We include four neural sequencetosequence models mapping lemma into inflected word forms soft attention BIBREF13 nonmonotonic hard attention BIBREF14 monotonic hard attention and a variant with offsetbased transition distribution BIBREF15 Neural sequencetosequence models with soft attention BIBREF13 have dominated previous SIGMORPHON shared tasks BIBREF16 BIBREF14 instead models the alignment between characters in the lemma and the inflected word form explicitly with hard attention and learns this alignment and transduction jointly BIBREF15 shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection where the transduction is mostly monotonic The encoder is a biLSTM while the decoder is a lefttoright LSTM All models use multiplicative attention and have roughly the same number of parameters In the model a morphological tag is fed to the decoder along with target character embeddings to guide the decoding During the training of the hard attention model dynamic programming is applied to marginalize all latent alignments exactly BIBREF17 The Lemming model is a loglinear model that performs joint morphological tagging and lemmatization The model is globally normalized with the use of a second order linearchain CRF To efficiently calculate the partition function the choice of lemmata are pruned with the use of preextracted edit trees BIBREF18 This is a stateoftheart neural model that also performs joint morphological tagging and lemmatization but also accounts for the exposure bias with the application of maximum likelihood MLE The 
model stitches the tagger and lemmatizer together with the use of jackknifing BIBREF19 to expose the lemmatizer to the errors made by the tagger model during training The morphological tagger is based on a characterlevel biLSTM embedder that produces the embedding for a word and a wordlevel biLSTM tagger that predicts a morphological tag sequence for each word in the sentence The lemmatizer is a neural sequencetosequence model BIBREF15 that uses the decoded morphological tag sequence from the tagger as an additional attribute The model uses hard monotonic attention instead of standard soft attention along with a dynamic programming based training scheme The SIGMORPHON 2019 shared task received 30 submissions14 for task 1 and 16 for task 2from 23 teams In addition the organizers baseline systems were evaluated Five teams participated in the first Task with a variety of methods aimed at leveraging the crosslingual data to improve system performance The University of Alberta UAlberta performed a focused investigation on four language pairs training cognateprojection systems from external cognate lists Two methods were considered one which trained a highresource neural encoderdecoder and projected the test data into the HRL and one that projected the HRL data into the LRL and trained a combined system Results demonstrated that certain language pairs may be amenable to such methods The Tuebingen University submission Tuebingen aligned source and target to learn a set of editactions with both linear and neural classifiers that independently learned to predict action sequences for each morphological category Adding in the crosslingual data only led to modest gains AXSemantics combined the low and highresource data to train an encoderdecoder seq2seq model optionally also implementing domain adaptation methods to focus later epochs on the target language The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma Secondly in order to reduce the bias of the decoders language model they hallucinate two types of data that encourage common affixes and character copying Simply allowing the model to learn to copy characters for several epochs significantly outperforms the task baseline while further improvements are obtained through finetuning Making use of an adversarial language discriminator cross lingual gains are highlycorrelated to linguistic similarity while augmenting the data with hallucinated forms and multiple related target language further improves the model The system from ITIST also attends separately to tags and lemmas using a gating mechanism to interpolate the importance of the individual attentions By combining the gated dualhead attention with a SparseMax activation function they are able to jointly learn stem and affix modifications improving significantly over the baseline system The relative system performance is described in tabsub2team which shows the average perlanguage accuracy of each system The table reflects the fact that some teams submitted more than one system eg Tuebingen1 Tuebingen2 in the table Nine teams submitted system papers for Task 2 with several interesting modifications to either the baseline or other prior work that led to modest improvements CharlesSaarland achieved the highest overall tagging accuracy by leveraging multilingual BERT embeddings finetuned on a concatenation of all available languages effectively transporting the 
crosslingual objective of Task 1 into Task 2 Lemmas and tags are decoded separately with a joint encoder and separate attention Lemmas are a sequence of editactions while tags are calculated jointly There is no splitting of tags into features tags are atomic CBNU instead lemmatize using a transformer network while performing tagging with a multilayer perceptron with biaffine attention Input words are first lemmatized and then pipelined to the tagger which produces atomic tag sequences ie no splitting of features The team from Istanbul Technical University ITU jointly produces lemmatic editactions and morphological tags via a two level encoder first word embeddings and then context embeddings and separate decoders Their system slightly improves over the baseline lemmatization but significantly improves tagging accuracy The team from the University of Groningen RUG also uses separate decoders for lemmatization and tagging but uses ELMo to initialize the contextual embeddings leading to large gains in performance Furthermore joint training on related languages further improves results CMU approaches tagging differently than the multitask decoding weve seen so far baseline is used for lemmatization Making use of a hierarchical CRF that first predicts POS that is subsequently looped back into the encoder they then seek to predict each feature separately In particular predicting POS separately greatly improves results An attempt to leverage gold typological information led to little gain in the results experiments suggest that the system is already learning the pertinent information The team from Ohio State University OHIOSTATE concentrates on predicting tags the baseline lemmatizer is used for lemmatization To that end they make use of a dual decoder that first predicts features given only the word embedding as input the predictions are fed to a GRU seq2seq which then predicts the sequence of tags The UNT HiLTLing team investigates a lowresource setting of the tagging by using parallel Bible data to learn a translation matrix between English and the target language learning morphological tags through analogy with English The UFALPrague team extends their submission from the UD shared task multilayer LSTM replacing the pretrained embeddings with BERT to great success first in lemmatization 2nd in tagging Although they predict complete tags they use the individual features to regularize the decoder Small gains are also obtained from joining multilingual corpora and ensembling CUNIMalta performs lemmatization as operations over edit actions with LSTM and ReLU Tagging is a bidirectional LSTM augmented by the edit actions ie twostage decoding predicting features separately The Edinburgh system is a characterbased LSTM encoderdecoder with attention implemented in OpenNMT It can be seen as an extension of the contextual lemmatization system Lematus BIBREF20 to include morphological tagging or alternatively as an adaptation of the morphological reinflection system MED BIBREF21 to incorporate context and perform analysis rather than reinflection Like these systems it uses a completely generic encoderdecoder architecture with no specific adaptation to the morphological processing task other than the form of the input In the submitted version of the system the input is split into short chunks corresponding to the target word plus one word of context on either side and the system is trained to output the corresponding lemmas and tags for each threeword chunk Several teams relied on external resources to 
improve their lemmatization and feature analysis Several teams made use of pretrained embeddings CHARLESSAARLAND2 and UFALPRAGUE1 used pretrained contextual embeddings BERT provided by Google BIBREF22 CBNU1 used a mix of pretrained embeddings from the CoNLL 2017 shared task and fastText Further some teams trained their own embeddings to aid performance In general the application of typology to natural language processing BIBREF23 BIBREF24 provides an interesting avenue for multilinguality Further our shared task was designed to only leverage a single helper language though many may exist with lexical or morphological overlap with the target language Techniques like those of BIBREF25 may aid in designing universal inflection architectures Neither task this year included unannotated monolingual corpora Using such data is wellmotivated from an L1learning point of view and may affect the performance of lowresource data settings In the case of inflection an interesting future topic could involve departing from orthographic representation and using more IPAlike representations ie transductions over pronunciations Different languages in particular those with idiosyncratic orthographies may offer new challenges in this respect Only one team tried to learn inflection in a multilingual settingie to use all training data to train one model Such transfer learning is an interesting avenue of future research but evaluation could be difficult Whether any crosslanguage transfer is actually being learned vs whether having more data better biases the networks to copy strings is an evaluation step to disentangle Creating new data sets that accurately reflect learner exposure whether L1 or L2 is also an important consideration in the design of future shared tasks One pertinent facet of this is information about inflectional categoriesoften the inflectional information is insufficiently prescribed by the lemma as with the Romanian verbal inflection classes or nominal gender in German As we move toward multilingual models for morphology it becomes important to understand which representations are critical or irrelevant for adapting to new languages this may be probed in the style of BIBREF27 and it can be used as a first step toward designing systems that avoid catastrophic forgetting as they learn to inflect new languages BIBREF28 Future directions for Task 2 include exploring crosslingual analysisin stride with both Task 1 and BIBREF29and leveraging these analyses in downstream tasks The SIGMORPHON 2019 shared task provided a typelevel evaluation on 100 language pairs in 79 languages and a tokenlevel evaluation on 107 treebanks in 66 languages of systems for inflection and analysis On task 1 lowresource inflection with crosslingual transfer 14 systems were submitted while on task 2 lemmatization and morphological feature analysis 16 systems were submitted All used neural network models completing a trend in past years shared tasks and other recent work on morphology In task 1 gains from crosslingual training were generally modest with gains positively correlating with the linguistic similarity of the two languages In the second task several methods were implemented by multiple groups with the most successful systems implementing variations of multiheaded attention multilevel encoding multiple decoders and ELMo and BERT contextual embeddings We have released the training development and test sets and expect these datasets to provide a useful benchmark for future research into learning of inflectional 
morphology and stringtostring transduction MS has received funding from the European Research Council ERC under the European Unions Horizon 2020 research and innovation programme grant agreement No 771113
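As a closing illustration of the Task 1 scoring described earlier (exact-match accuracy and average Levenshtein distance between predicted and true forms), here is a minimal sketch; it assumes plain string comparison with no normalization, which may differ from the official evaluation script, and the function names are illustrative.

```python
def levenshtein(a, b):
    # Standard edit distance by dynamic programming over two rows.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def score(predictions, references):
    # Exact-match accuracy and average Levenshtein distance over paired forms.
    n = len(references)
    exact = sum(p == r for p, r in zip(predictions, references)) / n
    avg_lev = sum(levenshtein(p, r) for p, r in zip(predictions, references)) / n
    return exact, avg_lev


print(score(["perros", "gatos"], ["perros", "gatas"]))  # (0.5, 0.5)
```
</s>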
<s>[INST] Hierarchical Multi-Task Natural Language Understanding for Cross-domain Conversational AI: HERMIT NLU [/INST] Research in Conversational AI also known as Spoken Dialogue Systems has applications ranging from home devices to robotics and has a growing presence in industry A key problem in realworld Dialogue Systems is Natural Language Understanding NLU the process of extracting structured representations of meaning from user utterances In fact the effective extraction of semantics is an essential feature being the entry point of any Natural Language interaction system Apart from challenges given by the inherent complexity and ambiguity of human language other challenges arise whenever the NLU has to operate over multiple domains In fact interaction patterns domain and language vary depending on the device the user is interacting with For example chitchatting and instructiongiving for executing an action are different processes in terms of language domain syntax and interaction schemes involved And what if the user combines two interaction domains play some music but first whats the weather tomorrow In this work we present HERMIT a HiERarchical MultITask Natural Language Understanding architecture designed for effective semantic parsing of domainindependent user utterances extracting meaning representations in terms of highlevel intents and framelike semantic structures With respect to previous approaches to NLU for SDS HERMIT stands out for being a crossdomain multitask architecture capable of recognising multiple intentsframes in an utterance HERMIT also shows better performance with respect to current stateoftheart commercial systems Such a novel combination of requirements is discussed below A crossdomain dialogue agent must be able to handle heterogeneous types of conversation such as chitchatting giving directions entertaining and triggering domaintask actions A domainindependent and rich meaning representation is thus required to properly capture the intent of the user Meaning is modelled here through three layers of knowledge dialogue acts frames and frame arguments Frames and arguments can be in turn mapped to domaindependent intents and slots or to Frame Semantics BIBREF0 structures ie semantic frames and frame elements respectively which allow handling of heterogeneous domains and language Deriving such a multilayered meaning representation can be approached through a multitask learning approach Multitask learning has found success in several NLP problems BIBREF1 BIBREF2 especially with the recent rise of Deep Learning Thanks to the possibility of building complex networks handling more tasks at once has been proven to be a successful solution provided that some degree of dependence holds between the tasks Moreover multitask learning allows the use of different datasets to train subparts of the network BIBREF3 Following the same trend HERMIT is a hierarchical multitask neural architecture which is able to deal with the three tasks of tagging dialogue acts framelike structures and their arguments in parallel The network based on selfattention mechanisms seq2seq bidirectional LongShort Term Memory BiLSTM encoders and CRF tagging layers is hierarchical in the sense that information output from earlier layers flows through the network feeding following layers to solve downstream dependent tasks Another degree of complexity in NLU is represented by the granularity of knowledge that can be extracted from an utterance Utterance semantics is often rich and expressive 
approximating meaning to a single user intent is often not enough to convey the required information As opposed to the traditional singledialogue act and singleintent view in previous work BIBREF4 BIBREF5 BIBREF6 HERMIT operates on a meaning representation that is multidialogue act and multiintent In fact it is possible to model an utterances meaning through multiple dialogue acts and intents at the same time For example the user would be able both to request tomorrows weather and listen to hisher favourite music with just a single utterance A further requirement is that for practical application the system should be competitive with stateoftheart we evaluate HERMITs effectiveness by running several empirical investigations We perform a robust test on a publicly available NLUBenchmark NLUBM BIBREF7 containing 25K crossdomain utterances with a conversational agent The results obtained show a performance higher than wellknown offtheshelf tools ie Rasa DialogueFlow LUIS and Watson The contribution of the different network components is then highlighted through an ablation study We also test HERMIT on the smaller RoboticsOriented MUltitask Language UnderStanding ROMULUS corpus annotated with Dialogue Acts and Frame Semantics HERMIT produces promising results for the application in a real scenario Much research on Natural or Spoken depending on the input Language Understanding has been carried out in the area of Spoken Dialogue Systems BIBREF8 where the advent of statistical learning has led to the application of many datadriven approaches BIBREF9 In recent years the rise of deep learning models has further improved the stateoftheart Recurrent Neural Networks RNNs have proven to be particularly successful especially uni and bidirectional LSTMs and Gated Recurrent Units GRUs The use of such deep architectures has also fostered the development of joint classification models of intents and slots Bidirectional GRUs are applied in BIBREF10 where the hidden state of each time step is used for slot tagging in a seq2seq fashion while the final state of the GRU is used for intent classification The application of attention mechanisms in a BiLSTM architecture is investigated in BIBREF5 while the work of BIBREF11 explores the use of memory networks BIBREF12 to exploit encoding of historical user utterances to improve the slotfilling task Seq2seq with selfattention is applied in BIBREF13 where the classified intent is also used to guide a special gated unit that contributes to the slot classification of each token One of the first attempts to jointly detect domains in addition to intentslot tagging is the work of BIBREF4 An utterance syntax is encoded through a Recursive NN and it is used to predict the joined domainintent classes Syntactic features extracted from the same network are used in the perword slot classifier The work of BIBREF6 applies the same idea of BIBREF10 this time using a contextaugmented BiLSTM and performing domainintent classification as a single joint task As in BIBREF11 the history of user utterances is also considered in BIBREF14 in combination with a dialogue context encoder A twolayer hierarchical structure made of a combination of BiLSTM and BiGRU is used for joint classification of domains and intents together with slot tagging BIBREF15 apply multitask learning to the dialogue domain Dialogue state tracking dialogue act and intent classification and slot tagging are jointly learned Dialogue states and user utterances are encoded to provide hidden representations which jointly 
affect all the other tasks Many previous systems are trained and compared over the ATIS Airline Travel Information Systems dataset BIBREF16 which covers only the flightbooking domain Some of them also use bigger not publicly available datasets which appear to be similar to the NLUBM in terms of number of intents and slots but they cover no more than three or four domains Our work stands out for its more challenging NLU setting since we are dealing with a higher number of domainsscenarios 18 intents 64 and slots 54 in the NLUBM dataset and dialogue acts 11 frames 58 and frame elements 84 in the ROMULUS dataset Moreover we propose a multitask hierarchical architecture where each layer is trained to solve one of the three tasks Each of these is tackled with a seq2seq classification using a CRF output layer as in BIBREF3 The NLU problem has been studied also on the Interactive Robotics front mostly to support basic dialogue systems with few dialogue states and tailored for specific tasks such as semantic mapping BIBREF17 navigation BIBREF18 BIBREF19 or grounded language learning BIBREF20 However the designed approaches either based on formal languages or datadriven have never been shown to scale to real world scenarios The work of BIBREF21 makes a step forward in this direction Their model still deals with the single pick and place domain covering no more than two intents but it is trained on several thousands of examples making it able to manage more unstructured language An attempt to manage a higher number of intents as well as more variable language is represented by the work of BIBREF22 where the sole Frame Semantics is applied to represent user intents with no Dialogue Acts The identification of Dialogue Acts henceforth DAs is required to drive the dialogue manager to the next dialogue state General frame structures FRs provide a reference framework to capture user intents in terms of required or desired actions that a conversational agent has to perform Depending on the level of abstraction required by an application these can be interpreted as more domaindependent paradigms like intent or to shallower representations such as semantic frames as conceived in FrameNet BIBREF23 From this perspective semantic frames represent a versatile abstraction that can be mapped over an agents capabilities allowing also the system to be easily extended with new functionalities without requiring the definition of new adhoc structures Similarly frame arguments ARs act as slots in a traditional intentslots scheme or to frame elements for semantic frames In our work the whole process of extracting a complete semantic interpretation as required by the system is tackled with a multitask learning approach across DAs FRs and ARs Each of these tasks is modelled as a seq2seq problem where a taskspecific label is assigned to each token of the sentence according to the IOB2 notation BIBREF24 with B marking the Beginning of the chunk I the tokens Inside the chunk while O is assigned to any token that does not belong to any chunk Task labels are drawn from the set of classes defined for DAs FRs and ARs Figure TABREF5 shows an example of the tagging layers over the sentence Where can I find Starbucks where Frame Semantics has been selected as underlying reference theory The central motivation behind the proposed architecture is that there is a dependence among the three tasks of identifying DAs FRs and ARs The relationship between tagging frame and arguments appears more evident as also developed in theories like 
Frame Semantics, although it is defined independently by each theory. However, some degree of dependence also holds between DAs and FRs. For example, the FrameNet semantic frame Desiring, which expresses a desire of the user for an event to occur, is more likely to be used in the context of an Inform DA, which indicates the act of notifying the agent with some information, than in an Instruction. This is clearly visible in interactions like "I'd like a cup of hot chocolate" or "I'd like to find a shoe shop", where the user is actually notifying the agent about a desire of hers/his. In order to reflect such inter-task dependence, the classification process is tackled here through a hierarchical multi-task learning approach. We designed a multi-layer neural network, whose architecture is shown in Figure FIGREF7, where each layer is trained to solve one of the three tasks, namely labelling dialogue acts (DA layer), semantic frames (FR layer) and frame elements (AR layer). The layers are arranged in a hierarchical structure that allows the information produced by earlier layers to be fed to downstream tasks. The network is mainly composed of three BiLSTM BIBREF25 encoding layers. A sequence of input words is initially converted into an embedded representation through an ELMo embeddings layer BIBREF26 and is fed to the DA layer. The embedded representation is also passed through shortcut connections BIBREF1 and concatenated with both the outputs of the DA and FR layers. Self-attention layers BIBREF27 are placed after the DA and FR BiLSTM encoders. Where $w_t$ is the input word at time step $t$ of the sentence $\mathbf{w} = (w_1, \dots, w_T)$, the architecture can be formalised as
$\mathbf{e} = \mathrm{ELMo}(\mathbf{w})$, $\quad \mathbf{s}^{DA} = \mathrm{BiLSTM}_{DA}(\mathbf{e})$, $\quad \mathbf{a}^{DA} = \mathrm{SelfAtt}(\mathbf{s}^{DA})$,
$\mathbf{s}^{FR} = \mathrm{BiLSTM}_{FR}(\mathbf{e} \oplus \mathbf{a}^{DA})$, $\quad \mathbf{a}^{FR} = \mathrm{SelfAtt}(\mathbf{s}^{FR})$, $\quad \mathbf{s}^{AR} = \mathrm{BiLSTM}_{AR}(\mathbf{e} \oplus \mathbf{a}^{FR})$,
where $\oplus$ represents the vector concatenation operator, $e_t$ is the embedding of the word at time $t$, and $\mathbf{s}^{L} = (s_1^{L}, \dots, s_T^{L})$ is the embedded sequence output of each layer $L$, with $L \in \lbrace DA, FR, AR \rbrace$. Given an input sentence, the final sequence of labels $\mathbf{y}^{L}$ for each task is computed through a CRF tagging layer, which operates on the output of the DA and FR self-attention and of the AR BiLSTM embedding, so that
$\mathbf{y}^{DA} = \mathrm{CRF}_{DA}(\mathbf{a}^{DA})$, $\quad \mathbf{y}^{FR} = \mathrm{CRF}_{FR}(\mathbf{a}^{FR})$, $\quad \mathbf{y}^{AR} = \mathrm{CRF}_{AR}(\mathbf{s}^{AR})$,
where $\mathbf{a}^{DA}$, $\mathbf{a}^{FR}$ are the attended embedded sequences. Due to the shortcut connections, layers in the upper levels of the architecture can rely both on the direct word embeddings and on the hidden representation $a_t^{L}$ computed by a previous layer. Operationally, the latter carries task-specific information which, combined with the input embeddings, helps in stabilising the classification of each CRF layer, as shown by our experiments. The network is trained by minimising the sum of the individual negative log-likelihoods of the three CRF layers, while at test time the most likely sequence is obtained through Viterbi decoding over the output scores of the CRF layer. In order to assess the effectiveness of the proposed architecture and compare it against existing off-the-shelf tools, we run several empirical evaluations. We tested the system on two datasets that differ in size and in the complexity of the addressed language. The first, publicly available, dataset, the NLU-Benchmark (NLU-BM), contains 25716 utterances annotated with targeted Scenario, Action and involved Entities. For example, "schedule a call with Lisa on Monday morning" is labelled as containing a calendar scenario, where the set_event action is instantiated through the entities event_name ("a call with Lisa") and date ("Monday morning"). The Intent is then obtained by concatenating the scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot
BIBREF7. The second dataset, ROMULUS, is composed of 1431 sentences, for each of which dialogue acts, semantic frames and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns (e.g., chit-chat, command interpretation). The corpus is composed of different subsections addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., "enter the bedroom slowly, turn left and turn the lights off") to complex requests for information (e.g., "good morning, I want to buy a new mobile phone, is there any shop nearby?") or open-domain chit-chat (e.g., "nope thanks, let's talk about cinema"). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in a robotic domain (approx. 70%), though a small portion has been synthetically generated for balancing the frame distribution. Note that, while the NLU-BM is designed to have at most one intent per utterance, sentences are here tagged following the IOB2 sequence labelling scheme (see the example of Figure TABREF5), so that multiple dialogue acts, frames and frame elements can be defined at the same time for the same utterance. For example, three dialogue acts are identified within the sentence "good morning" (Opening) "I want to buy a new mobile phone" (Inform) "is there any shop nearby?" (Req_info). As a result, though smaller, the ROMULUS dataset provides a richer representation of the sentences' semantics, making the tasks more complex and challenging. These observations are highlighted by the statistics in Table TABREF13, which show an average number of dialogue acts, frames and frame elements always greater than 1 (i.e., 1.33, 1.41 and 3.54, respectively). All the models are implemented with Keras BIBREF28 and Tensorflow BIBREF29 as backend and run on a Titan Xp. Experiments are performed in a 10-fold setting, using one fold for tuning and one for testing. However, since HERMIT is designed to operate on dialogue acts, semantic frames and frame elements, the best hyperparameters are obtained over the ROMULUS dataset via a grid search using early stopping, and are applied also to the NLU-BM models. This guarantees fairness towards other systems that do not perform any fine-tuning on the training data. We make use of pretrained 1024-dimensional ELMo embeddings BIBREF26 as word vector representations, without retraining the weights. This section shows the results obtained on the NLU-Benchmark (NLU-BM) dataset provided by BIBREF7, comparing HERMIT to off-the-shelf NLU services, namely Rasa, Dialogflow, LUIS and Watson. In order to apply HERMIT to the NLU-BM, its annotations have been aligned so that Scenarios are treated as DAs, Actions as FRs and Entities as ARs. To make our model comparable against other approaches, we reproduced the same folds as in BIBREF7, where a resized version of the original dataset is used. Table TABREF11 shows some statistics of the NLU-BM and its reduced version. Moreover, micro-averaged Precision, Recall and F1 are computed following the original paper to assure consistency. TP, FP and FN of intent labels are obtained as in any other multi-class task. An entity is instead counted as TP if there is an overlap between the predicted and the gold span and their labels match; a minimal sketch of this matching scheme is given below. Experimental results are reported in Table TABREF21. The statistical significance is evaluated through the Wilcoxon signed-rank test. When looking at the intent F1, HERMIT performs significantly better than Rasa (Z = 2.701, p = .007) and LUIS (Z = 2.807, p = .005).
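To make the entity scoring rule above concrete, the following is a minimal sketch of overlap-based, micro-averaged Precision/Recall/F1. It is illustrative rather than the evaluation code used in the paper: the Span container is hypothetical, and matching each gold span to at most one prediction is an assumption, since the text does not state how multiple overlapping predictions are resolved.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Span:
    label: str   # e.g. "date"
    start: int   # inclusive token offset
    end: int     # exclusive token offset


def overlaps(a: Span, b: Span) -> bool:
    """Two spans match if they share at least one position and carry the same label."""
    return a.label == b.label and a.start < b.end and b.start < a.end


def micro_prf1(gold_spans, pred_spans):
    """Micro-averaged P/R/F1 over a corpus.

    Each element of `gold_spans` / `pred_spans` is the list of entity spans for
    one utterance.  A prediction is a TP if it overlaps an (unmatched) gold span
    with the same label; leftover predictions are FPs, leftover gold spans FNs.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_spans, pred_spans):
        matched = set()
        for p in pred:
            hit = next((i for i, g in enumerate(gold)
                        if i not in matched and overlaps(p, g)), None)
            if hit is None:
                fp += 1
            else:
                tp += 1
                matched.add(hit)
        fn += len(gold) - len(matched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    gold = [[Span("event_name", 3, 8), Span("date", 9, 11)]]
    pred = [[Span("event_name", 4, 8), Span("time", 9, 11)]]
    print(micro_prf1(gold, pred))  # partial overlap -> 1 TP; label mismatch -> 1 FP, 1 FN
```

In the toy example, the partially overlapping event_name prediction counts as a true positive, while the mislabelled second span yields one false positive and one false negative, giving 0.5 for all three scores.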
On the contrary, the improvements w.r.t. Dialogflow (Z = 1.173, p = .241) do not seem to be significant. This is probably due to the high variance obtained by Dialogflow across the 10 folds. Watson is, by a significant margin, the most accurate system in recognising intents (Z = 2.191, p = .028), especially due to its Precision score. The hierarchical multi-task architecture of HERMIT seems to contribute strongly to entity tagging accuracy. In fact, in this task it performs significantly better than Rasa (Z = 2.803, p = .005), Dialogflow (Z = 2.803, p = .005), LUIS (Z = 2.803, p = .005) and Watson (Z = 2.805, p = .005), with improvements from 7.08 to 35.92 of F1. Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa (Z = 2.803, p = .005), Dialogflow (Z = 2.803, p = .005), LUIS (Z = 2.803, p = .005) and Watson (Z = 2.803, p = .005). In order to assess the contributions of HERMIT's components, we performed an ablation study. The results are obtained on the NLU-BM, following the same setup as in Section SECREF16. Results are shown in Table TABREF25. The first row refers to the complete architecture, while -SA shows the results of HERMIT without the self-attention mechanism. Then, from this latter, we further remove the shortcut connections (-SA-CN) and the CRF taggers (-SA-CRF). The last row (-SA-CN-CRF) shows the results of a simple architecture without self-attention, shortcuts and CRF. Though not significant, the contribution of the several architectural components can be observed. The contribution of self-attention is distributed across all the tasks, with a small inclination towards the upstream ones. This means that, while the entity tagging task is mostly lexicon independent, it is easier to identify pivoting keywords for predicting the intent (e.g., the verb "schedule" triggering the calendar_set_event intent). The impact of shortcut connections is more evident on entity tagging. In fact, the effect provided by shortcut connections is that the information flowing throughout the hierarchical architecture allows higher layers to encode richer representations (i.e., original word embeddings plus latent semantics from the previous task). Conversely, the presence of the CRF tagger affects mainly the lower levels of the hierarchical architecture. This is probably not due to their position in the hierarchy, but to the way the tasks have been designed. In fact, while the span of an entity is expected to cover few tokens, in intent recognition (i.e., a combination of Scenario and Action recognition) the span always covers all the tokens of an utterance. CRF therefore preserves the consistency of the IOB2 sequence structure. However, HERMIT seems to be the most stable architecture, both in terms of standard deviation and task performance, with a good balance between intent and entity recognition. In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noticing that the EM Combined score is computed as the conjunction of the three individual predictions, i.e., a match is counted only when all three sequences are correct. Results in terms of EM reflect the complexity of the different tasks, motivating their position
within the hierarchy Specifically dialogue act identification is the easiest task 8931 with respect to frame 8260 and frame element 7973 due to the shallow semantics it aims to catch However when looking at the span F1 its score 8942 is lower than the frame element identification task 9226 What happens is that even though the label set is smaller dialogue act spans are supposed to be longer than frame element ones sometimes covering the whole sentence Frame elements instead are often one or two tokens long that contribute in increasing span based metrics Frame identification is the most complex task for several reasons First lots of frame spans are interlaced or even nested this contributes to increasing the network entropy Second while the dialogue act label is highly related to syntactic structures frame identification is often subject to the inherent ambiguity of language eg get can evoke both Commercebuy and Arriving We also report the metrics in BIBREF7 for consistency For dialogue act and frame tasks scores provide just the extent to which the network is able to detect those labels In fact the metrics do not consider any span information essential to solve and evaluate our tasks However the frame element scores are comparable to the benchmark since the task is very similar Overall getting back to the combined EM accuracy HERMIT seems to be promising with the network being able to reproduce all the three gold sequences for almost 70 of the cases The importance of this result provides an idea of the architecture behaviour over the entire pipeline The experimental evaluation reported in this section provides different insights The proposed architecture addresses the problem of NLU in widecoverage conversational systems modelling semantics through multiple Dialogue Acts and Framelike structures in an endtoend fashion In addition its hierarchical structure which reflects the complexity of the single tasks allows providing rich representations across the whole network In this respect we can affirm that the architecture successfully tackles the multitask problem with results that are promising in terms of usability and applicability of the system in real scenarios However a thorough evaluation in the wild must be carried out to assess to what extent the system is able to handle complex spoken language phenomena such as repetitions disfluencies etc To this end a real scenario evaluation may open new research directions by addressing new tasks to be included in the multitask architecture This is supported by the scalable nature of the proposed approach Moreover following BIBREF3 corpora providing different annotations can be exploited within the same multitask network We also empirically showed how the same architectural design could be applied to a dataset addressing similar problems In fact a comparison with offtheshelf tools shows the benefits provided by the hierarchical structure with better overall performance better than any current solution An ablation study has been performed assessing the contribution provided by the different components of the network The results show how the shortcut connections help in the more finegrained tasks successfully encoding richer representations CRFs help when longer spans are being predicted more present in the upstream tasks Finally the seq2seq design allowed obtaining a multilabel approach enabling the identification of multiple spans in the same utterance that might evoke different dialogue actsframes This represents a novelty for NLU in 
conversational systems, as such a problem has always been tackled as single-intent detection. However, the seq2seq approach also carries some limitations, especially on the Frame Semantics side. In fact, label sequences are linear structures, not suitable for representing nested predicates, a tough and common problem in Natural Language. For example, in the sentence "I want to buy a new mobile phone", the span "to buy a new mobile phone" represents both the Desired_event frame element of the Desiring frame and a Commerce_buy frame at the same time. At the moment of writing, we are working on modelling nested predicates through the application of bilinear models. We have started integrating a corpus of 5M sentences of real users chit-chatting with our conversational agent, though at the time of writing they represent only 16% of the current dataset. As already pointed out in Section SECREF28, there are some limitations in the current approach that need to be addressed. First, we have to assess the network's capability in handling typical phenomena of spontaneous spoken language input, such as repetitions and disfluencies BIBREF30. This may open new research directions, by including new tasks to identify/remove any kind of noise from the spoken input. Second, the seq2seq scheme does not deal with nested predicates, a common aspect of Natural Language. To the best of our knowledge, there is no architecture that implements an end-to-end network for FrameNet-based semantic parsing. Following previous work BIBREF2, one of our future goals is to tackle such problems through hierarchical multi-task architectures that rely on bilinear models. In this paper we presented HERMIT NLU, a hierarchical multi-task architecture for the semantic parsing of sentences in cross-domain spoken dialogue systems. The problem is addressed using a seq2seq model employing BiLSTM encoders and self-attention mechanisms, followed by CRF tagging layers. We evaluated HERMIT on the 25K-sentence NLU-Benchmark, where it outperforms state-of-the-art NLU tools such as Rasa, Dialogflow, LUIS and Watson, even without specific fine-tuning of the model. This research was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 688147 (MuMMER project).
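To make the hierarchical design summarised above concrete, the sketch below shows a three-level multi-task tagger written with the Keras functional API, which the paper reports using, although this is not the authors' code. The label-set sizes and hidden size are illustrative, an ordinary Embedding layer stands in for the frozen ELMo embedder, and per-token softmax heads stand in for the CRF taggers whose negative log-likelihoods are summed during training.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model


def build_hierarchical_tagger(vocab_size, emb_dim=1024, hidden=200,
                              n_da_tags=23, n_fr_tags=117, n_ar_tags=169):
    """Sketch of a three-level hierarchical multi-task sequence tagger.

    Tag-set sizes are placeholders; an Embedding layer replaces the frozen
    ELMo embedder and Dense softmax heads replace the CRF output layers.
    """
    tokens = layers.Input(shape=(None,), dtype="int32")
    emb = layers.Embedding(vocab_size, emb_dim)(tokens)

    # Dialogue-act (DA) level: BiLSTM encoder followed by self-attention.
    s_da = layers.Bidirectional(layers.LSTM(hidden, return_sequences=True))(emb)
    a_da = layers.MultiHeadAttention(num_heads=1, key_dim=hidden)(s_da, s_da)
    y_da = layers.Dense(n_da_tags, activation="softmax", name="da")(a_da)

    # Frame (FR) level: shortcut connection concatenates the word embeddings
    # with the attended DA representation.
    fr_in = layers.Concatenate()([emb, a_da])
    s_fr = layers.Bidirectional(layers.LSTM(hidden, return_sequences=True))(fr_in)
    a_fr = layers.MultiHeadAttention(num_heads=1, key_dim=hidden)(s_fr, s_fr)
    y_fr = layers.Dense(n_fr_tags, activation="softmax", name="fr")(a_fr)

    # Argument (AR) level: shortcut again feeds the raw embeddings forward.
    ar_in = layers.Concatenate()([emb, a_fr])
    s_ar = layers.Bidirectional(layers.LSTM(hidden, return_sequences=True))(ar_in)
    y_ar = layers.Dense(n_ar_tags, activation="softmax", name="ar")(s_ar)

    model = Model(tokens, [y_da, y_fr, y_ar])
    # The paper minimises the sum of three CRF negative log-likelihoods;
    # a summed per-token cross-entropy stands in for it here.
    model.compile(optimizer="adam",
                  loss={"da": "sparse_categorical_crossentropy",
                        "fr": "sparse_categorical_crossentropy",
                        "ar": "sparse_categorical_crossentropy"})
    return model
```

Replacing the softmax heads with proper CRF layers (e.g., from an add-on package) and feeding pre-computed 1024-dimensional ELMo vectors instead of the trainable Embedding layer would bring this sketch closer to the system described above.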
<s>[INST] Interactive Machine Comprehension with Information Seeking Agents [/INST] Many machine reading comprehension MRC datasets have been released in recent years BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 to benchmark a systems ability to understand and reason over natural language Typically these datasets require an MRC model to read through a document to answer a question about information contained therein The supporting document is more often than not static and fully observable This raises concerns since models may find answers simply through shallow pattern matching eg syntactic similarity between the words in questions and documents As pointed out by BIBREF5 for questions starting with when models tend to predict the only datetime answer in the supporting document Such behavior limits the generality and usefulness of MRC models and suggests that they do not learn a proper understanding of the intended task In this paper to address this problem we shift the focus of MRC data away from spoonfeeding models with sufficient information in fully observable static documents Instead we propose interactive versions of existing MRC tasks whereby the information needed to answer a question must be gathered sequentially The key idea behind our proposed interactive MRC iMRC is to restrict the document context that a model observes at one time Concretely we split a supporting document into its component sentences and withhold these sentences from the model Given a question the model must issue commands to observe sentences in the withheld set we equip models with actions such as CtrlF search for token and stop for searching through partially observed documents A model searches iteratively conditioning each command on the input question and the sentences it has observed previously Thus our task requires models to feed themselves rather than spoonfeeding them with information This casts MRC as a sequential decisionmaking problem amenable to reinforcement learning RL As an initial case study we repurpose two well known related corpora with different difficulty levels for our interactive MRC task SQuAD and NewsQA Table TABREF2 shows some examples of a model performing interactive MRC on these datasets Naturally our reframing makes the MRC problem harder however we believe the added demands of iMRC more closely match weblevel QA and may lead to deeper comprehension of documents content The main contributions of this work are as follows We describe a method to make MRC datasets interactive and formulate the new task as an RL problem We develop a baseline agent that combines a top performing MRC model and a stateoftheart RL optimization algorithm and test it on our iMRC tasks We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting Skipreading BIBREF6 BIBREF7 BIBREF8 is an existing setting in which MRC models read partial documents Concretely these methods assume that not all tokens in the input sequence are useful and therefore learn to skip irrelevant tokens based on the current input and their internal memory Since skipping decisions are discrete the models are often optimized by the REINFORCE algorithm BIBREF9 For example the structuraljumpLSTM proposed in BIBREF10 learns to skip and jump over chunks of text In a similar vein BIBREF11 designed a QA task where the model reads streaming data unidirectionally without knowing when the question will be provided Skipreading approaches are limited in that they only consider jumping over a few 
consecutive tokens and the skipping operations are usually unidirectional Based on the assumption that a single pass of reading may not provide sufficient information multipass reading methods have also been studied BIBREF12 BIBREF13 Compared to skipreading and multiturn reading our work enables an agent to jump through a document in a more dynamic manner in some sense combining aspects of skipreading and rereading For example it can jump forward backward or to an arbitrary position depending on the query This also distinguishes the model we develop in this work from ReasoNet BIBREF13 where an agent decides when to stop unidirectional reading Recently BIBREF14 propose DocQN which is a DQNbased agent that leverages the tree structure of documents and navigates across sentences and paragraphs The proposed method has been shown to outperform vanilla DQN and IR baselines on TriviaQA dataset The main differences between our work and DocQA include iMRC does not depend on extra meta information of documents eg title paragraph title for building document trees as in DocQN our proposed environment is partiallyobservable and thus an agent is required to explore and memorize the environment via interaction the action space in our setting especially for the CtrlF command as defined in later section is arguably larger than the tree sampling action space in DocQN Closely related to iMRC is work by BIBREF15 in which the authors introduce a collection of synthetic tasks to train and test informationseeking capabilities in neural models We extend that work by developing a realistic and challenging textbased task Broadly speaking our approach is also linked to the optimal stopping problem in the literature Markov decision processes MDP BIBREF16 where at each timestep the agent either continues or stops and accumulates reward Here we reformulate conventional QA tasks through the lens of optimal stopping in hopes of improving over the shallow matching behaviors exhibited by many MRC systems We build the iSQuAD and iNewsQA datasets based on SQuAD v11 BIBREF0 and NewsQA BIBREF1 Both original datasets share similar properties Specifically every datapoint consists of a tuple lbrace p q arbrace where p represents a paragraph q a question and a is the answer The answer is a word span defined by head and tail positions in p NewsQA is more difficult than SQuAD because it has a larger vocabulary more difficult questions and longer source documents We first split every paragraph p into a list of sentences mathcal S lbrace s1 s2 snrbrace where n stands for number of sentences in p Given a question q rather than showing the entire paragraph p we only show an agent the first sentence s1 and withhold the rest The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question q An agent decides when to stop interacting and output an answer but the number of interaction steps is limited Once an agent has exhausted its step budget it is forced to answer the question As described in the previous section we convert MRC tasks into sequential decisionmaking problems which we will refer to as games These can be described naturally within the reinforcement learning RL framework Formally tasks in iMRC are partially observable Markov decision processes POMDP BIBREF17 An iMRC datapoint is a discretetime POMDP defined by S T A Omega O R gamma where gamma in 0 1 is the discount factor and the other elements are described in detail below Environment States S The 
environment state at turn t in the game is st in S It contains the complete internal information of the game much of which is hidden from the agent When an agent issues an action at the environment transitions to state st1 with probability Tst1 st at In this work transition probabilities are either 0 or 1 ie deterministic environment Actions A At each game turn t the agent issues an action at in A We will elaborate on the action space of iMRC in the action space section Observations Omega The text information perceived by the agent at a given game turn t is the agents observation ot in Omega which depends on the environment state and the previous action with probability Ootst In this work observation probabilities are either 0 or 1 ie noiseless observation Reward Function R Based on its actions the agent receives rewards rt Rst at Its objective is to maximize the expected discounted sum of rewards E leftsum t gamma t rt right To better describe the action space of iMRC we split an agents actions into two phases information gathering and question answering During the information gathering phase the agent interacts with the environment to collect knowledge It answers questions with its accumulated knowledge in the question answering phase Information Gathering At step t of the information gathering phase the agent can issue one of the following four actions to interact with the paragraph p where p consists of n sentences and where the current observation corresponds to sentence sk1 le k le n previous jump to small leftlbrace beginarrayll sn textif k 1 sk1 textotherwise endarrayright next jump to small leftlbrace beginarrayll s1 textif k n sk1 textotherwise endarrayright CtrlF query jump to the sentence that contains the next occurrence of query stop terminate information gathering phase Question Answering We follow the output format of both SQuAD and NewsQA where an agent is required to point to the head and tail positions of an answer span within p Assume that at step t the agent stops interacting and the observation ot is sk The agent points to a headtail position pair in sk Given the question When is the deadline of AAAI as a human one might try searching AAAI on a search engine follow the link to the official AAAI website then search for keywords deadline or due date on the website to jump to a specific paragraph Humans have a deep understanding of questions because of their significant background knowledge As a result the keywords they use to search are not limited to what appears in the question Inspired by this observation we study 3 query types for the CtrlF query command One token from the question the setting with smallest action space Because iMRC deals with CtrlF commands by exact string matching there is no guarantee that all sentences are accessible from question tokens only One token from the union of the question and the current observation an intermediate level where the action space is larger One token from the dataset vocabulary the action space is huge see Table TABREF16 for statistics of SQuAD and NewsQA It is guaranteed that all sentences in all documents are accessible through these tokens Since iMRC involves both MRC and RL we adopt evaluation metrics from both settings First as a question answering task we use textF1 score to compare predicted answers against groundtruth as in previous works When there exist multiple groundtruth answers we report the max textF1 score Second mastering multiple games remains quite challenging for RL agents Therefore we evaluate an 
agents performance during both its training and testing phases During training we report training curves averaged over 3 random seeds During test we follow common practice in supervised learning tasks where we report the agents test performance corresponding to its best validation performance As a baseline we propose QADQN an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTMDQN BIBREF19 As illustrated in Figure FIGREF6 the agent consists of three components an encoder an action generator and a question answerer More precisely at a game step t the encoder reads observation string ot and question string q to generate attention aggregated hidden representations Mt Using Mt the action generator outputs commands defined in previous sections to interact with iMRC If the generated command is stop or the agent is forced to stop the question answerer takes the current information at game step t to generate head and tail pointers for answering the question otherwise the information gathering procedure continues In this section we describe the highlevel model structure and training strategies of QADQN We refer readers to BIBREF18 for detailed information We will release datasets and code in the near future In this section we use game step t to denote one round of interaction between an agent with the iMRC environment We use ot to denote text observation at game step t and q to denote question text We use L to refer to a linear transformation cdot cdot denotes vector concatenation The encoder consists of an embedding layer two stacks of transformer blocks denoted as encoder transformer blocks and aggregation transformer blocks and an attention layer In the embedding layer we aggregate both word and characterlevel embeddings Word embeddings are initialized by the 300dimension fastText BIBREF20 vectors trained on Common Crawl 600B tokens and are fixed during training Character embeddings are initialized by 200dimension random vectors A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters We use a max pooling layer on the character dimension then a multilayer perceptron MLP of size 96 is used to aggregate the concatenation of word and characterlevel representations A highway network BIBREF21 is used on top of this MLP The resulting vectors are used as input to the encoding transformer blocks Each encoding transformer block consists of four convolutional layers with shared weights a selfattention layer and an MLP Each convolutional layer has 96 filters each kernels size is 7 In the selfattention layer we use a block hidden size of 96 and a single head attention mechanism Layer normalization and dropout are applied after each component inside the block We add positional encoding into each blocks input We use one layer of such an encoding block At a game step t the encoder processes text observation ot and question q to generate contextaware encodings hot in mathbb RLot times H1 and hq in mathbb RLq times H1 where Lot and Lq denote length of ot and q respectively H1 is 96 Following BIBREF18 we use a contextquery attention layer to aggregate the two representations hot and hq Specifically the attention layer first uses two MLPs to map hot and hq into the same space with the resulting representations denoted as hotprime in mathbb RLot times H2 and hqprime in mathbb RLq times H2 in which H2 is 96 Then a trilinear similarity function is used to compute the similarities between each pair of hotprime and hqprime 
items where odot indicates elementwise multiplication and w is trainable parameter vector of size 96 We apply softmax to the resulting similarity matrix S along both dimensions producing SA and SB Information in the two representations are then aggregated as where hoq is aggregated observation representation On top of the attention layer a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations The configuration parameters are the same as the encoder transformer blocks except there are two convolution layers with shared weights and the number of blocks is 7 Let Mt in mathbb RLot times H3 denote the output of the stack of aggregation transformer blocks in which H3 is 96 The action generator takes Mt as input and estimates Qvalues for all possible actions As described in previous section when an action is a CtrlF command it is composed of two tokens the token CtrlF and the query token Therefore the action generator consists of three MLPs Here the size of Lshared in mathbb R95 times 150 Laction has an output size of 4 or 2 depending on the number of actions available the size of Lctrlf is the same as the size of a datasets vocabulary size depending on different query type settings we mask out words in the vocabulary that are not query candidates The overall Qvalue is simply the sum of the two components Following BIBREF18 we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions Here Mhead and Mtail are outputs of the two extra transformer stacks L0 L1 L2 and L3 are trainable parameters with output size 150 150 1 and 1 respectively In iMRC some questions may not be easily answerable based only on observation of a single sentence To overcome this limitation we provide an explicit memory mechanism to QADQN Specifically we use a queue to store strings that have been observed recently The queue has a limited size of slots we use queues of size 1 3 5 in this work This prevents the agent from issuing next commands until the environment has been observed fully in which case our task would degenerate to the standard MRC setting The memory slots are reset episodically Because the question answerer in QADQN is a pointing model its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer We design a heuristic reward to encourage and guide this behavior In particular we assign a reward if the agent halts at game step k and the answer is a substring of ok if larger memory slots are used we assign this reward if the answer is a substring of the memory at game step k We denote this reward as the sufficient information reward since if an agent sees the answer it should have a good chance of having gathered sufficient information for the question although this is not guaranteed Note this sufficient information reward is part of the design of QADQN whereas the question answering score is the only metric used to evaluate an agents performance on the iMRC task As mentioned above an agent might bypass CtrlF actions and explore an iMRC game only via next commands We study this possibility in an ablation study where we limit the agent to the CtrlF and stop commands In this setting an agent is forced to explore by means of search a queries In this section we describe our training strategy We split the training pipeline into two parts for easy comprehension We use Adam BIBREF22 as the step rule for optimization in both 
parts with the learning rate set to 000025 iMRC games are interactive environments We use an RL training algorithm to train the interactive informationgathering behavior of QADQN We adopt the Rainbow algorithm proposed by BIBREF23 which integrates several extensions to the original Deep QLearning algorithm BIBREF24 Rainbox exhibits stateoftheart performance on several RL benchmark tasks eg Atari games During game playing we use a minibatch of size 10 and push all transitions observation string question string generated command reward into a replay buffer of size 500000 We do not compute losses directly using these transitions After every 5 game steps we randomly sample a minibatch of 64 transitions from the replay buffer compute loss and update the network Detailed hyperparameter settings for action generation are shown in Table TABREF38 Similarly we use another replay buffer to store question answering transitions observation string when interaction stops question string groundtruth answer Because both iSQuAD and iNewsQA are converted from datasets that provide groundtruth answer positions we can leverage this information and train the question answerer with supervised learning Specifically we only push question answering transitions when the groundtruth answer is in the observation string For each transition we convert the groundtruth answer head and tailpositions from the SQuAD and NewsQA datasets to positions in the current observation string After every 5 game steps we randomly sample a minibatch of 64 transitions from the replay buffer and train the question answerer using the Negative LogLikelihood NLL loss We use a dropout rate of 01 In this study we focus on three factors and their effects on iMRC and the performance of the QADQN agent different CtrlF strategies as described in the action space section enabled vs disabled next and previous actions different memory slot sizes Below we report the baseline agents training performance followed by its generalization performance on test data It remains difficult for RL agents to master multiple games at the same time In our case each documentquestion pair can be considered a unique game and there are hundred of thousands of them Therefore as is common practice in the RL literature we study an agents training curves Due to the space limitations we select several representative settings to discuss in this section and provide QADQNs training and evaluation curves for all experimental settings in the Appendix We provide the agents sufficient information rewards ie if the agent stopped at a state where the observation contains the answer during training in Appendix as well Figure FIGREF36 shows QADQNs training performance textF1 score when next and previous actions are available Figure FIGREF40 shows QADQNs training performance textF1 score when next and previous actions are disabled Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three From Figure FIGREF36 we can see that the three CtrlF strategies show similar difficulty levels when next and previous are available although QADQN works slightly better when selecting a word from the question as query especially on iNewsQA However from Figure FIGREF40 we observe that when next and previous are disabled QADQN shows significant advantage when selecting a word from the question as query This may due to the fact that when an agent must use CtrlF to navigate within documents the 
set of question words is a much smaller action space in contrast to the other two settings In the 4action setting an agent can rely on issuing next and previous actions to reach any sentence in a document The effect of action space size on model performance is particularly clear when using a datasets entire vocabulary as query candidates in the 2action setting From Figure FIGREF40 and figures with sufficient information rewards in the Appendix we see QADQN has a hard time learning in this setting As shown in Table TABREF16 both datasets have a vocabulary size of more than 100k This is much larger than in the other two settings where on average the length of questions is around 10 This suggests that the methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces Experiments also show that a larger memory slot size always helps Intuitively with a memory mechanism either implicit or explicit an agent could make the environment closer to fully observed by exploring and memorizing observations Presumably a larger memory may further improve QADQNs performance but considering the average number of sentences in each iSQuAD game is 5 a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments Not surprisingly QADQN performs worse in general on iNewsQA in all experiments As shown in Table TABREF16 the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD This is analogous to games with larger maps in the RL literature where the environment is partially observable A better exploration in our case jumping strategy may help QADQN to master such harder games To study QADQNs ability to generalize we select the best performing agent in each experimental setting on the validation set and report their performance on the test set The agents test performance is reported in Table TABREF41 In addition to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information we also report the textF1 score of an agent when it has reached the piece of text that contains the answer which we denote as textF1textinfo From Table TABREF41 and validation curves provided in appendix we can observe that QADQNs performance during evaluation matches its training performance in most settings textF1textinfo scores are consistently higher than the overall textF1 scores and they have much less variance across different settings This supports our hypothesis that information seeking play an important role in solving iMRC tasks whereas question answering given necessary information is relatively straightforward This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks In this work we propose and explore the direction of converting MRC datasets into interactive environments We believe interactive informationseeking behavior is desirable for neural MRC systems when knowledge sources are partially observable andor too large to encode in their entirety for instance when searching for information on the internet where knowledge is by design easily accessible to humans through interaction Despite being restricted our proposed task presents major challenges to existing techniques iMRC lies at the intersection of NLP and RL which is arguably less studied in existing literature We hope to encourage researchers from both NLP and RL 
communities to work toward solving this task. For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods, e.g., to utilize a large-scale pretrained language model. Our proposed setup and baseline agent presently use only a single word with the Ctrl+F query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open-domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR. As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search.
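As a complement to the description of the information-gathering phase, the sketch below shows one possible reading of the navigation commands over a sentence-split paragraph. Class and method names are illustrative, and sentence splitting, the substring-based Ctrl+F matching, the step budget and the reward shaping are simplified placeholders rather than the authors' implementation.

```python
class IMRCEnv:
    """Minimal sketch of an interactive MRC environment.

    A paragraph is split into sentences; the agent observes one sentence at a
    time and navigates with previous / next / Ctrl+F <query> / stop.
    """

    def __init__(self, sentences, question, max_steps=20):
        self.sentences = sentences      # list of sentences s_1 ... s_n
        self.question = question
        self.max_steps = max_steps
        self.k = 0                      # index of the currently visible sentence
        self.steps = 0

    def observation(self):
        return self.sentences[self.k]

    def step(self, command, query=None):
        """Apply one information-gathering command; return (observation, done)."""
        self.steps += 1
        n = len(self.sentences)
        if command == "previous":       # wrap to the last sentence from the first
            self.k = n - 1 if self.k == 0 else self.k - 1
        elif command == "next":         # wrap to the first sentence from the last
            self.k = 0 if self.k == n - 1 else self.k + 1
        elif command == "ctrl_f":       # jump to the next sentence containing the query
            for offset in range(1, n + 1):
                j = (self.k + offset) % n
                if query in self.sentences[j]:
                    self.k = j
                    break
        done = command == "stop" or self.steps >= self.max_steps
        return self.observation(), done


if __name__ == "__main__":
    env = IMRCEnv(["The AAAI deadline is in September .",
                   "Reviews are released in November ."],
                  question="When is the AAAI deadline ?")
    print(env.step("ctrl_f", query="deadline"))
```

An agent interacting with such an environment would condition each command on the question and on the sentences observed so far, and would hand the final observation (or its memory of recent observations) to a span-pointing question answerer once stop is issued or the step budget runs out.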
<s>[INST] Exploring Hate Speech Detection in Multimodal Publications [/INST] Social Media platforms such as Facebook Twitter or Reddit have empowered individuals voices and facilitated freedom of expression However they have also been a breeding ground for hate speech and other types of online harassment Hate speech is defined in legal literature as speech or any form of expression that expresses or seeks to promote or has the capacity to increase hatred against a person or a group of people because of a characteristic they share or a group to which they belong BIBREF0 Twitter develops this definition in its hateful conduct policy as violence against or directly attack or threaten other people on the basis of race ethnicity national origin sexual orientation gender gender identity religious affiliation age disability or serious disease In this work we focus on hate speech detection Due to the inherent complexity of this task it is important to distinguish hate speech from other types of online harassment In particular although it might be offensive to many people the sole presence of insulting terms does not itself signify or convey hate speech And the other way around hate speech may denigrate or threaten an individual or a group of people without the use of any profanities People from the africanamerican community for example often use the term nigga online in everyday language without malicious intentions to refer to folks within their community and the word cunt is often used in non hate speech publications and without any sexist purpose The goal of this work is not to discuss if racial slur such as nigga should be pursued The goal is to distinguish between publications using offensive terms and publications attacking communities which we call hate speech Modern social media content usually include images and text Some of these multimodal publications are only hate speech because of the combination of the text with a certain image That is because as we have stated the presence of offensive terms does not itself signify hate speech and the presence of hate speech is often determined by the context of a publication Moreover users authoring hate speech tend to intentionally construct publications where the text is not enough to determine they are hate speech This happens especially in Twitter where multimodal tweets are formed by an image and a short text which in many cases is not enough to judge them In those cases the image might give extra context to make a proper judgement Fig FIGREF5 shows some of such examples in MMHS150K The contributions of this work are as follows noitemsepleftmargin We propose the novel task of hate speech detection in multimodal publications collect annotate and publish a large scale dataset We evaluate state of the art multimodal models on this specific task and compare their performance with unimodal detection Even though images are proved to be useful for hate speech detection the proposed multimodal models do not outperform unimodal textual models We study the challenges of the proposed task and open the field for future research The literature on detecting hate speech on online textual publications is extensive Schmidt and Wiegand BIBREF1 recently provided a good survey of it where they review the terminology used over time the features used the existing datasets and the different approaches However the field lacks a consistent dataset and evaluation protocol to compare proposed methods Saleem et al BIBREF2 compare different classification methods 
detecting hate speech in Reddit and other forums Wassem and Hovy BIBREF3 worked on hate speech detection on twitter published a manually annotated dataset and studied its hate distribution Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations concluding that amateur annotators are more likely than expert annotators to label items as hate speech Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings showing also classification results on racism and sexism hate subclasses Davidson et al BIBREF7 also worked on hate speech detection on twitter publishing another manually annotated dataset They test different classifiers such as SVMs and decision trees and provide a performance comparison Malmasi and Zampieri BIBREF8 worked on Davidsons dataset improving his results using more elaborated features ElSherief et al BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase a hate expression repository They propose a big hate dataset but it lacks manual annotations and all the tweets containing certain hate expressions are considered hate speech Zhang et al BIBREF10 recently proposed a more sophisticated approach for hate speech detection using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings They show experiments in different datasets outperforming previous methods Next we summarize existing hate speech datasets noitemsepleftmargin RM BIBREF10 Formed by 2435 tweets discussing Refugees and Muslims annotated as hate or nonhate DT BIBREF7 Formed by 24783 tweets annotated as hate offensive language or neither In our work offensive language tweets are considered as nonhate WZLS BIBREF5 A combination of Wassem datasets BIBREF4 BIBREF3 labeled as racism sexism neither or both that make a total of 18624 tweets SemiSupervised BIBREF9 Contains 27330 general hate speech Twitter tweets crawled in a semisupervised manner Although often modern social media publications include images not too many contributions exist that exploit visual information Zhong et al BIBREF12 worked on classifying Instagram images as potential cyberbullying targets exploiting both the image content the image caption and the comments However their visual information processing is limited to the use of features extracted by a pretrained CNN the use of which does not achieve any improvement Hosseinmardi et al BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content But again their visual information processing is limited to use the features of a pretrained CNN and the improvement when using visual features on cyberbullying classification is only of 001 A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces To do that usually a CNN and a RNN are trained jointly to learn a joint embedding space from aligned multimodal data This approach is applied in tasks such as image captioning BIBREF14 BIBREF15 and multimodal image retrieval BIBREF16 BIBREF17 On the other hand instead of explicitly learning an alignment between two spaces the goal of Visual Question Answering VQA is to merge both data modalities in order to decide which answer is correct This problem requires modeling very precise correlations between the image and the question representations The VQA task requirements are similar to our hate speech detection problem in 
multimodal publications where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision We thus take inspiration from the VQA literature for the tested models Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation Later methods such as Multimodal Compact Bilinear pooling BIBREF19 utilize bilinear pooling to learn multimodal features An important limitation of these methods is that the multimodal features are fused in the latter model stage so the textual and visual relationships are modeled only in the last layers Another limitation is that the visual features are obtained by representing the output of the CNN as a one dimensional vector which losses the spatial information of the input images In a recent work Gao et al BIBREF20 propose a feature fusion scheme to overcome these limitations They learn convolution kernels from the textual information which they call questionguided kernels and convolve them with the visual information in an earlier stage to get the multimodal features MargffoyTuay et al BIBREF21 use a similar approach to combine visual and textual information but they address a different task instance segmentation guided by natural language queries We inspire in these latest feature fusion works to build the models for hate speech detection Existing hate speech datasets contain only textual data Moreover a reference benchmark does not exists Most of the published datasets are crawled from Twitter and distributed as tweet IDs but since Twitter removes reported user accounts an important amount of their hate tweets is no longer accessible We create a new manually annotated multimodal hate speech dataset formed by 150000 tweets each one of them containing text and an image We call the dataset MMHS150K and made it available online In this section we explain the dataset creation steps We used the Twitter API to gather realtime tweets from September 2018 until February 2019 selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets as studied in BIBREF9 We filtered out retweets tweets containing less than three words and tweets containing porn related terms From that selection we kept the ones that included images and downloaded them Twitter applies hate speech filters and other kinds of content control based on its policy although the supervision is based on users reports Therefore as we are gathering tweets from realtime posting the content we get has not yet passed any filter We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not But a considerable amount of the images of the selected tweets contain only textual information such as screenshots of other tweets To ensure that all the dataset instances contain both visual and textual information we remove those tweets To do that we use TextFCN BIBREF22 BIBREF23 a Fully Convolutional Network that produces a pixel wise text probability map of an image We set empirical thresholds to discard images that have a substantial total text probability filtering out 23 of the collected tweets We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk There we give the workers the definition of hate speech and show some examples to make the task clearer We then show the tweet text and image and we ask them to classify it in one of 
6 categories No attacks to any community racist sexist homophobic religion based attacks or attacks to other communities Each one of the 150000 tweets is labeled by 3 different workers to palliate discrepancies among workers We received a lot of valuable feedback from the annotators Most of them had understood the task correctly but they were worried because of its subjectivity This is indeed a subjective task highly dependent on the annotator convictions and sensitivity However we expect to get cleaner annotations the more strong the attack is which are the publications we are more interested on detecting We also detected that several users annotate tweets for hate speech just by spotting slur As already said previously just the use of particular words can be offensive to many people but this is not the task we aim to solve We have not included in our experiments those hits that were made in less than 3 seconds understanding that it takes more time to grasp the multimodal context and make a decision We do a majority voting between the three annotations to get the tweets category At the end we obtain 112845 not hate tweets and 36978 hate tweets The latest are divided in 11925 racist 3495 sexist 3870 homophobic 163 religionbased hate and 5811 other hate tweets Fig FIGREF17 In this work we do not use hate subcategories and stick to the hate not hate split We separate balanced validation 5000 and test 10000 sets The remaining tweets are used for training We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels The results did not present significant differences to those shown in the experimental part of this work but the raw annotations will be published nonetheless for further research As far as we know this dataset is the biggest hate speech dataset to date and the first multimodal hate speech dataset One of its challenges is to distinguish between tweets using the same key offensive words that constitute or not an attack to a community hate speech Fig FIGREF18 shows the percentage of hate and not hate tweets of the top keywords All images are resized such that their shortest size has 500 pixels During training online data augmentation is applied as random cropping of 299times 299 patches and mirroring We use a CNN as the image features extractor which is an Imagenet BIBREF24 pretrained Google Inception v3 architecture BIBREF25 The finetuning process of the Inception v3 layers aims to modify its weights to extract the features that combined with the textual information are optimal for hate speech detection We train a single layer LSTM with a 150dimensional hidden state for hate not hate classification The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations Since our dataset is not big enough to train a GloVe word embedding model we used a pretrained model that has been trained in two billion tweets This ensures that the model will be able to produce word embeddings for slang and other words typically used in Twitter To process the tweets text before generating the word embeddings we use the same pipeline as the model authors which includes generating symbols to encode Twitter special interactions such as user mentions user or hashtags hashtag To encode the tweet text and input it later to multimodal models we use the LSTM hidden state after processing the last tweet word Since the LSTM has been trained for hate speech classification it extracts the most useful 
information for this task from the text which is encoded in the hidden state after inputting the last tweet word The text in the image can also contain important information to decide if a publication is hate speech or not so we extract it and also input it to our model To do so we use Google Vision API Text Detection module BIBREF27 We input the tweet text and the text from the image separately to the multimodal models so it might learn different relations between them and between them and the image For instance the model could learn to relate the image text with the area in the image where the text appears so it could learn to interpret the text in a different way depending on the location where it is written in the image The image text is also encoded by the LSTM as the hidden state after processing its last word The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models a Feature Concatenation Model FCM a Spatial Concatenation Model SCM and a Textual Kernels Model TKM All of them are CNNRNN models with three inputs the tweet image the tweet text and the text appearing in the image if any The image is fed to the Inception v3 architecture and the 2048 dimensional feature vector after the last average pooling layer is used as the visual representation This vector is then concatenated with the 150 dimension vectors of the LSTM last word hidden states of the image text and the tweet text resulting in a 2348 feature vector This vector is then processed by three fully connected layers of decreasing dimensionality 2348 1024 512 with following corresponding batch normalization and ReLu layers until the dimensions are reduced to two the number of classes in the last classification layer The FCM architecture is illustrated in Fig FIGREF26 Instead of using the latest feature vector before classification of the Inception v3 as the visual representation in the SCM we use the 8times 8times 2048 feature map after the last Inception module Then we concatenate the 150 dimension vectors encoding the tweet text and the tweet image text at each spatial location of that feature map The resulting multimodal feature map is processed by two InceptionE blocks BIBREF28 After that dropout and average pooling are applied and as in the FCM model three fully connected layers are used to reduce the dimensionality until the classification layer The TKM design inspired by BIBREF20 and BIBREF21 aims to capture interactions between the two modalities more expressively than concatenation models As in SCM we use the 8times 8times 2048 feature map after the last Inception module as the visual representation From the 150 dimension vector encoding the tweet text we learn Kt text dependent kernels using independent fully connected layers that are trained together with the rest of the model The resulting Kt text dependent kernels will have dimensionality of 1times 1times 2048 We do the same with the feature vector encoding the image text learning Kit kernels The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location resulting in a 8times 8times KiKit multimodal feature map and batch normalization is applied Then as in the SCM the 150 dimension vectors encoding the tweet text and the tweet image text are concatenated at 
each spatial dimension The rest of the architecture is the same as in SCM two InceptionE blocks dropout average pooling and three fully connected layers until the classification layer The number of tweet textual kernels Kt and tweet image textual kernels Kit is set to Kt 10 and Kit 5 The TKM architecture is illustrated in Fig FIGREF29 We train the multimodal models with a CrossEntropy loss with Softmax activations and an ADAM optimizer with an initial learning rate of 1e4 Our dataset suffers from a high class imbalance so we weight the contribution to the loss of the samples to totally compensate for it One of the goals of this work is to explore how every one of the inputs contributes to the classification and to prove that the proposed model can learn concurrences between visual and textual data useful to improve the hate speech classification results on multimodal data To do that we train different models where all or only some inputs are available When an input is not available we set it to zeros and we do the same when an image has no text Table TABREF31 shows the Fscore the Area Under the ROC Curve AUC and the mean accuracy ACC of the proposed models when different inputs are available TT refers to the tweet text IT to the image text and I to the image It also shows results for the LSTM for the Davison method proposed in BIBREF7 trained with MMHS150K and for random scores Fig FIGREF32 shows the Precision vs Recall plot and the ROC curve which plots the True Positive Rate vs the False Positive Rate of the different models First notice that given the subjectivity of the task and the discrepancies between annotators getting optimal scores in the evaluation metrics is virtually impossible However a system with relatively low metric scores can still be very useful for hate speech detection in a real application it will fire on publications for which most annotators agree they are hate which are often the stronger attacks The proposed LSTM to detect hate speech when only text is available gets similar results as the method presented in BIBREF7 which we trained with MMHS150K and the same splits However more than substantially advancing the state of the art on hate speech detection in textual publications our key purpose in this work is to introduce and work on its detection on multimodal publications We use LSTM because it provides a strong representation of the tweet texts The FCM trained only with images gets decent results considering that in many publications the images might not give any useful information for the task Fig FIGREF33 shows some representative examples of the top hate and not hate scored images of this model Many hate tweets are accompanied by demeaning nudity images being sexist or homophobic Other racist tweets are accompanied by images caricaturing black people Finally MEMES are also typically used in hate speech publications The top scored images for not hate are portraits of people belonging to minorities This is due to the use of slur inside these communities without an offensive intention such as the word nigga inside the afroamerican community or the word dyke inside the lesbian community These results show that images can be effectively used to discriminate between offensive and nonoffensive uses of those words Despite the model trained only with images proves that they are useful for hate speech detection the proposed multimodal models are not able to improve the detection compared to the textual models Besides the different architectures we have tried different 
training strategies, such as initializing the CNN weights with a model already trained solely on MMHS150K images, or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up relying almost exclusively on the text input for the prediction and produce results very similar to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks such as VQA. Next, we analyze why they do not perform well in this task and with this data. Noisy data: a major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this also affects detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection. Complexity and diversity of multimodal relations: multimodal hate speech publications rely on a lot of background knowledge, which makes the relations between the visual and textual elements they use very complex and diverse, and therefore difficult for a neural network to learn. Small set of multimodal examples: Fig FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. Although we have collected a big dataset of 150K tweets, the subset of multimodal hate in it is still too small to learn the complex multimodal relations needed to identify multimodal hate. In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset and the first one composed of multimodal data, namely tweets formed by image and text. We have trained different textual, visual and multimodal models with that data and found that, although images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in social media nowadays is multimodal, we truly believe in the importance of pushing forward this research. The code used in this work is available online.
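To make the textual-kernel fusion (TKM) described above concrete, the following is a minimal PyTorch sketch of that step only. It is not the authors' released implementation: the class and layer names, the default spatial size handling, and the grouped-convolution trick used to apply per-sample kernels are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextualKernelFusion(nn.Module):
    """Sketch of the TKM fusion step: text-conditioned 1x1x2048 kernels are
    convolved with the 8x8x2048 visual feature map, batch-normalized, and the
    two 150-d text encodings are concatenated at every spatial location."""

    def __init__(self, text_dim=150, vis_channels=2048, k_text=10, k_imgtext=5):
        super().__init__()
        self.k_text, self.k_imgtext, self.c = k_text, k_imgtext, vis_channels
        # Independent fully connected layers predict the kernels from each text encoding
        self.fc_text = nn.Linear(text_dim, k_text * vis_channels)
        self.fc_imgtext = nn.Linear(text_dim, k_imgtext * vis_channels)
        self.bn = nn.BatchNorm2d(k_text + k_imgtext)

    def forward(self, vis_map, tweet_text, img_text):
        # vis_map: (B, 2048, 8, 8); tweet_text, img_text: (B, 150)
        b, _, h, w = vis_map.shape
        kt = self.fc_text(tweet_text).view(b, self.k_text, self.c, 1, 1)
        kit = self.fc_imgtext(img_text).view(b, self.k_imgtext, self.c, 1, 1)
        kernels = torch.cat([kt, kit], dim=1).view(-1, self.c, 1, 1)   # (B*15, 2048, 1, 1)
        # Per-sample 1x1 convolutions, implemented as one grouped convolution over the batch
        x = vis_map.reshape(1, b * self.c, h, w)
        fused = F.conv2d(x, kernels, groups=b).view(b, -1, h, w)       # (B, Kt+Kit, 8, 8)
        fused = self.bn(fused)
        # Broadcast and concatenate the two text encodings at each spatial location
        txt = torch.cat([tweet_text, img_text], dim=1)[:, :, None, None].expand(-1, -1, h, w)
        return torch.cat([fused, txt], dim=1)                          # (B, Kt+Kit+300, 8, 8)
```

The resulting multimodal feature map would then be processed by the Inception-E blocks, dropout, average pooling and fully connected layers described earlier for the SCM.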
<s>[INST] Self-Taught Convolutional Neural Networks for Short Text Clustering [/INST] Short text clustering is of great importance due to its various applications such as user profiling BIBREF0 and recommendation BIBREF1 for nowadays social media dataset emerged day by day However short text clustering has the data sparsity problem and most words only occur once in each short text BIBREF2 As a result the Term FrequencyInverse Document Frequency TFIDF measure cannot work well in short text setting In order to address this problem some researchers work on expanding and enriching the context of data from Wikipedia BIBREF3 or an ontology BIBREF4 However these methods involve solid Natural Language Processing NLP knowledge and still use highdimensional representation which may result in a waste of both memory and computation time Another way to overcome these issues is to explore some sophisticated models to cluster short texts For example Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture modelbased approach for short text clustering Yet how to design an effective model is an open question and most of these methods directly trained based on BagofWords BoW are shallow structures which cannot preserve the accurate semantic similarities Recently with the help of word embedding neural networks demonstrate their great performance in terms of constructing text representation such as Recursive Neural Network RecNN BIBREF6 BIBREF7 and Recurrent Neural Network RNN BIBREF8 However RecNN exhibits high time complexity to construct the textual tree and RNN using the hidden layer computed at the last word to represent the text is a biased model where later words are more dominant than earlier words BIBREF9 Whereas for the nonbiased models the learned representation of one text can be extracted from all the words in the text with nondominant learned weights More recently Convolution Neural Network CNN as the most popular nonbiased model and applying convolutional filters to capture local features has achieved a better performance in many NLP applications such as sentence modeling BIBREF10 relation classification BIBREF11 and other traditional NLP tasks BIBREF12 Most of the previous works focus CNN on solving supervised NLP tasks while in this paper we aim to explore the power of CNN on one unsupervised NLP task short text clustering We systematically introduce a simple yet surprisingly powerful SelfTaught Convolutional neural network framework for Short Text Clustering called STC INLINEFORM0 An overall architecture of our proposed approach is illustrated in Figure FIGREF5 We inspired by BIBREF13 BIBREF14 utilize a selftaught learning framework into our task In particular the original raw text features are first embedded into compact binary codes INLINEFORM1 with the help of one traditional unsupervised dimensionality reduction function Then text matrix INLINEFORM2 projected from word embeddings are fed into CNN model to learn the deep feature representation INLINEFORM3 and the output units are used to fit the pretrained binary codes INLINEFORM4 After obtaining the learned features Kmeans algorithm is employed on them to cluster texts into clusters INLINEFORM5 Obviously we call our approach selftaught because the CNN model is learnt from the pseudo labels generated from the previous stage which is quite different from the term selftaught in BIBREF15 Our main contributions can be summarized as follows This work is an extension of our conference paper BIBREF16 and they differ in the following aspects 
First we put forward a general a selftaught CNN framework in this paper which can flexibly couple various semantic features whereas the conference version can be seen as a specific example of this work Second in this paper we use a new short text dataset Biomedical in the experiment to verify the effectiveness of our approach Third we put much effort on studying the influence of various different semantic features integrated in our selftaught CNN framework which is not involved in the conference paper For the purpose of reproducibility we make the datasets and software used in our experiments publicly available at the website The remainder of this paper is organized as follows In Section SECREF2 we first briefly survey several related works In Section SECREF3 we describe the proposed approach STC INLINEFORM0 and implementation details Experimental results and analyses are presented in Section SECREF4 Finally conclusions are given in the last Section In this section we review the related work from the following two perspectives short text clustering and deep neural networks There have been several studies that attempted to overcome the sparseness of short text representation One way is to expand and enrich the context of data For example Banerjee et al BIBREF3 proposed a method of improving the accuracy of short text clustering by enriching their representation with additional features from Wikipedia and Fodeh et al BIBREF4 incorporate semantic knowledge from an ontology into text clustering However these works need solid NLP knowledge and still use highdimensional representation which may result in a waste of both memory and computation time Another direction is to map the original features into reduced space such as Latent Semantic Analysis LSA BIBREF17 Laplacian Eigenmaps LE BIBREF18 and Locality Preserving Indexing LPI BIBREF19 Even some researchers explored some sophisticated models to cluster short texts For example Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture modelbased approach for short text clustering Moreover some studies even focus the above both two streams For example Tang et al BIBREF20 proposed a novel framework which enrich the text features by employing machine translation and reduce the original features simultaneously through matrix factorization techniques Despite the above clustering methods can alleviate sparseness of short text representation to some extent most of them ignore word order in the text and belong to shallow structures which can not fully capture accurate semantic similarities Recently there is a revival of interest in DNN and many researchers have concentrated on using Deep Learning to learn features Hinton and Salakhutdinov BIBREF21 use DAE to learn text representation During the finetuning procedure they use backpropagation to find codes that are good at reconstructing the wordcount vector More recently researchers propose to use external corpus to learn a distributed representation for each word called word embedding BIBREF22 to improve DNN performance on NLP tasks The Skipgram and continuous bagofwords models of Word2vec BIBREF23 propose a simple singlelayer architecture based on the inner product between two word vectors and Pennington et al BIBREF24 introduce a new model for word representation called GloVe which captures the global corpus statistics In order to learn the compact representation vectors of sentences Le and Mikolov BIBREF25 directly extend the previous Word2vec BIBREF23 by predicting words in the sentence which is 
named Paragraph Vector Para2vec Para2vec is still a shallow windowbased method and need a larger corpus to yield better performance More neural networks utilize word embedding to capture true meaningful syntactic and semantic regularities such as RecNN BIBREF6 BIBREF7 and RNN BIBREF8 However RecNN exhibits high time complexity to construct the textual tree and RNN using the layer computed at the last word to represent the text is a biased model Recently Long ShortTerm Memory LSTM BIBREF26 and Gated Recurrent Unit GRU BIBREF27 as sophisticated recurrent hidden units of RNN has presented its advantages in many sequence generation problem such as machine translation BIBREF28 speech recognition BIBREF29 and text conversation BIBREF30 While CNN is better to learn nonbiased implicit features which has been successfully exploited for many supervised NLP learning tasks as described in Section SECREF1 and various CNN based variants are proposed in the recent works such as Dynamic Convolutional Neural Network DCNN BIBREF10 Gated Recursive Convolutional Neural Network grConv BIBREF31 and SelfAdaptive Hierarchical Sentence model AdaSent BIBREF32 In the past few days Visin et al BIBREF33 have attempted to replace convolutional layer in CNN to learn nonbiased features for object recognition with four RNNs called ReNet that sweep over lowerlayer features in different directions 1 bottom to top 2 top to bottom 3 left to right and 4 right to left However ReNet does not outperform stateoftheart convolutional neural networks on any of the three benchmark datasets and it is also a supervised learning model for classification Inspired by Skipgram of word2vec BIBREF34 BIBREF23 Skipthought model BIBREF35 describe an approach for unsupervised learning of a generic distributed sentence encoder Similar as Skipgram model Skipthought model trains an encoderdecoder model that tries to reconstruct the surrounding sentences of an encoded sentence and released an offtheshelf encoder to extract sentence representation Even some researchers introduce continuous Skipgram and negative sampling to CNN for learning visual representation in an unsupervised manner BIBREF36 This paper from a new perspective puts forward a general selftaught CNN framework which can flexibly couple various semantic features and achieve a good performance on one unsupervised learning task short text clustering Assume that we are given a dataset of INLINEFORM0 training texts denoted as INLINEFORM1 where INLINEFORM2 is the dimensionality of the original BoW representation Denote its tag set as INLINEFORM3 and the pretrained word embedding set as INLINEFORM4 where INLINEFORM5 is the dimensionality of word vectors and INLINEFORM6 is the vocabulary size In order to learn the INLINEFORM7 dimensional deep feature representation INLINEFORM8 from CNN in an unsupervised manner some unsupervised dimensionality reduction methods INLINEFORM9 are employed to guide the learning of CNN model Our goal is to cluster these texts INLINEFORM10 into clusters INLINEFORM11 based on the learned deep feature representation while preserving the semantic consistency As depicted in Figure FIGREF5 the proposed framework consist of three components deep convolutional neural network CNN unsupervised dimensionality reduction function and Kmeans module In the rest sections we first present the first two components respectively and then give the trainable parameters and the objective function to learn the deep feature representation Finally the last section describe how to perform 
clustering on the learned features In this section we briefly review one popular deep convolutional neural network Dynamic Convolutional Neural Network DCNN BIBREF10 as an instance of CNN in the following sections which as the foundation of our proposed method has been successfully proposed for the completely supervised learning task text classification Taking a neural network with two convolutional layers in Figure FIGREF9 as an example the network transforms raw input text to a powerful representation Particularly each raw text vector INLINEFORM0 is projected into a matrix representation INLINEFORM1 by looking up a word embedding INLINEFORM2 where INLINEFORM3 is the length of one text We also let INLINEFORM4 and INLINEFORM5 denote the weights of the neural networks The network defines a transformation INLINEFORM6 INLINEFORM7 which transforms an input raw text INLINEFORM8 to a INLINEFORM9 dimensional deep representation INLINEFORM10 There are three basic operations described as follows Wide onedimensional convolution This operation INLINEFORM0 is applied to an individual row of the sentence matrix INLINEFORM1 and yields a resulting matrix INLINEFORM2 where INLINEFORM3 is the width of convolutional filter Folding In this operation every two rows in a feature map are simply summed componentwisely For a map of INLINEFORM0 rows folding returns a map of INLINEFORM1 rows thus halving the size of the representation and yielding a matrix feature INLINEFORM2 Note that folding operation does not introduce any additional parameters Dynamic INLINEFORM0 max pooling Assuming the pooling parameter as INLINEFORM1 INLINEFORM2 max pooling selects the submatrix INLINEFORM3 of the INLINEFORM4 highest values in each row of the matrix INLINEFORM5 For dynamic INLINEFORM6 max pooling the pooling parameter INLINEFORM7 is dynamically selected in order to allow for a smooth extraction of higherorder and longerrange features BIBREF10 Given a fixed pooling parameter INLINEFORM8 for the topmost convolutional layer the parameter INLINEFORM9 of INLINEFORM10 max pooling in the INLINEFORM11 th convolutional layer can be computed as follows DISPLAYFORM0 where INLINEFORM0 is the total number of convolutional layers in the network As described in Figure FIGREF5 the dimensionality reduction function is defined as follows DISPLAYFORM0 where INLINEFORM0 are the INLINEFORM1 dimensional reduced latent space representations Here we take four popular dimensionality reduction methods as examples in our framework Average Embedding AE This method directly averages the word embeddings which are respectively weighted with TF and TFIDF Huang et al BIBREF37 used this strategy as the global context in their task and Socher et al BIBREF7 and Lai et al BIBREF9 used this method for text classification The weighted average of all word vectors in one text can be computed as follows DISPLAYFORM0 where INLINEFORM0 can be any weighting function that captures the importance of word INLINEFORM1 in the text INLINEFORM2 Latent Semantic Analysis LSA LSA BIBREF17 is the most popular global matrix factorization method which applies a dimension reducing linear projection Singular Value Decomposition SVD of the corresponding termdocument matrix Suppose the rank of INLINEFORM0 is INLINEFORM1 LSA decompose INLINEFORM2 into the product of three other matrices DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the singular values of INLINEFORM2 INLINEFORM3 is a set of left singular vectors and INLINEFORM4 is a set of right singular vectors LSA uses the top 
INLINEFORM5 vectors in INLINEFORM6 as the transformation matrix to embed the original text features into a INLINEFORM7 dimensional subspace INLINEFORM8 BIBREF17 Laplacian Eigenmaps LE The top eigenvectors of graph Laplacian defined on the similarity matrix of texts are used in the method which can discover the manifold structure of the text space BIBREF18 In order to avoid storing the dense similarity matrix many approximation techniques are proposed to reduce the memory usage and computational complexity for LE There are two representative approximation methods sparse similarity matrix and Nystr INLINEFORM0 m approximation Following previous studies BIBREF38 BIBREF13 we select the former technique to construct the INLINEFORM1 local similarity matrix INLINEFORM2 by using heat kernel as follows DISPLAYFORM0 where INLINEFORM0 is a tuning parameter default is 1 and INLINEFORM1 represents the set of INLINEFORM2 nearestneighbors of INLINEFORM3 By introducing a diagonal INLINEFORM4 matrix INLINEFORM5 whose entries are given by INLINEFORM6 the graph Laplacian INLINEFORM7 can be computed by INLINEFORM8 The optimal INLINEFORM9 realvalued matrix INLINEFORM10 can be obtained by solving the following objective function DISPLAYFORM0 where INLINEFORM0 is the trace function INLINEFORM1 requires the different dimensions to be uncorrelated and INLINEFORM2 requires each dimension to achieve equal probability as positive or negative Locality Preserving Indexing LPI This method extends LE to deal with unseen texts by approximating the linear function INLINEFORM0 BIBREF13 and the subspace vectors are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the Riemannian manifold BIBREF19 Similar as LE we first construct the local similarity matrix INLINEFORM1 then the graph Laplacian INLINEFORM2 can be computed by INLINEFORM3 where INLINEFORM4 measures the local density around INLINEFORM5 and is equal to INLINEFORM6 Compute the eigenvectors INLINEFORM7 and eigenvalues INLINEFORM8 of the following generalized eigenproblem DISPLAYFORM0 The mapping function INLINEFORM0 can be obtained and applied to the unseen data BIBREF38 All of the above methods claim a better performance in capturing semantic similarity between texts in the reduced latent space representation INLINEFORM0 than in the original representation INLINEFORM1 while the performance of short text clustering can be further enhanced with the help of our framework selftaught CNN The last layer of CNN is an output layer as follows DISPLAYFORM0 where INLINEFORM0 is the deep feature representation INLINEFORM1 is the output vector and INLINEFORM2 is weight matrix In order to incorporate the latent semantic features INLINEFORM0 we first binary the realvalued vectors INLINEFORM1 to the binary codes INLINEFORM2 by setting the threshold to be the media vector INLINEFORM3 Then the output vector INLINEFORM4 is used to fit the binary codes INLINEFORM5 via INLINEFORM6 logistic operations as follows DISPLAYFORM0 All parameters to be trained are defined as INLINEFORM0 DISPLAYFORM0 Given the training text collection INLINEFORM0 and the pretrained binary codes INLINEFORM1 the log likelihood of the parameters can be written down as follows DISPLAYFORM0 Following the previous work BIBREF10 we train the network with minibatches by backpropagation and perform the gradientbased optimization using the Adagrad update rule BIBREF39 For regularization we employ dropout with 50 rate to the penultimate layer BIBREF10 BIBREF40 
With the given short texts we first utilize the trained deep neural network to obtain the semantic representations INLINEFORM0 and then employ traditional Kmeans algorithm to perform clustering We test our proposed approach on three public short text datasets The summary statistics and semantic topics of these datasets are described in Table TABREF24 and Table TABREF25 SearchSnippets This dataset was selected from the results of web search transaction using predefined phrases of 8 different domains by Phan et al BIBREF41 StackOverflow We use the challenge data published in Kagglecom The raw dataset consists 3370528 samples through July 31st 2012 to August 14 2012 In our experiments we randomly select 20000 question titles from 20 different tags as in Table TABREF25 Biomedical We use the challenge data published in BioASQs official website In our experiments we randomly select 20 000 paper titles from 20 different MeSH major topics as in Table TABREF25 As described in Table TABREF24 the max length of selected paper titles is 53 For these datasets we randomly select 10 of data as the development set Since SearchSnippets has been preprocessed by Phan et al BIBREF41 we do not further process this dataset In StackOverflow texts contain lots of computer terminology and symbols and capital letters are meaningful thus we do not do any preprocessed procedures For Biomedical we remove the symbols and convert letters into lower case We use the publicly available word2vec tool to train word embeddings and the most parameters are set as same as Mikolov et al BIBREF23 to train word vectors on Google News setting except of vector dimensionality using 48 and minimize count using 5 For SearchSnippets we train word vectors on Wikipedia dumps For StackOverflow we train word vectors on the whole corpus of the StackOverflow dataset described above which includes the question titles and post contents For Biomedical we train word vectors on all titles and abstracts of 2014 training articles The coverage of these learned vectors on three datasets are listed in Table TABREF32 and the words not present in the set of pretrained words are initialized randomly In our experiment some widely used text clustering methods are compared with our approach Besides Kmeans Skipthought Vectors Recursive Neural Network and Paragraph Vector based clustering methods four baseline clustering methods are directly based on the popular unsupervised dimensionality reduction methods as described in Section SECREF11 We further compare our approach with some other nonbiased neural networks such as bidirectional RNN More details are listed as follows Kmeans Kmeans BIBREF42 on original keyword features which are respectively weighted with term frequency TF and term frequencyinverse document frequency TFIDF Skipthought Vectors SkipVec This baseline BIBREF35 gives an offtheshelf encoder to produce highly generic sentence representations The encoder is trained using a large collection of novels and provides three encoder modes that are unidirectional encoder SkipVec Uni with 2400 dimensions bidirectional encoder SkipVec Bi with 2400 dimensions and combined encoder SkipVec Combine with SkipVec Uni and SkipVec Bi of 2400 dimensions each Kmeans is employed on the these vector representations respectively Recursive Neural Network RecNN In BIBREF6 the tree structure is firstly greedy approximated via unsupervised recursive autoencoder Then semisupervised recursive autoencoders are used to capture the semantics of texts based on the predicted 
structure In order to make this recursivebased method completely unsupervised we remove the crossentropy error in the second phrase to learn vector representation and subsequently employ Kmeans on the learned vectors of the top tree node and the average of all vectors in the tree Paragraph Vector Para2vec Kmeans on the fixed size feature vectors generated by Paragraph Vector Para2vec BIBREF25 which is an unsupervised method to learn distributed representation of words and paragraphs In our experiments we use the open source software released by Mesnil et al BIBREF43 Average Embedding AE Kmeans on the weighted average vectors of the word embeddings which are respectively weighted with TF and TFIDF The dimension of average vectors is equal to and decided by the dimension of word vectors used in our experiments Latent Semantic Analysis LSA Kmeans on the reduced subspace vectors generated by Singular Value Decomposition SVD method The dimension of subspace is default set to the number of clusters we also iterate the dimensions ranging from 1010200 to get the best performance that is 10 on SearchSnippets 20 on StackOverflow and 20 on Biomedical in our experiments Laplacian Eigenmaps LE This baseline using Laplacian Eigenmaps and subsequently employing Kmeans algorithm is well known as spectral clustering BIBREF44 The dimension of subspace is default set to the number of clusters BIBREF18 BIBREF38 we also iterate the dimensions ranging from 1010200 to get the best performance that is 20 on SearchSnippets 70 on StackOverflow and 30 on Biomedical in our experiments Locality Preserving Indexing LPI This baseline projecting the texts into a lower dimensional semantic space can discover both the geometric and discriminating structures of the original feature space BIBREF38 The dimension of subspace is default set to the number of clusters BIBREF38 we also iterate the dimensions ranging from 1010200 to get the best performance that is 20 on SearchSnippets 80 on StackOverflow and 30 on Biomedical in our experiments bidirectional RNN biRNN We replace the CNN model in our framework as in Figure FIGREF5 with some biRNN models Particularly LSTM and GRU units are used in the experiments In order to generate the fixedlength document representation from the variablelength vector sequences for both biLSTM and biGRU based clustering methods we further utilize three pooling methods last pooling using the last hidden state mean pooling and elementwise max pooling These pooling methods are respectively used in the previous works BIBREF45 BIBREF27 BIBREF46 and BIBREF9 For regularization the training gradients of all parameters with an INLINEFORM0 2 norm larger than 40 are clipped to 40 as the previous work BIBREF47 The clustering performance is evaluated by comparing the clustering results of texts with the tagslabels provided by the text corpus Two metrics the accuracy ACC and the normalized mutual information metric NMI are used to measure the clustering performance BIBREF38 BIBREF48 Given a text INLINEFORM0 let INLINEFORM1 and INLINEFORM2 be the obtained cluster label and the label provided by the corpus respectively Accuracy is defined as DISPLAYFORM0 where INLINEFORM0 is the total number of texts INLINEFORM1 is the indicator function that equals one if INLINEFORM2 and equals zero otherwise and INLINEFORM3 is the permutation mapping function that maps each cluster label INLINEFORM4 to the equivalent label from the text data by Hungarian algorithm BIBREF49 Normalized mutual information BIBREF50 between taglabel 
set INLINEFORM0 and cluster set INLINEFORM1 is a popular metric used for evaluating clustering tasks It is defined as follows DISPLAYFORM0 where INLINEFORM0 is the mutual information between INLINEFORM1 and INLINEFORM2 INLINEFORM3 is entropy and the denominator INLINEFORM4 is used for normalizing the mutual information to be in the range of 0 1 The most of parameters are set uniformly for these datasets Following previous study BIBREF38 the number of nearest neighbors in Eqn EQREF15 is fixed to 15 when constructing the graph structures for LE and LPI For CNN model the networks has two convolutional layers The widths of the convolutional filters are both 3 The value of INLINEFORM0 for the top INLINEFORM1 max pooling in Eqn EQREF10 is 5 The number of feature maps at the first convolutional layer is 12 and 8 feature maps at the second convolutional layer Both those two convolutional layers are followed by a folding layer We further set the dimension of word embeddings INLINEFORM2 as 48 Finally the dimension of the deep feature representation INLINEFORM3 is fixed to 480 Moreover we set the learning rate INLINEFORM4 as 001 and the minibatch training size as 200 The output size INLINEFORM5 in Eqn EQREF19 is set same as the best dimensions of subspace in the baseline method as described in Section SECREF37 For initial centroids have significant impact on clustering results when utilizing the Kmeans algorithms we repeat Kmeans for multiple times with random initial centroids specifically 100 times for statistical significance as Huang BIBREF48 The all subspace vectors are normalized to 1 before applying Kmeans and the final results reported are the average of 5 trials with all clustering methods on three text datasets In Table TABREF43 and Table TABREF44 we report the ACC and NMI performance of our proposed approaches and four baseline methods Kmeans SkipVec RecNN and Para2vec based clustering methods Intuitively we get a general observation that 1 BoW based approaches including Kmeans TF and Kmeans TFIDF and SkipVec based approaches perform not well 2 RecNN based approaches both RecNN Ave and RecNN TopAve do better 3 Para2vec makes a comparable performance with the most baselines and 4 the evaluation clearly demonstrate the superiority of our proposed methods STC INLINEFORM0 It is an expected results For SkipVec based approaches the offtheshelf encoders are trained on the BookCorpus datasets BIBREF51 and then applied to our datasets to extract the sentence representations The SkipVec encoders can produce generic sentence representations but may not perform well for specific datasets in our experiments StackOverflow and Biomedical datasets consist of many computer terms and medical terms such as ASPNET XML C serum and glycolytic When we take a more careful look we find that RecNN Top does poorly even worse than Kmeans TFIDF The reason maybe that although recursive neural models introduce tree structure to capture compositional semantics the vector of the top node mainly captures a biased semantic while the average of all vectors in the tree nodes such as RecNN Ave can be better to represent sentence level semantic And we also get another observation that although our proposed STC INLINEFORM1 LE and STC INLINEFORM2 LPI outperform both BoW based and RecNN based approaches across all three datasets STC INLINEFORM3 AE and STC INLINEFORM4 LSA do just exhibit some similar performances as RecNN Ave and RecNN TopAve do in the datasets of StackOverflow and Biomedical We further replace the CNN model in our 
framework as in Figure FIGREF5 with some other nonbiased models such as biLSTM and biGRU and report the results in Table TABREF46 and Table TABREF47 As an instance the binary codes generated from LPI are used to guide the learning of biLSTMbiGRU models From the results we can see that biGRU and biLSTM based clustering methods do equally well no clear winner and both achieve great enhancements compared with LPI best Compared with these biLSTMbiGRU based models the evaluation results still demonstrate the superiority of our approach methods CNN based clustering model in the most cases As the results reported by Visin et al BIBREF33 despite bidirectional or multidirectional RNN models perform a good nonbiased feature extraction they yet do not outperform stateoftheart CNN on some tasks In order to make clear what factors make our proposed method work we report the bar chart results of ACC and MNI of our proposed methods and the corresponding baseline methods in Figure FIGREF49 and Figure FIGREF53 It is clear that although AE and LSA does well or even better than LE and LPI especially in dataset of both StackOverflow and Biomedical STC INLINEFORM0 LE and STC INLINEFORM1 LPI achieve a much larger performance enhancements than STC INLINEFORM2 AE and STC INLINEFORM3 LSA do The possible reason is that the information the pseudo supervision used to guide the learning of CNN model that make difference Especially for AE case the input features fed into CNN model and the pseudo supervision employed to guide the learning of CNN model are all come from word embeddings There are no different semantic features to be used into our proposed method thus the performance enhancements are limited in STC INLINEFORM4 AE For LSA case as we known LSA is to make matrix factorization to find the best subspace approximation of the original feature space to minimize the global reconstruction error And as BIBREF24 BIBREF52 recently point out that word embeddings trained with word2vec or some variances is essentially to do an operation of matrix factorization Therefore the information between input and the pseudo supervision in CNN is not departed very largely from each other and the performance enhancements of STC INLINEFORM5 AE is also not quite satisfactory For LE and LPI case as we known that LE extracts the manifold structure of the original feature space and LPI extracts both geometric and discriminating structure of the original feature space BIBREF38 We guess that our approach STC INLINEFORM6 LE and STC INLINEFORM7 LPI achieve enhancements compared with both LE and LPI by a large margin because both of LE and LPI get useful semantic features and these features are also different from word embeddings used as input of CNN From this view we say that our proposed STC has potential to behave more effective when the pseudo supervision is able to get semantic meaningful features which is different enough from the input of CNN Furthermore from the results of Kmeans and AE in Table TABREF43 TABREF44 and Figure FIGREF49 FIGREF53 we note that TFIDF weighting gives a more remarkable improvement for Kmeans while TF weighting works better than TFIDF weighting for Average Embedding Maybe the reason is that pretrained word embeddings encode some useful information from external corpus and are able to get even better results without TFIDF weighting Meanwhile we find that LE get quite unusual good performance than LPI LSA and AE in SearchSnippets dataset which is not found in the other two datasets To get clear about this and also 
to make a better demonstration of our proposed approaches and the other baselines, we further report 2-dimensional text embeddings on SearchSnippets in Figure FIGREF58, using t-SNE BIBREF53 to obtain a distributed stochastic neighbor embedding of the feature representations used in the clustering methods. We can see that the results from AE and LSA seem to be fairly good, or even better than the ones from LE and LPI, which does not agree with the ACC and NMI results in Figures FIGREF49 and FIGREF53. Meanwhile, RecNN (Ave) performs better than BoW (both TF and TFIDF) while RecNN (Top) does not, which does agree with the ACC and NMI results in Table TABREF43 and Table TABREF44. We conjecture that these agreements and disagreements illustrate that a visualization tool such as t-SNE captures information about the results that differs from what ACC and NMI measure. Moreover, from this complementary t-SNE view, we can see that our STC-AE, STC-LSA, STC-LE and STC-LPI show more clear-cut margins among the different semantic topics (that is, tags/labels) than AE, LSA, LE and LPI, respectively, as well as than both the BoW and RecNN based baselines. From all these results, with the three measures of ACC, NMI and t-SNE on three datasets, we can draw a solid conclusion that our proposed approaches are effective at obtaining useful semantic features for short text clustering. With the emergence of social media, short text clustering has become an increasingly important task. This paper explores a new perspective on clustering short texts, based on deep feature representations learned by the proposed self-taught convolutional neural networks. Our framework can be applied successfully without using any external tags/labels or complicated NLP preprocessing, and it is flexible in that traditional dimension reduction approaches can be plugged in to obtain performance enhancements. Our extensive experimental study on three short text datasets shows that our approach can achieve significantly better performance. In the future, how to select and incorporate more effective semantic features into the proposed framework will call for more research. We would like to thank the reviewers for their comments and acknowledge Kaggle and BioASQ for making the datasets available. This work is supported by the National Natural Science Foundation of China (No. 61602479, No. 61303172, No. 61403385) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB02070005).
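For concreteness, here is a minimal sketch of the two non-neural stages of the self-taught pipeline described in this paper: binarizing the reduced representations at the per-dimension median to obtain the pseudo-targets the CNN output fits, and running K-means on the learned deep features. The use of NumPy and scikit-learn, the function names, and the omission of the CNN training loop itself are our assumptions, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def binary_pseudo_targets(Y):
    """Binarize the reduced representations Y (n_texts x q) produced by
    AE/LSA/LE/LPI at the per-dimension median; the CNN's sigmoid outputs
    are then trained to fit these binary codes."""
    return (Y > np.median(Y, axis=0, keepdims=True)).astype(np.float32)

def cluster_deep_features(H, n_clusters, n_init=100):
    """K-means on the learned deep representations H (n_texts x 480),
    with each vector L2-normalized and many random restarts, as in the
    experimental setup reported above."""
    H = normalize(H)
    km = KMeans(n_clusters=n_clusters, n_init=n_init)
    return km.fit_predict(H)
```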
<s>[INST] Solving Arithmetic Word Problems Automatically Using Transformer and Unambiguous Representations [/INST] Students are exposed to simple arithmetic word problems starting in elementary school and most become proficient in solving them at a young age Automatic solvers of such problems could potentially help educators as well as become an integral part of general question answering services However it has been challenging to write programs to solve even such elementary school level problems well Solving a math word problem MWP starts with one or more sentences describing a transactional situation to be understood The sentences are processed to produce an arithmetic expression which is evaluated to provide an answer Recent neural approaches to solving arithmetic word problems have used various flavors of recurrent neural networks RNN as well as reinforcement learning Such methods have had difficulty achieving a high level of generalization Often systems extract the relevant numbers successfully but misplace them in the generated expressions More problematic they get the arithmetic operations wrong The use of infix notation also requires pairs of parentheses to be placed and balanced correctly bracketing the right numbers There have been problems with parentheses placement as well Correctly extracting the numbers in the problem is necessary Figure FIGREF1 gives examples of some infix representations that a machine learning solver can potentially produce from a simple word problem using the correct numbers Of the expressions shown only the first one is correct After carefully observing expressions that actual problem solvers have generated we want to explore if the use of infix notation may itself be a part of the problem because it requires the generation of additional characters the open and close parentheses which must be balanced and placed correctly The actual numbers appearing in MWPs vary widely from problem to problem Real numbers take any conceivable value making it almost impossible for a neural network to learn representations for them As a result trained programs sometimes generate expressions that have seemingly random numbers For example in some runs a trained program could generate a potentially inexplicable expression such as 2501 4 9 for the problem given in Figure FIGREF1 with one or more numbers not in the problem sentences We hypothesize that replacing the numbers in the problem statement with generic tags like rm langle n1 rangle rm langle n2 rangle and rm langle n3 rangle and saving their values as a preprocessing step does not take away from the generality of the solution but suppresses the problem of fertility in number generation leading to the introduction of numbers not present in the question sentences Another idea we want to test is whether a neural network which has been pretrained to acquire language knowledge is better able to understand the problem sentences Pretraining with a large amount of arithmeticrelated text is likely to help develop such knowledge but due to the lack of large such focused corpora we want to test whether pretraining with a sufficient general corpus is beneficial In this paper we use the Transformer model BIBREF0 to solve arithmetic word problems as a particular case of machine translation from text to the language of arithmetic expressions Transformers in various configurations have become a staple of NLP in the past two years Past neural approaches did not treat this problem as pure translation like we do and additionally these 
approaches usually augmented the neural architectures with various external modules such as parse trees or used deep reinforcement learning which we do not do In this paper we demonstrate that Transformers can be used to solve MWPs successfully with the simple adjustments we describe above We compare performance on four individual datasets In particular we show that our translationbased approach outperforms stateoftheart results reported by BIBREF1 BIBREF2 BIBREF3 BIBREF4 BIBREF5 by a large margin on three of four datasets tested On average our best neural architecture outperforms previous results by almost 10 although our approach is conceptually more straightforward We organize our paper as follows The second section presents related work Then we discuss our approach We follow by an analysis of experimental results and compare them to those of other recent approaches We also discuss our successes and shortcomings Finally we share our concluding thoughts and end with our direction for future work Past strategies have used rules and templates to match sentences to arithmetic expressions Some such approaches seemed to solve problems impressively within a narrow domain but performed poorly when out of domain lacking generality BIBREF6 BIBREF7 BIBREF8 BIBREF9 Kushman et al BIBREF3 used feature extraction and templatebased categorization by representing equations as expression forests and finding a near match Such methods required human intervention in the form of feature engineering and development of templates and rules which is not desirable for expandability and adaptability Hosseini et al BIBREF2 performed statistical similarity analysis to obtain acceptable results but did not perform well with texts that were dissimilar to training examples Existing approaches have used various forms of auxiliary information Hosseini et al BIBREF2 used verb categorization to identify important mathematical cues and contexts Mitra and Baral BIBREF10 used predefined formulas to assist in matching KoncelKedziorski et al BIBREF11 parsed the input sentences enumerated all parses and learned to match requiring expensive computations Roy and Roth BIBREF12 performed searches for semantic trees over large spaces Some recent approaches have transitioned to using neural networks Semantic parsing takes advantage of RNN architectures to parse MWPs directly into equations or expressions in a mathspecific language BIBREF9 BIBREF13 RNNs have shown promising results but they have had difficulties balancing parenthesis and also sometimes incorrectly choose numbers when generating equations Rehman et al BIBREF14 used POS tagging and classification of equation templates to produce systems of equations from thirdgrade level MWPs Most recently Sun et al BIBREF13 used a BiDirectional LSTM architecture for math word problems Huang et al BIBREF15 used a deep reinforcement learning model to achieve character placement in both seen and novel equation templates Wang et al BIBREF1 also used deep reinforcement learning We view math word problem solving as a sequencetosequence translation problem RNNs have excelled in sequencetosequence problems such as translation and question answering The recent introduction of attention mechanisms has improved the performance of RNN models Vaswani et al BIBREF0 introduced the Transformer network which uses stacks of attention layers instead of recurrence Applications of Transformers have achieved stateoftheart performance in many NLP tasks We use this architecture to produce character sequences 
that are arithmetic expressions The models we experiment with are easy and efficient to train allowing us to test several configurations for a comprehensive comparison We use several configurations of Transformer networks to learn the prefix postfix and infix notations of MWP equations independently Prefix and postfix representations of equations do not contain parentheses which has been a source of confusion in some approaches If the learned target sequences are simple with fewer characters to generate it is less likely to make mistakes during generation Simple targets also may help the learning of the model to be more robust Experimenting with all three representations for equivalent expressions may help us discover which one works best We train on standard datasets which are readily available and commonly used Our method considers the translation of English text to simple algebraic expressions After performing experiments by training directly on math word problem corpora we perform a different set of experiments by pretraining on a general language corpus The success of pretrained models such as ELMo BIBREF16 GPT2 BIBREF17 and BERT BIBREF18 for many natural language tasks provides reasoning that pretraining is likely to produce better learning by our system We use pretraining so that the system has some foundational knowledge of English before we train it on the domainspecific text of math word problems However the output is not natural language but algebraic expressions which is likely to limit the effectiveness of such pretraining We work with four individual datasets The datasets contain addition subtraction multiplication and division word problems AI2 BIBREF2 AI2 is a collection of 395 addition and subtraction problems containing numeric values where some may not be relevant to the question CC BIBREF19 The Common Core dataset contains 600 2step questions The Cognitive Computation Group at the University of Pennsylvania gathered these questions IL BIBREF4 The Illinois dataset contains 562 1step algebra word questions The Cognitive Computation Group compiled these questions also MAWPS BIBREF20 MAWPS is a relatively large collection primarily from other MWP datasets We use 2373 of 3915 MWPs from this set The problems not used were more complex problems that generate systems of equations We exclude such problems because generating systems of equations is not our focus We take a randomly sampled 95 of examples from each dataset for training From each dataset MWPs not included in training make up the testing data used when generating our results Training and testing are repeated three times and reported results are an average of the three outcomes We take a simple approach to convert infix expressions found in the MWPs to the other two representations Two stacks are filled by iterating through string characters one with operators found in the equation and the other with the operands From these stacks we form a binary tree structure Traversing an expression tree in preorder results in a prefix conversion Postorder traversal gives us a postfix expression Three versions of our training and testing data are created to correspond to each type of expression By training on different representations we expect our test results to change We pretrain half of our networks to endow them with a foundational knowledge of English Pretraining models on significantsized language corpora have been a common approach recently We explore the pretraining approach using a general English corpus because the 
language of MWPs is regular English interspersed with numerical values Ideally the corpus for pretraining should be a very general and comprehensive corpus like an English Wikipedia dump or many gigabytes of humangenerated text scraped from the internet like GPT2 BIBREF21 used However in this paper we want to perform experiments to see if pretraining with a smaller corpus can help In particular for this task we use the IMDb Movie Reviews dataset BIBREF22 This set contains 314041 unique sentences Since movie reviewers wrote this data it is a reference to natural language not related to arithmetic Training on a much bigger and general corpus may make the language model stronger but we leave this for future work We compare pretrained models to nonpretrained models to observe performance differences Our pretrained models are trained in an unsupervised fashion to improve the encodings of our finetuned solvers In the pretraining process we use sentences from the IMDb reviews with a target output of an empty string We leave the input unlabelled which focuses the network on adjusting encodings while providing unbiased decoding when we later change from IMDb English text to MWPData The input sequence is a natural language specification of an arithmetic word problem The MWP questions and equations have been encoded using the subword text encoder provided by the TensorFlow Datasets library The output is an expression in prefix infix or postfix notation which then can be manipulated further and solved to obtain a final answer All examples in the datasets contain numbers some of which are unique or rare in the corpus Rare terms are adverse for generalization since the network is unlikely to form good representations for them As a remedy to this issue our networks do not consider any relevant numbers during training Before the networks attempt any translation we preprocess each question and expression by a number mapping algorithm This algorithm replaces each numeric value with a corresponding identifier eg langle n1 rangle langle n2 rangle etc and remembers the necessary mapping We expect that this approach may significantly improve how networks interpret each question When translating the numbers in the original question are tagged and cached From the encoded English and tags a predicted sequence resembling an expression presents itself as output Since each networks learned output resembles an arithmetic expression eg langle n1 rangle langle n2 rangle langle n3 rangle we use the cached tag mapping to replace the tags with the corresponding numbers and return a final mathematical expression Three representation models are trained and tested separately PrefixTransformer PostfixTransformer and InfixTransformer For each experiment we use representationspecific Transformer architectures Each model uses the Adam optimizer with beta1095 and beta2099 with a standard epsilon of 1 times e9 The learning rate is reduced automatically in each training session as the loss decreases Throughout the training each model respects a 10 dropout rate We employ a batch size of 128 for all training Each model is trained on MWP data for 300 iterations before testing The networks are trained on a machine using 1 Nvidia 1080 Ti graphics processing unit GPU We compare mediumsized small and minimal networks to show if network size can be reduced to increase training and testing efficiency while retaining high accuracy Networks over six layers have shown to be noneffective for this task We tried many configurations of our network 
models but report results with only three configurations of Transformers Transformer Type 1 This network is a small to mediumsized network consisting of 4 Transformer layers Each layer utilizes 8 attention heads with a depth of 512 and a feedforward depth of 1024 Transformer Type 2 The second model is small in size using 2 Transformer layers The layers utilize 8 attention heads with a depth of 256 and a feedforward depth of 1024 Transformer Type 3 The third type of model is minimal using only 1 Transformer layer This network utilizes 8 attention heads with a depth of 256 and a feedforward depth of 512 We calculate the loss in training according to a mean of the sparse categorical crossentropy formula Sparse categorical crossentropy BIBREF23 is used for identifying classes from a feature set which assumes a large target classification set Evaluation between the possible translation classes all vocabulary subword tokens and the produced class predicted token is the metric of performance here During each evaluation target terms are masked predicted and then compared to the masked known value We adjust the models loss according to the mean of the translation accuracy after predicting every determined subword in a translation where K Translation Classes J Translation and I is the number of examples Some of the problems encountered by prior approaches seem to be attributable to the use of infix notation In this experiment we compare translation BLEU2 scores to spot the differences in representation interpretability Traditionally a BLEU score is a metric of translation quality BIBREF24 Our presented BLEU scores represent an average of scores a given model received over each of the target test sets We use a standard bigram weight to show how accurate translations are within a window of two adjacent terms After testing translations we calculate an average BLEU2 score per test set which is related to the success over that data An average of the scores for each dataset become the presented value where N is the number of test datasets which is 4 This experiment compares our networks to recent previous work We count a given test score by a simple correct versus incorrect method The answer to an expression directly ties to all of the translation terms being correct which is why we do not consider partial precision We compare average accuracies over 3 test trials on different randomly sampled test sets from each MWP dataset This calculation more accurately depicts the generalization of our networks We also explore the effect of language pretraining as discussed earlier This training occurs over 30 iterations at the start of the two experiments to introduce a good level of language understanding before training on the MWP data The same Transformer architectures are also trained solely on the MWP data We calculate the reported results as where R is the number of test repetitions which is 3 N is the number of test datasets which is 4 P is the number of MWPs and C is the number of correct equation translations We now present the results of our various experiments We compare the three representations of target equations and three architectures of the Transformer model in each test Results of Experiment 1 are given in Table TABREF21 For clarity the number in parentheses in front of a row is the Transformer type By using BLEU scores we assess the translation capability of each network This test displays how networks transform different math representations to a character summary level We compare by average 
BLEU2 accuracy among our tests in the Average column of Table TABREF21 to communicate these translation differences To make it easier to understand the results Table TABREF22 provides a summary of Table TABREF21 Looking at Tables TABREF21 and TABREF22 we note that both the prefix and postfix representations of our target language perform better than the generally used infix notation The nonpretrained models perform slightly better than the pretrained models and the small or Type 2 models perform slightly better than the minimalsized and mediumsized Transformer models The nonpretrained type 2 prefix Transformer arrangement produced the most consistent translations Table TABREF23 provides detailed results of Experiment 2 The numbers are absolute accuracies ie they correspond to cases where the arithmetic expression generated is 100 correct leading to the correct numeric answer Results by BIBREF1 BIBREF2 BIBREF4 BIBREF5 are sparse but indicate the scale of success compared to recent past approaches Prefix postfix and infix representations in Table TABREF23 show that network capabilities are changed by how teachable the target data is The values in the last column of Table TABREF23 are summarized in Table TABREF24 How the models compare with respect to accuracy closely resembles the comparison of BLEU scores presented earlier Thus BLEU scores seem to correlate well with accuracy values in our case While our networks fell short of BIBREF1 AI2 testing accuracy we present stateoftheart results for the remaining three datasets The AI2 dataset is tricky because it has numeric values in the word descriptions that are extraneous or irrelevant to the actual computation whereas the other datasets have only relevant numeric values The type 2 postfix Transformer received the highest testing average of 872 Our attempt at language pretraining fell short of our expectations in all but one tested dataset We had hoped that more stable language understanding would improve results in general As previously mentioned using more general and comprehensive corpora of language could help grow semantic ability All of the network configurations used were very successful for our task The prefix representation overall provides the most stable network performance To display the capability of our most successful model type 2 postfix Transformer we present some outputs of the network in Figure FIGREF26 The models respect the syntax of math expressions even when incorrect For the majority of questions our translators were able to determine operators based solely on the context of language Our pretraining was unsuccessful in improving accuracy even when applied to networks larger than those reported We may need to use more inclusive language or pretrain on very math specific texts to be successful Our results support our thesis of infix limitation Our system while performing above standard could still benefit from some improvements One issue originates from the algorithmic preprocessing of our questions and expressions In Figure FIGREF27 we show an example of one such issue The excerpt comes from a type 3 nonpretrained Transformer test The example shows an overlooked identifier langle n1 rangle The issue is attributed to the identifier algorithm only considering numbers in the problem Observe in the question that the word eight is the number we expect to relate to langle n2 rangle Our identifying algorithm could be improved by considering such number words and performing conversion to a numerical value If our algorithm 
performed as expected the identifier langle n1 rangle relates to 4 the first occurring number in the question and langle n2 rangle to 8 the converted number word appearing second in the question The overall translation was incorrect whether or not our algorithm was successful but it is essential to analyze problems like these that may result in future improvements Had all questions been tagged correctly our performance would have likely improved In this paper we have shown that the use of Transformer networks improves automatic math word problemsolving We have also shown that the use of postfix target expressions performs better than the other two expression formats Our improvements are wellmotivated but straightforward and easy to use demonstrating that the wellacclaimed Transformer architecture for language processing can handle MWPs well obviating the need to build specialized neural architectures for this task Extensive pretraining over much larger corpora of language has extended the capabilities of many neural approaches For example networks like BERT BIBREF18 trained extensively on data from Wikipedia perform relatively better in many tasks Pretraining on a much larger corpus remains an extension we would like to try We want to work with more complex MWP datasets Our datasets contain basic arithmetic expressions built from elementary operators and only up to 3 of them For example datasets such as Dolphin18k BIBREF25 consisting of webanswered questions from Yahoo Answers require a wider variety of arithmetic operators to be understood by the system We have noticed that the presence of irrelevant numbers in the sentences for MWPs limits our performance We can think of such numbers as a sort of adversarial threat to an MWP solver that stresstests it It may be interesting to explore how to keep a networks performance high even in such cases With a hope to further advance this area of research and heighten interest all of the code and data used is available on GitHub The National Science Foundation supports the work reported in this paper under Grant No 1659788 Any opinions findings conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of the National Science Foundation </s>
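To make the number-mapping preprocessing described in the preceding section concrete, here is a minimal Python sketch of the tagging and restoration steps, extended to handle spelled-out number words such as eight as suggested in the error analysis above. The function names, the angle-bracket tag format, and the small word-to-digit table are illustrative assumptions of ours and are not taken from the authors' released code.

```python
import re

# Hypothetical helper table: spelled-out number words mapped to digits
# (only a few entries shown; a real system would use a fuller converter).
WORD_NUMBERS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5",
                "six": "6", "seven": "7", "eight": "8", "nine": "9", "ten": "10"}

def tag_numbers(question):
    """Replace numerals (and simple number words) with identifiers <n1>, <n2>, ...
    and cache the mapping so the tags can be restored after translation."""
    mapping, counter, tagged = {}, 0, []
    for tok in question.split():
        word = tok.lower().strip(".,?!")
        if re.fullmatch(r"\d+(\.\d+)?", word) or word in WORD_NUMBERS:
            counter += 1
            tag = f"<n{counter}>"
            mapping[tag] = WORD_NUMBERS.get(word, word)
            tagged.append(tag)
        else:
            tagged.append(tok)
    return " ".join(tagged), mapping

def restore_numbers(expression, mapping):
    """Put the cached numeric values back into a predicted expression."""
    for tag, value in mapping.items():
        expression = expression.replace(tag, value)
    return expression

question = "Tom had 4 apples and then bought eight more"
tagged, mapping = tag_numbers(question)
# tagged -> "Tom had <n1> apples and then bought <n2> more"
prediction = "+ <n1> <n2>"  # a prefix-notation output from the translator
print(restore_numbers(prediction, mapping))  # -> "+ 4 8"
```

With this word-aware variant the example discussed above would tag both 4 and eight, which is the behaviour the authors identify as a likely source of improvement.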
<s>[INST] What Do You Mean I'm Funny? Personalizing the Joke Skill of a Voice-Controlled Virtual Assistant [/INST] Voicecontrolled virtual assistants VVA such as Siri and Alexa have experienced an exponential growth in terms of number of users and provided capabilities They are used by millions for a variety of tasks including shopping playing music and even telling jokes Arguably their success is due in part to the emotional and personalized experience they provide One important aspect of this emotional interaction is humor a fundamental element of communication Not only can it create in the user a sense of personality but also be used as fallback technique for outofdomain queries BIBREF0 Usually a VVAs humorous responses are invoked by users with the phrase Tell me a joke In order to improve the joke experience and overall user satisfaction with a VVA we propose to personalize the response to each request To achieve this a method should be able to recognize and evaluate humor a challenging task that has been the focus of extensive work Some authors have applied traditional NLP techniques BIBREF1 while others deep learning models BIBREF2 Moreover BIBREF3 follows a semanticbased approach while BIBREF4 and BIBREF5 tackle the challenge from a cognitive and linguistic perspective respectively To this end we have developed two methods The first one is based on traditional NLP techniques Although relatively simple it is robust scalable and has low latency a fundamental property for realtime VVA systems The other approaches combine multitask learning BIBREF6 and selfattentional networks BIBREF7 to obtain better results at the cost of added complexity Both BERT BIBREF8 and an adapted transformer BIBREF7 architecture are considered This choice of architecture was motivated by the advantages it presents over traditional RNN and CNN models including better performance BIBREF9 faster traininginference important for realtime systems and better sense disambiguation BIBREF10 an important component of computational humor BIBREF3 The proposed models use binary classifiers to perform pointwise ranking and therefore require a labelled dataset To generate it we explore two implicit userfeedback labelling strategies fiveminute reuse and oneday return Online AB testing is used to determine if these labelling strategies are suited to optimize the desired usersatisfaction metrics and offline data to evaluated and compared the systems performance Generating labels for this VVA skill is challenging Label generation through explicit user feedback is unavailable since asking users for feedback creates friction and degrade the user experience In addition available humor datasets such as BIBREF3 BIBREF11 only contain jokes and corresponding labels but not the additional features we need to personalize the jokes To overcome this difficulty it is common to resort to implicit feedback In particular many VVA applications use interruptions as negative labels the rationale being that unhappy users will stop the VVA This strategy however is not suitable for our usecase since responses are short and users need to hear the entire joke to decide if it is funny Instead we explore two other implicit feedback labelling strategies fiveminute reuse and 1day return Fiveminute reuse labels an instance positive if it was followed by a new joke request within fiveminutes Conversely 1day return marks as positive all joke requests that were followed by a new one within the following 1 to 25hour interval Both strategies assume that if a 
user returns he is happy with the jokes This is clearly an approximation since a returning user might be overall satisfied with the experience but not with all the jokes The same is true for the implied negatives the user might have been satisfied with some or all of the jokes Therefore these labels are noisy and only provide weak supervision to the models Table TABREF2 shows an example of the labels values for a set of joke requests from one user All models have access to the same raw features which we conceptually separate into user item and contextual features Examples of features in each of these categories are shown in Table TABREF4 Some of these are used directly by the models while others need to be preprocessed The manner in which each model consumes them is explained next To favor simplicity over accuracy a logistic regression LR model is first proposed Significant effort was put into finding expressive features Categorical features are onehot encoded and numerical ones are normalized The raw Joke Text and Timestamp features require special treatment The Joke Text is tokenized and the stopwords are removed We can then compute computational humor features on the clean text such as sense combination BIBREF3 and ambiguity BIBREF12 In addition since many jokes in our corpus are related to specific events Christmas etc we check for keywords that relate the jokes to them For example if Santa is included we infer it is a Christmas joke Finally precomputed word embeddings with subword information are used to represent jokes by taking the average and maximum vectors over the token representations Subword information is important when encoding jokes since many can contain outofvocabulary tokens The jokes vector representations are also used to compute a summarized view of the users past liked and disliked jokes We consider that a user liked a joke when the assigned label is 1 an approximation given the noisy nature of the labels The users likeddisliked joke vectors are also combined with the candidate joke vector by taking the cosine similarity between them For the raw Timestamp feature we first extract simple timedate features such as month day and isWeekend We then compute binary features that mark if the timestamp occurred near one of the special events mentioned before Some of these events occur the same day every year while others change for example the Super Bowl In addition many events are country dependent The timestamps event features are combined with the jokes event features to allow the model to capture if an eventrelated joke occurs at the right time of the year The LR classifier is trained on the processed features and one of the labels The models posterior probability is used to sort the candidates which are chosen randomly from a pool of unheard jokes Although useful see Validation section this model has several shortcomings In particular many of the used features require significant feature engineering andor are countrylanguage dependent limiting the extensibility of the model To overcome the LRmodels limitations we propose the following model see Figure FIGREF7 In the input layer features are separated into context item and user features Unlike the LRmodel time and text features do not require extensive feature engineering Instead simple features day month and year are extracted from the timestamp After tokenization and stopword removal text features are passed through a pretrained word embeding layer and later input into the joke encoder block The basis of the joke 
encoder is a modified transformer Firstly only the encoder is needed Moreover since studies suggest that humor is subjective and conditioned on the users context BIBREF13 we add an additional sublayer in the transformer encoder that performs attention over the users features This sublayer inserted between the two typical transformer sublayers at certain depths of the network allows the encoder to adapt the representations of the jokes to different user contexts Thus the same joke can be encoded differently depending on the users features In practice this additional sublayer works like the normal selfattention sublayer except it creates its query matrix Q from the sublayer below and its K and V matrices from the user features As an alternative we also test encoding the jokes using a pretrained BERT model Regardless of the used encoder we average the token representations to obtain a global encoding of the jokes The same encoder is used to represent the items the joke to rank and the users liked and disliked jokes textual features through weight sharing and the cosine similarity between both representations are computed The processed features are then concatenated and passed through a final block of fully connected layers that contains the output layers Since experiments determined see Validation section that both labeling strategies can improve the desired business metrics instead of optimizing for only one of them we take a multitask learning approach Thus we have two softmax outputs Finally we use a loss function that considers label uncertainty class imbalance and the different labeling functions We start from the traditional crossentropy loss for one labelling function We then apply uniform label smoothing BIBREF14 which converts the onehotencoded label vectors into smoothed label vectors towards 05 with epsilon a hyperparameter Label smoothing provides a way of considering the uncertainty on the labels by encouraging the model to be less confident We have also experimented with other alternatives including specialized losses such as BIBREF15 However they did not produce a significant increase in performance in our tests To further model the possible uncertainty in the feedback we apply sample weights calculated using an exponential decay function on the time difference between the current and the following training instance of the same customer where wi is the weight of sample i ti is the time difference between instances i and i1 for the same user and ab are hyperparameters such that a0 and 0b1 The rationale behind these weights is the following If for example we consider labeling function 1 and a user asks for consecutive jokes first within 10 seconds and later within 49 minutes both instances are labeled as positive However we hypothesize that there is a lower chance that in the second case the user requested an additional joke because he liked the first one In addition class weights are applied to each sample to account for the natural class imbalance of the dataset Finally the total loss to be optimized is the weighted sum of the losses for each of the considered labeling functions where wl are manually set weights for each label and mathcal Ll are the losses corresponding to each label which include all the weights mentioned before A twostep validation was conducted for Englishspeaking customers An initial AB testing for the LR model in a production setting was performed to compare the labelling strategies A second offline comparison of the models was conducted on historical data 
and a selected labelling strategy One month of data and a subset of the customers was used approx eighty thousand The sampled dataset presents a fraction of positive labels of approximately 05 for reuse and 02 for oneday return Importantly since this evaluation is done on a subset of users the dataset characteristics do not necessarily represent real production traffic The joke corpus in this dataset contains thousands of unique jokes of different categories scifi sports etc and types puns limerick etc The dataset was split timewise into trainingvalidationtest sets and hyperparameters were optimized to maximize the AUCROC on the validation set As a benchmark we also consider two additional methods a nonpersonalized popularity model and one that follows BIBREF16 replacing the transformer joke encoder with a CNN network the specialized loss and other characteristics of the DL model are kept Hyperparameters were optimized using gridsearch for the LRModel Due to computational constraints random search was instead used for the DLModel In both cases hyperparameters are selected to optimize the AUCROC on the validation set Table TABREF11 lists some of the considered hyperparameter values and ranges for both models The actual optimal values are sample specific Two treatment groups are considered one per label Users in the control group are presented jokes at random without repetition Several usersatisfaction metrics such as user interruption rate reuse of this and other VVA skills and number of active dialogs are monitored during the tests The relative improvementdecline of these metrics is compared between the treatments and control and between the treatments themselves The statistical significance is measured when determining differences between the groups Results show that the LRbased model consistently outperforms the heuristic method for both labeling strategies significantly improving retention dialogs and interruptions These results suggest that models trained using either label can improve the VVAs joke experience Oneday return was selected for the offline evaluation because models trained on it have a better AUCROC and both labeling strategies were successful in the online validation All results are expressed as relative change with respect to the popularity model We start by evaluating the models using AUCROC As seen in Table TABREF14 the transformerbased models and in particular our custom architecture outperform all other approaches Similar conclusions can be reached regarding overall accuracy However given the class imbalance accuracy is not necessarily the best metric to consider In addition to better understand the effect to the original transformer architecture we present the performance of the model with and without the modified loss and special attention sublayer see Table TABREF14 Results suggest both modifications have a positive impact on the performance Finally to further evaluate the ranking capabilities of the proposed methods we use top1 accuracy Additional positions in the ranking are not considered because only the top ranked joke is presented to the customer Results show that the DL based models outperform the other systems with a relative change in top1 accuracy of 14 for DLBERT and 043 for DLT compared with 014 for the LR method Results show that the proposed methods provide different compromises between accuracy scalability and robustness On one hand the relatively good performance of the LR model with engineered features provides a strong baseline both in terms of 
accuracy and traininginference performance at the cost of being difficult to extend to new countries and languages On the other hand DL based methods give a significant accuracy gain and require no feature engineering which facilitates the expansion of the joke experience to new markets and languages This comes at a cost of added complexity if deployed in production In addition given the size of the BERT model 340M parameters realtime inference using DLBERT becomes problematic due to latency constraints In this regard the DLT model could be a good compromise since its complexity can be adapted and it provides good overall accuracy This paper describes systems to personalize a VVAs joke experience using NLP and deeplearning techniques that provide different compromises between accuracy scalability and robustness Implicit feedback signals are used to generate weak labels and provide supervision to the ranking models Results on production data show that models trained on any of the considered labels present a positive realworld impact on user satisfaction and that the deep learning approaches can potentially improve the joke skill with respect to the other considered methods In the future we would like to compare all methods in AB testing and to extend the models to other languages </s>
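As a rough illustration of the implicit-feedback labelling strategies and the time-decayed sample weights described in the preceding section, here is a small Python sketch. The exact shape of the exponential decay, the constants used, and the treatment of a user's final request (for which no follow-up is observed) are assumptions on our part; the paper only specifies the five-minute and 1-to-25-hour windows and hyperparameters a > 0 and 0 < b <= 1.

```python
from datetime import datetime, timedelta

def label_joke_requests(request_times):
    """Given one user's joke-request timestamps in chronological order, return
    (five_minute_reuse, one_day_return) labels for each request. The last
    request has no observed follow-up and is labelled 0 here by assumption."""
    reuse, day_return = [], []
    for i, t in enumerate(request_times):
        nxt = request_times[i + 1] if i + 1 < len(request_times) else None
        gap = (nxt - t) if nxt is not None else None
        reuse.append(int(gap is not None and gap <= timedelta(minutes=5)))
        day_return.append(int(gap is not None and
                              timedelta(hours=1) <= gap <= timedelta(hours=25)))
    return reuse, day_return

def sample_weight(gap_minutes, a=1.0, b=0.97):
    """One plausible exponential-decay weight on the time gap to the next
    request of the same user: w_i = a * b ** t_i (t_i in minutes here)."""
    return a * (b ** gap_minutes)

times = [datetime(2019, 6, 1, 10, 0), datetime(2019, 6, 1, 10, 3),
         datetime(2019, 6, 1, 18, 0)]
print(label_joke_requests(times))   # ([1, 0, 0], [0, 1, 0])
print(round(sample_weight(3), 3))   # the quick 3-minute reuse keeps a high weight
```

The decay function realises the intuition given earlier: a follow-up request after 10 seconds carries more weight as a positive signal than one after 49 minutes.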
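The extra transformer sub-layer that attends over user features can likewise be sketched in a few lines of PyTorch. Dimensions, head counts, and the use of nn.MultiheadAttention are our own illustrative choices rather than the authors' implementation; the only property taken from the paper is that the queries come from the sub-layer below while the keys and values come from the user features, inside a residual encoder sub-layer.

```python
import torch
import torch.nn as nn

class UserContextAttention(nn.Module):
    """Encoder sub-layer that adapts joke token representations to the user:
    Q is built from the sub-layer below, K and V from the user features."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, joke_states, user_features):
        # joke_states:   (batch, seq_len, d_model) -- output of self-attention
        # user_features: (batch, n_user_feats, d_model) -- embedded user features
        attended, _ = self.attn(query=joke_states,
                                key=user_features, value=user_features)
        return self.norm(joke_states + attended)  # residual + layer norm

layer = UserContextAttention()
jokes = torch.randn(2, 20, 256)   # two jokes of 20 sub-word tokens each
users = torch.randn(2, 6, 256)    # six embedded user features per user
print(layer(jokes, users).shape)  # torch.Size([2, 20, 256])
```

Because the keys and values are user features, the same joke text yields different encodings for different users, which is the behaviour the model relies on for personalization.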
<s>[INST] A Measure of Similarity in Textual Data Using Spearman's Rank Correlation Coefficient [/INST] Over the past few years the term big data has become an important key point for research into data mining and information retrieval Through the years the quantity of data managed across enterprises has evolved from a simple and imperceptible task to an extent to which it has become the central performance improvement problem In other words it evolved to be the next frontier for innovation competition and productivity BIBREF0 Extracting knowledge from data is now a very competitive environment Many companies process vast amounts of customeruser data in order to improve the quality of experience QoE of their customers For instance a typical usecase scenario would be a book seller that performs an automatic extraction of the content of the books a customer has bought and subsequently extracts knowledge of what customers prefer to read The knowledge extracted could then be used to recommend other books Book recommending systems are typical examples where data mining techniques should be considered as the primary tool for making future decisions BIBREF1 KE from TDs is an essential field of research in data mining and it certainly requires techniques that are reliable and accurate in order to neutralize or even eliminate uncertainty in future decisions Grouping TDs based on their content and mutual key information is referred to as clustering Clustering is mostly performed with respect to a measure of similarity between TDs which must be represented as vectors in a vector space beforehand BIBREF2 News aggregation engines can be considered as a typical representative where such techniques are extensively applied as a subfield of natural language processing NLP In this paper we present a new technique for measuring similarity between TDs represented in a vector space based on SRCC a statistical measure of association between two things BIBREF3 which in this case things refer to TDs The mathematical properties of SRCC such as the ability to detect nonlinear correlation make it compelling to be researched into Our motivation is to provide a new technique of improving the quality of KE based on the wellknown association measure SRCC as opposed to other wellknown TD similarity measures The paper is organized as follows Section SECREF2 gives a brief overview of the vector space representation of a TD and the corresponding similarity measures in Section SECREF3 we address conducted research of the role of SRCC in data mining and trend prediction Section SECREF4 is a detailed description of the proposed technique and later in Section SECREF5 we present clustering and classification experiments conducted on several sets of TDs while Section SECREF6 summarizes our research and contribution to the broad area of statistical text analysis In this section we provide a brief background of vector space representation of TDs and existing similarity measures that have been widely used in statistical text analysis To begin with we consider the representation of documents A document d can be defined as a finite sequence of terms independent textual entities within a document for example words namely dt1t2dots tn A general idea is to associate weight to each term ti within d such that which has proven superior in prior extensive research BIBREF4 The most common weight measure is Term Frequency Inverse Document Frequency TFIDF TF is the frequency of a term within a single document and IDF represents the importance 
or uniqueness of a term within a set of documents Dlbrace d1 d2 dots dmrbrace TFIDF is defined as follows where such that f is the number of occurrences of t in d and log is used to avoid very small values close to zero Having these measures defined it becomes obvious that each wi for i1dots n is assigned the TFIDF value of the corresponding term It turns out that each document is represented as a vector of TFIDF weights within a vector space model VSM with its properties BIBREF5 Different ways of computing the similarity of two vector exist There are two main approaches in similarity computation Deterministic similarity measures exploiting algebraic properties of vectors and their geometrical interpretation These include for instance cosine similarity CS Jaccard coefficients for binary representations etc Stochastic similarity measures in which uncertainty is taken into account These include for instance statistics such as Pearsons Correlation Coefficient PCC BIBREF6 Let mathbf u and mathbf v be the vector representations of two documents d1 and d2 Cosine similarity simply measures costheta where theta is the angle between mathbf u and mathbf v cosine similarity PCC where All of the above measures are widely used and have proven efficient but an important aspect is the lack of importance of the order of terms in textual data It is easy for one to conclude that two documents containing a single sentence each but in a reverse order of terms most deterministic methods fail to express that these are actually very similar On the other hand PCC detects only linear correlation which constraints the diversity present in textual data In the following section we study relevant research in solving this problem and then in Sections SECREF4 and SECREF5 we present our solution and results A significant number of similarity measures have been proposed and this topic has been thoroughly elaborated Its main application is considered to be clustering and classification of textual data organized in TDs In this section we provide an overview of relevant research on this topic to which we can later compare our proposed technique for computing vector similarity KE also referred to as knowledge discovery techniques are used to extract information from unstructured data which can be subsequently used for applying supervised or unsupervised learning techniques such as clustering and classification of the content BIBREF7 Text clustering should address several challenges such as vast amounts of data very high dimensionality of more than 10000 terms dimensions and most importantly an understandable description of the clusters BIBREF8 which essentially implies the demand for high quality of extracted information Regarding high quality KE and information accuracy much effort has been put into improving similarity measurements An improvement based on linear algebra known as Singular Value Decomposition SVD is oriented towards word similarity but instead its main application is document similarity BIBREF9 Alluring is the fact that this measure takes the advantage of synonym recognition and has been used to achieve humanlevel scores on multiplechoice synonym questions from the Test of English as a Foreign Language TOEFL in a technique known as Latent Semantic Analysis LSA BIBREF10 BIBREF5 Other semantic term similarity measures have been also proposed based on information exclusively derived from large corpora of words such as Pointwise Mutual Information PMI which has been reported to have achieved a large degree of 
correctness in the synonym questions in the TOEFL and SAT tests BIBREF11 Moreover normalized knowledgebased measures such as Leacock Chodrow BIBREF12 Lesk how to tell a pine cone from an icecream cone BIBREF13 or measures for the depth of two concepts preferably vebs in the WordNet taxonomy BIBREF14 have experimentally proven to be efficient Their accuracy converges to approximately 69 Leacock Chodrow and Lesk have showed the highest precision and having them combined turns out to be the approximately optimal solution BIBREF11 The main idea behind our proposed technique is to introduce uncertainty in the calculations of the similarity between TDs represented in a vector space model based on the nonlinear properties of SRCC Unlike PCC which is only able to detect linear correlation SRCCs nonlinear ability provides a convenient way of taking different ordering of terms into account The Spreamans Rank Correlation Coefficient BIBREF3 denoted rho has a from which is very similar to PCC Namely for n raw scores Ui Vi for i1dots n denoting TFIDF values for two document vectors mathbf U mathbf V where ui and vi are the corresponding ranks of Ui and Vi for i0dots n1 A metric to assign the ranks of each of the TFIDF values has to be determined beforehand Each Ui is assigned a rank value ui such that ui01dots n1 It is important to note that the metric by which the TFIDF values are ranked is essentially their sorting criteria A convenient way of determining this criteria when dealing with TFIDF values which emphasize the importance of a term within a TD set is to sort these values in an ascending order Thus the largest or most important TFIDF value within a TD vector is assigned the rank value of n1 and the least important is assigned a value of 0 Consider two TDs d1 and d2 each containing a single sentence Document 1 John had asked Mary to marry him before she left Document 2 Before she left Mary was asked by John to be his wife Now consider these sentences lemmatized Document 1 John have ask Mary marry before leave Document 2 Before leave Mary ask John his wife Let us now represent d1 and d2 as TFIDF vectors for the vocabulary in our small corpus The results in Table TABREF7 show that SRCC performs much better in knowledge extraction The two documents contents contain the same idea expressed by terms in a different order that John had asked Mary to marry him before she left It is obvious that cosine similarity cannot recognize this association but SRCC has successfully recognized it and produced a similarity value of 0285714 SRCC is essentially conducive to semantic similarity Rising the importance of a term in a TD will eventually rise its importance in another TD But if the two TDs are of different size the terms importance values will also differ by which a nonlinear association will emerge This association will not be recognized by PCC at all as it only detects linear association but SRCC will definitely catch this detail and produce the desirable similarity value The idea is to use SRCC to catch such terms which drive the semantic context of a TD which will follow a nonlinear and lie on a polynomial curve and not on the line xy In our approach we use a nonstandard measure of similarity in textual data with simple and common frequency values such as TFIDF in contrast to the statement that simple frequencies are not enough for highquality knowledge extraction BIBREF5 In the next section we will present our experiments and discuss the results we have obtained In order to test our proposed approach 
we have conducted a series of experiments In this section we briefly discuss the outcome and provide a clear view of whether our approach is suitable for knowledge extraction from textual data in a semantic context We have used a dataset of 14 TDs to conduct our experiments There are several subjects on which their content is based aliens stories law news BIBREF15 In this part we have compared the similarity values produced by each of the similarity measures CS SRCC and PCC We have picked a few notable results and they are summarized in Table TABREF9 below In Table TABREF9 that SRCC mostly differs from CS and PCC which also differ in some casesFor instance d1 refers to leadership in the nineties while d5 refers to the family and medical lead act of 1993 We have empirically observed that the general topics discussed in these two textual documents are very different Namely discusses different frameworks for leadership empowerment while d5 discusses medical treatment and selfcare of employees We have observed that the term employee is the only connection between d1 and d5 The similarity value of CS of 036 is very unreal in this case while PCC 005 and especially SRCC 00018 provide a much more realistic view of the semantic knowledge aggregated in these documents Another example are d8 and d9 The contents of these documents are very straightforward and very similar because they discuss aliens seen by Boeing747 pilots and d9 discusses angels that were considered to be aliens It is obvious that SRCC is able to detect this association as good as CS and PCC which are very good in such straightforward cases We have observed that SRCC does not perform worse than any other of these similarity measures It does not always produce the most suitable similarity value but it indeed does perform at least equally good as other measures The values in Table TABREF9 are very small and suggest that SRCC performs well in extracting tiny associations in such cases It is mostly a few times larger than CS and PCC when there actually exist associations between the documents These results are visually summarized in Figure FIGREF10 The two abovedescribed examples can be clearly seen as standing out In this part we will briefly present the nonlinear association between some of the TDs we have used in our experiments Our purpose is to point out that d6d10 and d7d12 are the pairs where SRCC is the most appropriate measure for the observed content and as such it is able to detect the nonlinear association between them This can be seen in Figure FIGREF12 below The straightforward case of d8 and d9 also stands out here SRCC can also detect it very well The obtained results showed that our technique shows good performance on similarity computing although it is not a perfect measure But it sure comes close to convenient and widely used similarity measures such as CS and PCC The next section provides a conclusion of our research and suggestions for further work In this paper we have presented a nonstandard technique for computing the similarity between TFIDF vectors We have propagated our idea and contributed a portion of new knowledge in this field of text analysis We have proposed a technique that is widely used in similar fields and our goal is to provide starting information to other researches in this area We consider our observations promising and they should be extensively researched Our experiments have proved that our technique should be a subject for further research Our future work will concentrate on the 
implementation of machine learning techniques such as clustering and subsequent classification of textual data We expect information of good quality to be extracted To summarize the rapidly emerging area of big data and information retrieval is where our technique should reside and where it should be applied </s>
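To illustrate the comparison of CS, PCC and SRCC on TF-IDF vectors from the preceding section, a short Python sketch using scikit-learn and SciPy is given below. The exact similarity values depend on the TF-IDF weighting variant and on how rank ties are broken (scipy's spearmanr assigns average ranks), so the numbers it prints are not expected to reproduce the values reported in the tables above.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer

# The two lemmatized single-sentence documents from the worked example.
docs = ["John have ask Mary marry before leave",
        "Before leave Mary ask John his wife"]

tfidf = TfidfVectorizer().fit_transform(docs).toarray()
u, v = tfidf[0], tfidf[1]

cosine = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
pcc, _ = pearsonr(u, v)
# spearmanr ranks the TF-IDF values of each vector and applies the PCC
# formula to the ranks, which is what makes it sensitive to the relative
# importance ordering of terms rather than to the raw weights themselves.
srcc, _ = spearmanr(u, v)

print(f"CS={cosine:.4f}  PCC={pcc:.4f}  SRCC={srcc:.4f}")
```

The sketch is only meant to show where the three measures plug in; any production use would need the same lemmatization and weighting choices as the experiments above.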
<s>[INST] CamemBERT: a Tasty French Language Model [/INST] Pretrained word representations have a long history in Natural Language Processing NLP from nonneural methods BIBREF0 BIBREF1 BIBREF2 to neural word embeddings BIBREF3 BIBREF4 and to contextualised representations BIBREF5 BIBREF6 Approaches shifted more recently from using these representations as an input to taskspecific architectures to replacing these architectures with large pretrained language models These models are then finetuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7 BIBREF8 BIBREF9 BIBREF10 These transfer learning methods exhibit clear advantages over more traditional taskspecific approaches probably the most important being that they can be trained in an unsupervised manner They nevertheless come with implementation challenges namely the amount of data and computational resources needed for pretraining that can reach hundreds of gigabytes of uncompressed text and require hundreds of GPUs BIBREF11 BIBREF9 The latest transformer architecture has gone uses as much as 750GB of plain text and 1024 TPU v3 for pretraining BIBREF10 This has limited the availability of these stateoftheart models to the English language at least in the monolingual setting Even though multilingual models give remarkable results they are often larger and their results still lag behind their monolingual counterparts BIBREF12 This is particularly inconvenient as it hinders their practical use in NLP systems as well as the investigation of their language modeling capacity something that remains to be investigated in the case of for instance morphologically rich languages We take advantage of the newly available multilingual corpus OSCAR BIBREF13 and train a monolingual language model for French using the RoBERTa architecture We pretrain the model which we dub CamemBERT and evaluate it in four different downstream tasks for French partofspeech POS tagging dependency parsing named entity recognition NER and natural language inference NLI CamemBERT improves the state of the art for most tasks over previous monolingual and multilingual approaches which confirms the effectiveness of large pretrained language models for French We summarise our contributions as follows We train a monolingual BERT model on the French language using recent largescale corpora We evaluate our model on four downstream tasks POS tagging dependency parsing NER and natural language inference NLI achieving stateoftheart results in most tasks confirming the effectiveness of large BERTbased models for French We release our model in a userfriendly format for popular opensource libraries so that it can serve as a strong baseline for future research and be useful for French NLP practitioners The first neural word vector representations were noncontextualised word embeddings most notably word2vec BIBREF3 GloVe BIBREF4 and fastText BIBREF14 which were designed to be used as input to taskspecific neural architectures Contextualised word representations such as ELMo BIBREF5 and flair BIBREF6 improved the expressivity of word embeddings by taking context into account They improved the performance of downstream tasks when they replaced traditional word representations This paved the way towards larger contextualised models that replaced downstream architectures in most tasks These approaches trained with language modeling objectives range from LSTMbased architectures such as ULMFiT BIBREF15 to the successful transformerbased architectures 
such as GPT2 BIBREF8 BERT BIBREF7 RoBERTa BIBREF9 and more recently ALBERT BIBREF16 and T5 BIBREF10 Since the introduction of word2vec BIBREF3 many attempts have been made to create monolingual models for a wide range of languages For noncontextual word embeddings the first two attempts were by BIBREF17 and BIBREF18 who created word embeddings for a large number of languages using Wikipedia Later BIBREF19 trained fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increased the performance of the embeddings relatively to those trained only on Wikipedia Following the success of large pretrained language models they were extended to the multilingual setting with multilingual BERT a single multilingual model for 104 different languages trained on Wikipedia data and later XLM BIBREF12 which greatly improved unsupervised machine translation A few monolingual models have been released ELMo models for Japanese Portuguese German and Basque and BERT for Simplified and Traditional Chinese and German However to the best of our knowledge no particular effort has been made toward training models for languages other than English at a scale similar to the latest English models eg RoBERTa trained on more than 100GB of data Our approach is based on RoBERTa BIBREF9 which replicates and improves the initial BERT by identifying key hyperparameters for more robust performance In this section we describe the architecture training objective optimisation setup and pretraining data that was used for CamemBERT CamemBERT differs from RoBERTa mainly with the addition of wholeword masking and the usage of SentencePiece tokenisation BIBREF20 Similar to RoBERTa and BERT CamemBERT is a multilayer bidirectional Transformer BIBREF21 Given the widespread usage of Transformers we do not describe them in detail here and refer the reader to BIBREF21 CamemBERT uses the original BERT small textsc BASE configuration 12 layers 768 hidden dimensions 12 attention heads which amounts to 110M parameters We train our model on the Masked Language Modeling MLM task Given an input text sequence composed of N tokens x1 xN we select 15 of tokens for possible replacement Among those selected tokens 80 are replaced with the special mask token 10 are left unchanged and 10 are replaced by a random token The model is then trained to predict the initial masked tokens using crossentropy loss Following RoBERTa we dynamically mask tokens instead of fixing them statically for the whole dataset during preprocessing This improves variability and makes the model more robust when training for multiple epochs Since we segment the input sentence into subwords using SentencePiece the input tokens to the models can be subwords An upgraded version of BERT and BIBREF22 have shown that masking whole words instead of individual subwords leads to improved performance Wholeword masking WWM makes the training task more difficult because the model has to predict a whole word instead of predicting only part of the word given the rest As a result we used WWM for CamemBERT by first randomly sampling 15 of the words in the sequence and then considering all subword tokens in each of these 15 words for candidate replacement This amounts to a proportion of selected tokens that is close to the original 15 These tokens are then either replaced by mask tokens 80 left unchanged 10 or replaced by a random token Subsequent work has shown that the next sentence prediction task NSP originally used in BERT does not improve 
downstream task performance BIBREF12 BIBREF9 we do not use NSP as a consequence Following BIBREF9 we optimise the model using Adam BIBREF23 beta 1 09 beta 2 098 for 100k steps We use large batch sizes of 8192 sequences Each sequence contains at most 512 tokens We enforce each sequence to only contain complete sentences Additionally we used the DOCSENTENCES scenario from BIBREF9 consisting of not mixing multiple documents in the same sequence which showed slightly better results We segment the input text into subword units using SentencePiece BIBREF20 SentencePiece is an extension of BytePair encoding BPE BIBREF24 and WordPiece BIBREF25 that does not require pretokenisation at the word or token level thus removing the need for languagespecific tokenisers We use a vocabulary size of 32k subword tokens These are learned on 107 sentences sampled from the pretraining dataset We do not use subword regularisation ie sampling from multiple possible segmentations in our implementation for simplicity Pretrained language models can be significantly improved by using more data BIBREF9 BIBREF10 Therefore we used French text extracted from Common Crawl in particular we use OSCAR BIBREF13 a preclassified and prefiltered version of the November 2018 Common Craw snapshot OSCAR is a set of monolingual corpora extracted from Common Crawl specifically from the plain text WET format distributed by Common Crawl which removes all HTML tags and converts all text encodings to UTF8 OSCAR follows the same approach as BIBREF19 by using a language classification model based on the fastText linear classifier BIBREF26 BIBREF27 pretrained on Wikipedia Tatoeba and SETimes which supports 176 different languages OSCAR performs a deduplication step after language classification and without introducing a specialised filtering scheme other than only keeping paragraphs containing 100 or more UTF8 encoded characters making OSCAR quite close to the original Crawled data We use the unshuffled version of the French OSCAR corpus which amounts to 138GB of uncompressed text and 327B SentencePiece tokens We fist evaluate CamemBERT on the two downstream tasks of partofspeech POS tagging and dependency parsing POS tagging is a lowlevel syntactic task which consists in assigning to each word its corresponding grammatical category Dependency parsing consists in predicting the labeled syntactic tree capturing the syntactic relations between words We run our experiments using the Universal Dependencies UD paradigm and its corresponding UD POS tag set BIBREF28 and UD treebank collection version 22 BIBREF29 which was used for the CoNLL 2018 shared task We perform our work on the four freely available French UD treebanks in UD v22 GSD Sequoia Spoken and ParTUT GSD BIBREF30 is the secondlargest treebank available for French after the FTB described in subsection SECREF25 it contains data from blogs news articles reviews and Wikipedia The Sequoia treebank BIBREF31 BIBREF32 comprises more than 3000 sentences from the French Europarl the regional newspaper LEst Rpublicain the French Wikipedia and documents from the European Medicines Agency Spoken is a corpus converted automatically from the Rhapsodie treebank BIBREF33 BIBREF34 with manual corrections It consists of 57 sound samples of spoken French with orthographic transcription and phonetic transcription aligned with sound word boundaries syllables and phonemes syntactic and prosodic annotations Finally ParTUT is a conversion of a multilingual parallel treebank developed at the University of 
Turin and consisting of a variety of text genres including talks legal texts and Wikipedia articles among others ParTUT data is derived from the alreadyexisting parallel treebank ParallelTUT BIBREF35 Table TABREF23 contains a summary comparing the sizes of the treebanks We evaluate the performance of our models using the standard UPOS accuracy for POS tagging and Unlabeled Attachment Score UAS and Labeled Attachment Score LAS for dependency parsing We assume gold tokenisation and gold word segmentation as provided in the UD treebanks To demonstrate the value of building a dedicated version of BERT for French we first compare CamemBERT to the multilingual cased version of BERT designated as mBERT We then compare our models to UDify BIBREF36 UDify is a multitask and multilingual model based on mBERT that is near stateoftheart on all UD languages including French for both POS tagging and dependency parsing It is relevant to compare CamemBERT to UDify on those tasks because UDify is the work that pushed the furthest the performance in finetuning endtoend a BERTbased model on downstream POS tagging and dependency parsing Finally we compare our model to UDPipe Future BIBREF37 a model ranked 3rd in dependency parsing and 6th in POS tagging during the CoNLL 2018 shared task BIBREF38 UDPipe Future provides us a strong baseline that does not make use of any pretrained contextual embedding We will compare to the more recent crosslingual language model XLM BIBREF12 as well as the stateoftheart CoNLL 2018 shared task results with predicted tokenisation and segmentation in an updated version of the paper Named Entity Recognition NER is a sequence labeling task that consists in predicting which words refer to realworld objects such as people locations artifacts and organisations We use the French Treebank FTB BIBREF39 in its 2008 version introduced by ccclustering09short and with NER annotations by sagot2012annotation The NERannotated FTB contains more than 12k sentences and more than 350k tokens extracted from articles of the newspaper Le Monde published between 1989 and 1995 In total it contains 11636 entity mentions distributed among 7 different types of entities namely 2025 mentions of Person 3761 of Location 2382 of Organisation 3357 of Company 67 of Product 15 of POI Point of Interest and 29 of Fictional Character A large proportion of the entity mentions in the treebank are multiword entities For NER we therefore report the 3 metrics that are commonly used to evaluate models precision recall and F1 score Here precision measures the percentage of entities found by the system that are correctly tagged recall measures the percentage of named entities present in the corpus that are found and the F1 score combines both precision and recall measures giving a general idea of a models performance Most of the advances in NER haven been achieved on English particularly focusing on the CoNLL 2003 BIBREF40 and the Ontonotes v5 BIBREF41 BIBREF42 English corpora NER is a task that was traditionally tackled using Conditional Random Fields CRF BIBREF43 which are quite suited for NER CRFs were later used as decoding layers for BiLSTM architectures BIBREF44 BIBREF45 showing considerable improvements over CRFs alone These BiLSTMCRF architectures were later enhanced with contextualised word embeddings which yet again brought major improvements to the task BIBREF5 BIBREF6 Finally large pretrained architectures settled the current state of the art showing a small yet important improvement over previous NERspecific 
architectures BIBREF7 BIBREF46 In nonEnglish NER the CoNLL 2002 shared task included NER corpora for Spanish and Dutch corpora BIBREF47 while the CoNLL 2003 included a German corpus BIBREF40 Here the recent efforts of BIBREF48 settled the state of the art for Spanish and Dutch while BIBREF6 did it for German In French no extensive work has been done due to the limited availability of NER corpora We compare our model with the strong baselines settled by BIBREF49 who trained both CRF and BiLSTMCRF architectures on the FTB and enhanced them using heuristics and pretrained word embeddings We also evaluate our model on the Natural Language Inference NLI task using the French part of the XNLI dataset BIBREF50 NLI consists in predicting whether a hypothesis sentence is entailed neutral or contradicts a premise sentence The XNLI dataset is the extension of the MultiGenre NLI MultiNLI corpus BIBREF51 to 15 languages by translating the validation and test sets manually into each of those languages The English training set is also machine translated for all languages The dataset is composed of 122k train 2490 valid and 5010 test examples As usual NLI performance is evaluated using accuracy To evaluate a model on a language other than English such as French we consider the two following settings TRANSLATETEST The French test set is machine translated into English and then used with an English classification model This setting provides a reasonable although imperfect way to circumvent the fact that no such data set exists for French and results in very strong baseline scores TRANSLATETRAIN The French model is finetuned on the machinetranslated English training set and then evaluated on the French test set This is the setting that we used for CamemBERT For the TRANSLATETEST setting we report results of the English RoBERTa to act as a reference In the TRANSLATETRAIN setting we report the best scores from previous literature along with ours BiLSTMmax is the best model in the original XNLI paper mBERT which has been reported in French in BIBREF52 and XLM MLMTLM is the bestpresented model from BIBREF50 In this section we measure the performance of CamemBERT by evaluating it on the four aforementioned tasks POS tagging dependency parsing NER and NLI We use the RoBERTa implementation in the fairseq library BIBREF53 Our learning rate is warmed up for 10k steps up to a peak value of 00007 instead of the original 00001 given our large batch size 8192 The learning rate fades to zero with polynomial decay We pretrain our model on 256 Nvidia V100 GPUs 32GB each for 100k steps during 17h For each task we append the relevant predictive layer on top of CamemBERTs Transformer architecture Following the work done on BERT BIBREF7 for sequence tagging and sequence labeling we append a linear layer respectively to the s special token and to the first subword token of each word For dependency parsing we plug a biaffine graph predictor head as inspired by BIBREF54 following the work done on multilingual parsing with BERT by BIBREF36 We refer the reader to these two articles for more details on this module We finetune independently CamemBERT for each task and each dataset We optimise the model using the Adam optimiser BIBREF23 with a fixed learning rate We run a grid search on a combination of learning rates and batch sizes We select the best model on the validation set out of the 30 first epochs Although this might push the performances even further for all tasks except NLI we dont apply any regularisation techniques such as 
weight decay learning rate warmup or discriminative finetuning We show that finetuning CamemBERT in a straightforward manner leads to stateoftheart results on most tasks and outperforms the existing BERTbased models in most cases The POS tagging dependency parsing and NER experiments are run using hugging faces Transformer library extended to support CamemBERT and dependency parsing BIBREF55 The NLI experiments use the fairseq library following the RoBERTa implementation For POS tagging and dependency parsing we compare CamemBERT to three other near stateoftheart models in Table TABREF32 CamemBERT outperforms UDPipe Future by a large margin for all treebanks and all metrics Despite a much simpler optimisation process CamemBERT beats UDify performances on all the available French treebanks CamemBERT also demonstrates higher performances than mBERT on those tasks We observe a larger error reduction for parsing than for tagging For POS tagging we observe error reductions of respectively 071 for GSD 081 for Sequoia 07 for Spoken and 028 for ParTUT For parsing we observe error reductions in LAS of 296 for GSD 333 for Sequoia 170 for Spoken and 165 for ParTUT On the XNLI benchmark CamemBERT obtains improved performance over multilingual language models on the TRANSLATETRAIN setting 812 vs 802 for XLM while using less than half the parameters 110M vs 250M However its performance still lags behind models trained on the original English training set in the TRANSLATETEST setting 812 vs 8291 for RoBERTa It should be noted that CamemBERT uses far fewer parameters than RoBERTa 110M vs 355M parameters For named entity recognition our experiments show that CamemBERT achieves a slightly better precision than the traditional CRFbased SEM architectures described above in Section SECREF25 CRF and BiLSTMCRF but shows a dramatic improvement in finding entity mentions raising the recall score by 35 points Both improvements result in a 236 point increase in the F1 score with respect to the best SEM architecture BiLSTMCRF giving CamemBERT the state of the art for NER on the FTB One other important finding is the results obtained by mBERT Previous work with this model showed increased performance in NER for German Dutch and Spanish when mBERT is used as contextualised word embedding for an NERspecific model BIBREF48 but our results suggest that the multilingual setting in which mBERT was trained is simply not enough to use it alone and finetune it for French NER as it shows worse performance than even simple CRF models suggesting that monolingual models could be better at NER CamemBERT displays improved performance compared to prior work for the 4 downstream tasks considered This confirms the hypothesis that pretrained language models can be effectively finetuned for various downstream tasks as observed for English in previous work Moreover our results also show that dedicated monolingual models still outperform multilingual ones We explain this point in two ways First the scale of data is possibly essential to the performance of CamemBERT Indeed we use 138GB of uncompressed text vs 57GB for mBERT Second with more data comes more diversity in the pretraining distribution Reaching stateoftheart performances on 4 different tasks and 6 different datasets requires robust pretrained models Our results suggest that the variability in the downstream tasks and datasets considered is handled more efficiently by a general language model than by Wikipediapretrained models such as mBERT CamemBERT improves the state of the art 
CamemBERT improves the state of the art for multiple downstream tasks in French It is also lighter than other BERTbased approaches such as mBERT or XLM By releasing our model we hope that it can serve as a strong baseline for future research in French NLP and we expect our experiments to be reproduced in many other languages We will publish an updated version in the near future where we will explore and release models trained for longer with additional downstream tasks baselines eg XLM and analysis we will also train additional models with potentially cleaner corpora such as CCNet BIBREF56 for more accurate performance evaluation and more complete ablation This work was partly funded by three French National grants from the Agence Nationale de la Recherche namely projects PARSITI ANR16CE330021 SoSweet ANR15CE380011 and BASNUM ANR18CE380003 as well as by the last author's chair in the PRAIRIE institute We analyze the effect of wholeword masking on the downstream performance of CamemBERT As reported for English on other downstream tasks whole word masking improves downstream performances for all tasks but NER as seen in Table TABREF46 NER is highly sensitive to capitalisation prefixes suffixes and other subword features that could be used by a model to correctly identify entity mentions Thus masking whole words rather than subwords appears to discard information that is useful for this task and has a detrimental effect on downstream NER results </s>
<s>[INST] Vocabulary-based Method for Quantifying Controversy in Social Media [/INST] Controversy is a phenomenom with a high impact at various levels It has been broadly studied from the perspective of different disciplines ranging from the seminal analysis of the conflicts within the members of a karate club BIBREF0 to political issues in modern times BIBREF1 BIBREF2 The irruption of digital social networks BIBREF3 gave raise to new ways of intentionally intervening on them for taking some advantage BIBREF4 BIBREF5 Moreover highly contrasting points of view in some groups tend to provoke conflicts that lead to attacks from one community to the other by harassing brigading or trolling it BIBREF6 The existing literature shows different issues that controversy brings up such as splitting of communities biased information hateful discussions and attacks between groups generally proposing ways to solve them For example Kumar Srijan et al BIBREF6 analyze many techniques to defend us from attacks in Reddit while Stewart et al BIBREF4 insinuate that there was external interference in Twitter during the 2016 US presidential elections to benefit one candidate Also as shown in BIBREF7 detecting controversy could provide the basis to improve the news diet of readers offering the possibility to connect users with different points of views by recommending them new content to read BIBREF8 Moreover other studies on bridging echo chambers BIBREF9 and the positive effects of intergroup dialogue BIBREF10 BIBREF11 suggest that direct engagement could be effective for mitigating such conflicts Therefore easily and automatically identifying controversial topics could allow us to quickly implement different strategies for preventing missinformation fights and bias Quantifying the controversy is even more powerful as it allows us to establish controversy levels and in particular to classify controversial and noncontroversial topics by establishing a threshold score that separates the two types of topics With this aim we propose in this work a systematic languageagnostic method to quantify controversy on social networks taking tweets content as root input Our main contribution is a new vocabularybased method that works in any language and equates the performance of stateoftheart structurebased methods Finally controversy quantification through vocabulary analysis opens several research avenues to analyze whether polarization is being created maintained or augmented by the ways of talking of each community Having this in mind and if we draw from the premise that when a discussion has a high controversy it is in general due to the presence of two principal communities fighting each other or conversely that when there is no controversy there is just one principal community the members of which share a common point of view we can measure the controversy by detecting if the discussion has one or two principal jargons in use Our method is tested on Twitter datasets This microblogging platform has been widely used to analyze discussions and polarization BIBREF12 BIBREF13 BIBREF14 BIBREF15 BIBREF2 It is a natural choice for these kind of problems as it represents one of the main fora for public debate in online social media BIBREF15 it is a common destination for affiliative expressions BIBREF16 and is often used to report and read news about current events BIBREF17 An extra advantage of Twitter for this kind of studies is the availability of realtime data generated by millions of users Other social media platforms 
offer similar datasharing services but few can match the amount of data and the accompanied documentation provided by Twitter One last asset of Twitter for our work is given by retweets whom typically indicate endorsement BIBREF18 and hence become a useful concept to model discussions as we can set who is with who However our method has a general approach and it could be used a priori in any social network In this work we report excellent result tested on Twitter but in future work we are going to test it in other social networks Our paper is organized as follows in Section SECREF2 we review related work Section SECREF3 contains the detailed explanation of the pipeline we use for quantifying controversy of a topic and each of its stages In Section SECREF4 we report the results of an extensive empirical evaluation of the proposed measure of controversy Finally Section SECREF5 is devoted to discuss possible improvements and directions for future work as well as lessons learned Many previous works are dedicated to quantifying the polarization observed in online social networks and social media BIBREF1 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 The main characteristic of those works is that the measures proposed are based on the structural characteristics of the underlying graph Among them we highlight the work of Garimella et alBIBREF23 that presents an extensive comparison of controversy measures different graphbuilding approaches and data sources achieving the best performance of all In their research they propose different metrics to measure polarization on Twitter Their techniques based on the structure of the endorsement graph can successfully detect whether a discussion represented by a set of tweets is controversial or not regardless of the context and most importantly without the need of any domain expertise They also consider two different methods to measure controversy based on the analysis of the posts contents but both fail when used to create a measure of controversy Matakos et al BIBREF24 develop a polarization index Their measure captures the tendency of opinions to concentrate in network communities creating echochambers They obtain a good performance at identifying controversy by taking into account both the network structure and the existing opinions of users However they model opinions as positive or negative with a real number between 1 and 1 Their performance is good but although it is an opinionbased method it is not a textrelated oneOther recent works BIBREF25 BIBREF26 BIBREF27 have shown that communities may express themselves with different terms or ways of speaking use different jargon which in turn can be detected with the use of textrelated techniques In his thesis BIBREF28 Jang explains controversy via generating a summary of two conflicting stances that make up the controversy This work shows that a specific subset of tweets could represent the two opposite positions in a polarized debate A good tool to see how communities interact is ForceAtlas2 BIBREF29 a forcedirected layout widely used for visualization This layout has been recently found to be very useful at visualizing community interactions BIBREF30 as this algorithm will draw groups with little communication between them in different areas whereas if they have many interactions they will be drawn closer to each other Therefore whenever there is controversy the layout will show two well separated groups and will tend to show only one big community otherwise The method we propose to measure the 
controversy equates in accuracy the one developed by Garimella et al BIBREF23 and considerably improves computing time and robustness with respect to the amount of data needed to effectively apply it Our method is also based on a graph approach but it has its main focus on the vocabulary We first train an NLP classifier that estimates the opinion polarity of the main users then we run labelpropagation BIBREF31 on the endorsement graph to get the polarity of the whole network Finally we compute the controversy score through a computation inspired by the dipole moment a measure used in physics to estimate the electric polarity of a system In our experiments we use the same datasets as other works BIBREF32 BIBREF23 BIBREF33 as well as other datasets that we collected ourselves using the criterion described in Section SECREF4 Our approach to measuring controversy is based on a systematic way of characterizing social media activity through its content We employ a pipeline with five stages namely graph building community identification model training predicting and controversy measure The final output of the pipeline is a value that measures how controversial a topic is with higher values corresponding to higher degrees of controversy The method is based on analysing post content through Fasttext BIBREF34 a library for efficient learning of word representations and sentence classification developed by the Facebook Research team In short our method works as follows through Fasttext we train a languageagnostic model which can predict the community of many users by their jargon Then we take their predictions and compute a score based on the physics notion of dipole moment using a language approach to identify core or characteristic users and setting the polarity through them We provide a detailed description of each stage in the following Graph Building This paragraph provides details about the approach used to build graphs from raw data As we said in Section SECREF1 we extract our discussions from Twitter Our purpose is to build a conversation graph that represents activity related to a single topic of discussion a debate about a specific event For each topic we build a graph G where we assign a vertex to each user who contributes to it and we add a directed edge from node u to node v whenever user u retweets a tweet posted by v Retweets typically indicate endorsement BIBREF18 users who retweet signal endorsement of the opinion expressed in the original tweet by propagating it further Retweets are not constrained to occur only between users who are connected in Twitter's social network users are allowed to retweet posts generated by any other user As in many other works in the literature BIBREF5 BIBREF35 BIBREF36 BIBREF37 BIBREF4 BIBREF2 we establish that a single retweet between a pair of users is enough to define an edge between them Community Identification To identify a community's jargon we need to be very accurate at defining its members If in our aim of finding two principal communities we forced the partition of the graph into exactly that number of communities we might add noise to the jargon of the principal communities that are fighting each other Because of that we decide to cluster the graph trying two popular algorithms Walktrap BIBREF38 and Louvain BIBREF39 Both are structurebased algorithms that have very good performance with respect to the Modularity Q measure These techniques do not detect a fixed number of clusters their output depends on the Modularity Q optimization resulting in less noisy communities
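A minimal sketch of these two stages follows, assuming the tweets of one topic have already been reduced to an iterable of (retweeting user, original author) pairs; it uses networkx and its Louvain implementation, whereas the paper also evaluates Walktrap (available, for instance, in python-igraph).

```python
# Sketch of the graph-building and community-identification stages.
import networkx as nx

def build_retweet_graph(retweets):
    G = nx.DiGraph()
    for u, v in retweets:      # u retweeted a tweet posted by v
        G.add_edge(u, v)       # a single retweet is enough to create the edge
    return G

def two_main_communities(G):
    # Cluster the undirected version of the graph and keep the two largest
    # communities, assumed to hold the two sides of the discussion.
    communities = nx.community.louvain_communities(G.to_undirected(), seed=0)
    c1, c2 = sorted(communities, key=len, reverse=True)[:2]
    return c1, c2
```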
The main differences between the two methods as regards our work are that Louvain is a much faster heuristic algorithm but produces clusters with worse Modularity Q Therefore in order to analyze the tradeoff between computing time and quality we decide to test both methods At this step we want to capture the tweets of the principal communities to create the model that could differentiate them Therefore we take the two communities identified by the clustering algorithm that have the maximum number of users and use them for the following step of our method Model Training After detecting the principal communities we create our training dataset to feed the model To do that we extract the tweets of each cluster sanitize them and subject them to some transformations First we remove duplicate tweets eg retweets without additional text Second we remove from the text of the tweets user names links punctuation tabs leading and trailing blanks extra spaces and RT the text that indicates that a tweet is in fact a retweet As shown in previous works emojis are correlated with sentiment BIBREF40 Moreover as we think that communities will express different sentiment during a discussion it is foreseeable that emojis will play an important role as separators of tweets that differentiate between the two sides Accordingly we decide to add them to the trainset by translating each emoji into a different word For example the emoji will be translated into happy and into sad Relations between emojis and words are defined in the R library textclean Finally we group tweets by user concatenating them into one string and labeling them with the user's community namely with tags C1 and C2 corresponding respectively to the biggest and second biggest groups It is important to note that we take the same number of users from each community to prevent bias in the model thus we use the number of users of the smallest principal community The trainset built that way is used to feed the model As we said we use Fasttext BIBREF34 to do this training To define the values of the hyperparameters we use the findings of BIBREF41 In their work they investigate the best hyperparameters to train word embedding models using Fasttext BIBREF34 and Twitter data We also change the default value of the hyperparameter epoch to 20 instead of 5 because we want more convergence preventing as much as possible the variance between different trainings These values could change in other contexts or social networks where we have more text per user or different discussion dynamics Predicting The next stage consists of identifying the characteristic users of each side of the discussion These are the users that best represent the jargon of each side To do that the tweets of the users belonging to the largest connected component of the graph are sanitized and transformed exactly as in the Training step We decide to restrict to the largest connected component because in all cases it contains more than 90 of the nodes The remaining 10 of the users don't participate in the discussion from a collective point of view but rather in an isolated way and this kind of intervention does not add interesting information to our approach Then we remove from this component users with degree smaller than or equal to 2 ie users that were retweeted by another user or retweeted another user fewer than three times in total Their participation in the discussion is marginal consequently they are not relevant to controversy as they add more noise than information at measuring time
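A rough sketch of the training and prediction steps described in this section is given below, assuming user_text maps every user to the concatenation of their sanitized tweets and c1/c2 are the two principal communities found earlier; it uses the fasttext Python package, keeps only the epoch change mentioned above and leaves the remaining hyperparameters at the library defaults rather than the tuned values from BIBREF41.

```python
# Sketch of the model-training and characteristic-user prediction stages.
import random
import fasttext

def write_trainset(user_text, c1, c2, path="train.txt"):
    # Balance the two classes by sampling as many users as the smaller community has.
    n = min(len(c1), len(c2))
    rows = [(u, "__label__C1") for u in random.sample(sorted(c1), n)] + \
           [(u, "__label__C2") for u in random.sample(sorted(c2), n)]
    with open(path, "w", encoding="utf8") as f:
        for user, label in rows:
            text = user_text[user].replace("\n", " ")  # one training example per line
            f.write(f"{label} {text}\n")
    return path

def train_jargon_model(user_text, c1, c2):
    train_path = write_trainset(user_text, c1, c2)
    # epoch raised to 20 as in the paper; other hyper-parameters left at defaults here
    return fasttext.train_supervised(input=train_path, epoch=20)

def characteristic_users(model, users, user_text, threshold=0.9):
    # Keep only users whose predicted community has probability >= threshold.
    scored = {}
    for u in users:
        labels, probs = model.predict(user_text[u].replace("\n", " "))
        if probs[0] >= threshold:
            scored[u] = (labels[0], float(probs[0]))
    return scored
```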
This step could be adjusted differently in a different social network We name the resulting component the rootgraph Finally we describe how classification is done Considering that Fasttext returns for each classification both the predicted tag and the probability of the prediction we classify each user of the resulting component by their sanitized tweets with our trained model and keep the users that were tagged with a probability greater than or equal to 0.9 These are the characteristic users that will be used in the next step to compute the controversy measure Controversy Measure This section describes the controversy measure used in this work The computation is inspired by the measure presented by Morales et al BIBREF2 and is based on the notion of dipole moment which has its origin in physics First we assign to the characteristic users the probability returned by the model negating it if the predicted tag was C2 Therefore these users are assigned values in the set $[-1, -0.9] \cup [0.9, 1]$ Then we set values for the rest of the users of the rootgraph by labelpropagation BIBREF31 an iterative algorithm that propagates values through a graph via node neighborhoods Let $n^+$ and $n^-$ be the number of vertices of $V$ with positive and negative values respectively and let $\Delta A = \frac{|n^+ - n^-|}{|V|}$ be the absolute difference of their normalized sizes Moreover let $gc^+$ and $gc^-$ be the average values over the vertices counted by $n^+$ and $n^-$ respectively and set $\tau$ as half their absolute difference $\tau = \frac{|gc^+ - gc^-|}{2}$ The dipole moment content controversy measure is defined as $\textit{DMC} = (1 - \Delta A)\,\tau$ The rationale for this measure is that if the two sides are well separated then label propagation will assign different extreme values to the two partitions where users from one community will have values near $1$ and users from the other near $-1$ leading to higher values of the DMC measure Note also that larger differences in the size of the two partitions reflected in the value of $\Delta A$ lead to smaller values for the measure which takes values between zero and one
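The computation can be summarized with the following sketch, where root_graph is the pruned largest connected component (a networkx graph) and seeds maps each characteristic user to its signed model probability (positive for C1, negative for C2); the propagation step is a simplified fixed-seed neighbourhood averaging standing in for the label-propagation algorithm of BIBREF31, so this is an illustration of the DMC formula rather than the exact implementation.

```python
# Sketch of value propagation and the DMC controversy score.
def propagate(root_graph, seeds, iterations=100):
    values = {u: seeds.get(u, 0.0) for u in root_graph}
    G = root_graph.to_undirected()
    for _ in range(iterations):
        new = {}
        for u in G:
            if u in seeds:                        # seed values stay fixed
                new[u] = seeds[u]
            else:
                nbrs = list(G.neighbors(u))
                new[u] = sum(values[v] for v in nbrs) / len(nbrs) if nbrs else 0.0
        values = new
    return values

def dmc_score(values):
    pos = [v for v in values.values() if v > 0]
    neg = [v for v in values.values() if v < 0]
    if not pos or not neg:
        return 0.0
    delta_a = abs(len(pos) - len(neg)) / len(values)            # normalized size difference
    tau = abs(sum(pos) / len(pos) - sum(neg) / len(neg)) / 2    # half the gap between the two means
    return (1 - delta_a) * tau
```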
In this section we report the results obtained by running the above proposed method over different discussions In the literature a topic is often defined by a single hashtag However this might be too restrictive in many cases In our approach a topic is operationalized as a specific set of hashtags or keywords Sometimes a discussion at a particular moment may not have a defined hashtag but it could be centred around a certain keyword ie a word or expression that is not specifically a hashtag but is widely used in the topic For example during the Brazilian presidential elections in 2018 we captured the discussion through the mentions of the word Bolsonaro which is the principal candidate's surname Thus for each topic we retrieve all the tweets that contain one of its hashtags or the keyword and that are generated during the observation window We also ensure that the selected topic is associated with a large enough volume of activity In this section we detail the discussions we use to test our metric and how we determine the ground truth ie whether the discussion is controversial or not We use thirty different discussions that took place between March 2015 and June 2019 half of them with controversy and half without it We considered discussions in four different languages English Portuguese Spanish and French occurring in five regions over the world South and North America Western Europe Central and Southern Asia We also studied these discussions taking first 140 and then 280 characters from each tweet to analyze the difference in performance and computing time with respect to the length of the posts To define the amount of data needed to run our method we established that the Fasttext model has to predict at least one user of each community with a probability greater than or equal to 0.9 during ten different trainings If that is not the case we are not able to use the DMC method This decision made us consider only a subset of the datasets used in BIBREF23 because due to the time elapsed since their work many tweets had been deleted and consequently the volume of the data was not enough for our framework To enlarge our experiment base we added new debates more detailed information about each one is shown in Table TABREF24 in UNKREF6 To select new discussions and to determine whether they are controversial or not we looked for topics widely covered by mainstream media that have generated ample discussion both online and offline For noncontroversy discussions we focused on soft news and entertainment but also on events that while being impactful andor dramatic did not generate large controversies To validate that intuition we manually checked a sample of tweets and were unable to identify any clear instance of controversy On the other hand for controversial debates we focused on political events such as elections corruption cases or justice decisions To further establish the presence or absence of controversy in our datasets we visualized the corresponding networks through ForceAtlas2 BIBREF29 Figures FIGREF9 and FIGREF9 show an example of how noncontroversial and controversial discussions look respectively with the ForceAtlas2 layout As we can see in these figures in a controversial discussion this layout tends to show two well separated groups while in a noncontroversial one it tends to show only one big group More information on the discussions is given in Table TABREF24 To avoid potential overfitting we use only twelve graphs as a testbed during the development of the measures half of them controversial netanyahu ukraine mauriciomacri 111 Jan Kavanaugh 3 Oct mauriciomacri 1118 Mar Bolsonaro 27 Oct and half noncontroversial sxsw germanwings onedirection ultralive nepal mothersday This procedure resembles a 4060 traintest split in traditional machine learning applications Some of the discussions we consider refer to the same topics but in different periods of time We needed to split them because our computing infrastructure does not allow us to compute such an enormous amount of data However being able to estimate controversy with only a subset of the discussion is an advantage because discussions could take many days or months and we want to identify controversy as soon as possible without the need to download the whole discussion Moreover for very long lasting discussions in social networks gathering the whole data would be impractical for any method Training a Fasttext model is not a deterministic process as different runs could yield different results even using the same training set in each one To analyze whether these differences are significant we decided to compute 20 scores for each discussion The standard deviations among these 20 scores were low in all cases with mean 001 and maximum 005 Consequently we decided to report in this paper the average over the 20 scores in practice taking the average over 5 runs would be enough Figure FIGREF18 reports the scores computed by our measure on each topic for the two clustering methods The beanplot shows the estimated probability density function for a measure computed on the topics the individual observations are shown as small
white lines in a onedimensional scatter plot and the median as a longer black line The beanplot is divided into two groups one for controversial topics leftdark and one for noncontroversial ones rightlight Hence the black group shows the score distribution over controversial discussions and the white group over noncontroversial ones A larger separation of the two distributions indicates that the measure is better at capturing the characteristics of controversial topics because a good separation allows to establish a threshold in the score that separates controversial and noncontroversial discussions As we may see in the figure the medians are well separated in both cases with little overlapping To better quantify this overlap we measure the sensitivity BIBREF42 of these predictions by measuring the area under the ROC curve AUC ROC obtaining a value of 098 for Walktrap clustering and 0967 for Louvain where 1 represents a perfect separation and 05 means that they are indistinguishable As Garimella et al BIBREF23 have made their code public we reproduced their best method Randomwalk on our datasets and measured the AUC ROC obtaining a score of 0935 An interesting finding was that their method had a poor performance over their own datasets This was due to the fact already explained in Section SECREF4 that it was not possible to retrieve the complete discussions moreover in no case could we restore more than 50 of the tweets So we decided to remove these discussions and measure again the AUC ROC of this method obtaining a 099 value Our hypothesis is that the performance of that method was seriously hurt by the incompleteness of the data We also tested our method on these datasets obtaining a 099 AUC ROC with Walktrap and 0989 with Louvain clustering We conclude that our method works better as in practice both approaches show same performances specially with Walktrap but in presence of incomplete information our measure is more robust The performance of Louvain is slightly worse but as we mentioned in Section SECREF3 this method is much faster Therefore we decided to compare the running time of our method with both clustering techniques and also with the Randomwalk algorithm In figure FIGREF18 we can see the distribution of running times of all techniques through box plots Both versions of our method are faster than Randomwalk while Louvain is faster than Walktrap We now analyze the impact of the length of the considered text in our method Figure FIGREF18 depicts the results of similar experiment as Figure FIGREF18 but considering only 140 characters per tweet As we may see here the overlapping is bigger having an AUC of 088 As for the impact on computing time we observe that despite of the results of BIBREF34 that reported a complexity of Oh log2k at training and test tasks in practice we observed a linear growth We measured the running times of the training and predicting phases the two textrelated phases of our method the resulting times are reported in figure FIGREF18 which shows running time as a function of the textsize We include also the best estimated function that approximate computing time as a function of textset size As it may be seen time grows almost linearly ranging from 30 seconds for a set of 111 KB to 84 seconds for a set of 11941 KB Finally we measured running times for the whole method over each dataset with 280 characters Times were between 170 and 2467 seconds with a mean of 842 making it in practice a reasonable amount of time The task we address in this work is certainly 
not an easy one and our study has some limitations which we discuss in this section Our work lead us to some conclusions regarding the overall possibility of measuring controversy through text and what aspects need to be considered to deepen our work As our approach to controversy is similar to that of Garimella et al BIBREF23 we share some of their limitations with respect to several aspects Evaluation difficulties to establish groundtruth Multisided controversies controversy with more than two sides Choice of data manually pick topics and Overfitting small set of experiments Although we have more discussions it is still small set from statistical point of view Apart from that our languagebased approach has other limitations which we mention in the following together with their solutions or mitigation Datasize Training an NLP model that can predict tags with a probability greater or equal than 09 requires significant amount of text therefore our method works only for big discussions Most interesting controversies are those that have consequence at a society level in general big enough for our method Multilanguage discussions When multiple languages are participating in a discussion it is common that users tend to retweet more tweets in their own language creating subcommunities In this cases our model will tend to predict higher controversy scores This is the case for example of germanwings where users tweet in English German and Spanish and it has the highest score in nocontroversial topics However the polarization that we tackle in this work is normally part of a society cell a nation a city etc and thus developed in just one language We think that limiting the effectiveness of our analysis to singlelanguage discussions is not a serious limitation Twitter only Our findings are based on datasets coming from Twitter While this is certainly a limitation Twitter is one of the main venues for online public discussion and one of the few for which data is available Hence Twitter is a natural choice However Twitters characteristic limit of 280 characters per message 140 till short time ago is an intrinsic limitation of that network We think that in other social networks as Facebook or Reddit our method will work even better as having more text per user could redound on a better NLP model as we verified comparing the results with 140 and 280 characters per post In this article we introduced the first largescale systematic method for quantifying controversy in social media through content We have shown that this method works on Spanish English French and Portuguese it is contextagnostic and does not require the intervention of a domain expert We have compared its performance with stateoftheart structurebased controversy measures showing that they have same performance and it is more robust We also have shown that more text implies better performance and without significantly increasing computing time therefore it could be used in other contexts such as other social networks like Reddit or Facebook and we are going to test it in future works Training the model is not an expensive task since Fasttext has a good performance at this However the best performance for detecting principal communities is obtained by Walktrap The complexity of that algorithm is Omn2BIBREF38 where m and n are the number of edges and vertices respectively This makes this method rather expensive to compute on big networks Nevertheless we have shown that with Louvain the method still obtains a very similar AUC ROC 099 with 
Walktrap and 0989 with Louvain With incomplete information its performance gets worse but it is still good 096 and better than previous state of the art This work opens several avenues for future research One is identifying what words semanticsconcepts or language expressions make differ one community from the other There are various ways to do this for instance through the wordembbedings that Fasttext returns after training BIBREF34 Also we could use interpretability techniques on machine learning models BIBREF43 Finally we could try other techniques for measuring controversy through text using another NLP model as pretrained neural network BERT BIBREF44 or in a completely different approach measuring the dispersion index of the discussions wordembbedings BIBREF25 We are currently starting to follow this direction F </s>
<s>[INST] Semantic Sentiment Analysis of Twitter Data [/INST] Microblog sentiment analysis Twitter opinion mining Sentiment Analysis This is text analysis aiming to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a piece of text Sentiment analysis on Twitter is the use of natural language processing techniques to identify and categorize opinions expressed in a tweet in order to determine the authors attitude toward a particular topic or in general Typically discrete labels such as positive negative neutral and objective are used for this purpose but it is also possible to use labels on an ordinal scale or even continuous numerical values Internet and the proliferation of smart mobile devices have changed the way information is created shared and spreads eg microblogs such as Twitter weblogs such as LiveJournal social networks such as Facebook and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world This has resulted in the proliferation of social media content thus creating new opportunities to study public opinion at a scale that was never possible before Naturally this abundance of data has quickly attracted business and research interest from various fields including marketing political science and social studies among many others which are interested in questions like these Do people like the new Apple Watch What do they hate about iPhone6 Do Americans support ObamaCare What do Europeans think of Popes visit to Palestine How do we recognize the emergence of health problems such as depression Do Germans like how Angela Merkel is handling the refugee crisis in Europe What do republican voters in USA likehate about Donald Trump How do Scottish feel about the Brexit Answering these questions requires studying the sentiment of opinions people express in social media which has given rise to the fast growth of the field of sentiment analysis in social media with Twitter being especially popular for research due to its scale representativeness variety of topics discussed as well as ease of public access to its messages BIBREF0 BIBREF1 Despite all these opportunities the rise of social media has also presented new challenges for natural language processing NLP applications which had largely relied on NLP tools tuned for formal text genres such as newswire and thus were not readily applicable to the informal language and style of social media That language proved to be quite challenging with its use of creative spelling and punctuation misspellings slang new words URLs and genrespecific terminology and abbreviations eg RT for retweet and hashtags In addition to the genre difference there is also a difference in length social media messages are generally short often lengthlimited by design as in Twitter ie a sentence or a headline rather than a full document How to handle such challenges has only recently been the subject of thorough research BIBREF2 BIBREF3 BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 Sentiment analysis has a wide number of applications in areas such as market research political and social sciences and for studying public opinion in general and Twitter is one of the most commonlyused platforms for this This is due to its streaming nature which allows for realtime analysis to its social aspect which encourages people to share opinions and to the short size of the tweets which simplifies linguistic analysis There are several 
formulations of the task of Sentiment Analysis on Twitter that look at different sizes of the target eg at the level of words vs phrases vs tweets vs sets of tweets at different types of semantic targets eg aspect vs topic vs overall tweet at the explicitness of the target eg sentiment vs stance detection at the scale of the expected label 2point vs 3point vs ordinal etc All these are explored at SemEval the International Workshop on Semantic Evaluation which has created a number of benchmark datasets and has enabled direct comparison between different systems and approaches both as part of the competition and beyond Traditionally the task has been addressed using supervised and semisupervised methods as well as using distant supervision with the most important resource being sentiment polarity lexicons and with featurerich approaches as the dominant research direction for years With the recent rise of deep learning which in many cases eliminates the need for any explicit feature modeling the importance of both lexicons and features diminishes while at the same time attention is shifting towards learning from large unlabeled data which is needed to train the high number of parameters of such complex models Finally as methods for sentiment analysis mature more attention is also being paid to linguistic structure and to multilinguality and crosslinguality Sentiment analysis emerged as a popular research direction in the early 2000s Initially it was regarded as standard document classification into topics such as business sport and politics BIBREF10 However researchers soon realized that it was quite different from standard document classification BIBREF11 and that it crucially needed external knowledge in the form of sentiment polarity lexicons Around the same time other researchers realized the importance of external sentiment lexicons eg Turney BIBREF12 proposed an unsupervised approach to learn the sentiment orientation of wordsphrases positive vs negative Later work studied the linguistic aspects of expressing opinions evaluations and speculations BIBREF13 the role of context in determining the sentiment orientation BIBREF14 of deeper linguistic processing such as negation handling BIBREF15 of finergrained sentiment distinctions BIBREF16 of positional information BIBREF17 etc Moreover it was recognized that in many cases it is crucial to know not just the polarity of the sentiment but also the topic toward which this sentiment is expressed BIBREF18 Until the rise of social media research on opinion mining and sentiment analysis had focused primarily on learning about the language of sentiment in general meaning that it was either genreagnostic BIBREF19 or focused on newswire texts BIBREF20 and customer reviews eg from web forums most notably about movies BIBREF10 and restaurants BIBREF21 but also about hotels digital cameras cell phones MP3 and DVD players BIBREF22 laptops BIBREF21 etc This has given rise to several resources mostly word and phrase polarity lexicons which have proven to be very valuable for their respective domains and types of texts but less useful for short social media messages Later with the emergence of social media sentiment analysis in Twitter became a hot research topic Unfortunately research in that direction was hindered by the unavailability of suitable datasets and lexicons for system training development and testing While some Twitterspecific resources were developed initially they were either small and proprietary such as the isieve corpus BIBREF6 were 
created only for Spanish like the TASS corpus BIBREF23 or relied on noisy labels obtained automatically eg based on emoticons and hashtags BIBREF24 BIBREF25 BIBREF10 This situation changed with the shared task on Sentiment Analysis on Twitter which was organized at SemEval the International Workshop on Semantic Evaluation a semantic evaluation forum previously known as SensEval The task ran in 2013 2014 2015 and 2016 attracting over 40 participating teams in all four editions While the focus was on general tweets the task also featured outofdomain testing on SMS messages LiveJournal messages as well as on sarcastic tweets SemEval2013 Task 2 BIBREF26 and SemEval2014 Task 9 BIBREF27 focused on expressionlevel and messagelevel polarity SemEval2015 Task 10 BIBREF28 BIBREF29 featured topicbased message polarity classification on detecting trends toward a topic and on determining the outofcontext a priori strength of association of Twitter terms with positive sentiment SemEval2016 Task 4 BIBREF30 introduced a 5point scale which is used for human review ratings on popular websites such as Amazon TripAdvisor Yelp etc from a research perspective this meant moving from classification to ordinal regression Moreover it focused on quantification ie determining what proportion of a set of tweets on a given topic are positivenegative about it It also featured a 5point scale ordinal quantification subtask BIBREF31 Other related tasks have explored aspectbased sentiment analysis BIBREF32 BIBREF33 BIBREF21 sentiment analysis of figurative language on Twitter BIBREF34 implicit event polarity BIBREF35 stance in tweets BIBREF36 outofcontext sentiment intensity of phrases BIBREF37 and emotion detection BIBREF38 Some of these tasks featured languages other than English Tweetlevel sentiment The simplest and also the most popular task of sentiment analysis on Twitter is to determine the overall sentiment expressed by the author of a tweet BIBREF30 BIBREF28 BIBREF26 BIBREF29 BIBREF27 Typically this means choosing one of the following three classes to describe the sentiment Positive Negative and Neutral Here are some examples Positive nokia lumia620 cute and small and pocketsize and available in the brigh test colours of day lumiacaption Negative I hate tweeting on my iPhone 5 its so small Neutral If you work as a security in a samsung storeDoes that make you guardian of the galaxy Sentiment polarity lexicons Naturally the overall sentiment in a tweet can be determined based on the sentimentbearing words and phrases it contains as well as based on emoticons such as and For this purpose researchers have been using lexicons of sentimentbearing words For example cute is a positive word while hate is a negative one and the occurrence of these words in 1 and 2 can help determine the overall polarity of the respective tweet We will discuss these lexicons in more detail below Prior sentiment polarity of multiword phrases Unfortunately many sentimentbearing words are not universally good or universally bad For example the polarity of an adjective could depend on the noun it modifies eg hot coffee and unpredictable story express positive sentiment while hot beer and unpredictable steering are negative Thus determining the outofcontext a priori strength of association of Twitter terms especially multiword terms with positivenegative sentiment is an active research direction BIBREF28 BIBREF29 Phraselevel polarity in context Even when the target noun is the same the polarity of the modifying adjective could be different in 
different tweets eg small is positive in 1 but negative in 2 even though they both refer to a phone Thus there has been research in determining the sentiment polarity of a term in the context of a tweet BIBREF26 BIBREF29 BIBREF27 Sarcasm Going back to tweetlevel sentiment analysis we should mention sarcastic tweets which are particularly challenging as the sentiment they express is often the opposite of what the words they contain suggest BIBREF4 BIBREF29 BIBREF27 For example 4 and 5 express a negative sentiment even though they contain positive words and phrases such as thanks love and boosts my morale Negative Thanks manager for putting me on the schedule for Sunday Negative I just love missing my train every single day Really boosts my morale Sentiment toward a topic Even though tweets are short as they are limited to 140 characters by design even though this was relaxed a bit as of September 19 2016 and now media attachments such as images videos polls etc and quoted tweets no longer reduce the character count they are still long enough to allow the tweets author to mention several topics and to express potentially different sentiment toward each of them A topic can be anything that people express opinions about eg a product eg iPhone6 a political candidate eg Donald Trump a policy eg Obamacare an event eg Brexit etc For example in 6 the author is positive about Donald Trump but negative about Hillary Clinton A political analyzer would not be interested so much in the overall sentiment expressed in the tweet even though one could argue that here it is positive overall but rather in the sentiment with respect to a topic of hisher interest of study As a democrat I couldnt ethically support Hillary no matter who was running against her Just so glad that its Trump just love the guy topic Hillary INLINEFORM0 Negative topic Trump INLINEFORM0 Positive Aspectbased sentiment analysis Looking again at 1 and 2 we can say that the sentiment is not about the phone lumia620 and iPhone 5 respectively but rather about some specific aspect thereof namely size Similarly in 7 instead of sentiment toward the topic lasagna we can see sentiment toward two aspects thereof quality Positive sentiment and quantity Negative sentiment Aspectbased sentiment analysis is an active research area BIBREF32 BIBREF33 BIBREF21 The lasagna is delicious but do not come here on an empty stomach Stance detection A task related to but arguably different in some respect from sentiment analysis is that of stance detection The goal here is to determine whether the author of a piece of text is in favor of against or neutral toward a proposition or a target BIBREF36 For example in 8 the author has a negative stance toward the proposition women have the right to abortion even though the target is not mentioned at all Similarly in 9 the author expresses a negative sentiment toward Mitt Romney from which one can imply that she has a positive stance toward the target Barack Obama A foetus has rights too Make your voice heard Target women have the right to abortion INLINEFORM0 Against All Mitt Romney cares about is making money for the rich Target Barack Obama INLINEFORM0 InFavor Ordinal regression The above tasks were offered in different granularities eg 2way Positive Negative 3way Positive Neutral Negative 4way Positive Neutral Negative Objective 5way HighlyPositive Positive Neutral Negative HighlyNegative and sometimes even 11way BIBREF34 It is important to note that the 5way and the 11way scales are ordinal ie the classes can be 
associated with numbers eg INLINEFORM0 2 INLINEFORM1 1 0 1 and 2 for the 5point scale This changes the machine learning task as not all mistakes are equal anymore BIBREF16 For example misclassifying a HighlyNegative example as HighlyPositive is a bigger mistake than misclassifying it as Negative or as Neutral From a machine learning perspective this means moving from classification to ordinal regression This also requires different evaluation measures BIBREF30 Quantification Practical applications are hardly ever interested in the sentiment expressed in a specific tweet Rather they look at estimating the prevalence of positive and negative tweets about a given topic in a set of tweets from some time interval Most if not all tweet sentiment classification studies conducted within political science BIBREF39 BIBREF40 BIBREF41 economics BIBREF42 BIBREF7 social science BIBREF43 and market research BIBREF44 BIBREF45 use Twitter with an interest in aggregate data and not in individual classifications Thus some tasks such as SemEval2016 Task 4 BIBREF30 replace classification with class prevalence estimation which is also known as quantification in data mining and related fields Note that quantification is not a mere byproduct of classification since a good classifier is not necessarily a good quantifier and vice versa BIBREF46 Finally in case of multiple labels on an ordinal scale we have yet another machine learning problem ordinal quantification Both versions of quantification require specific evaluation measures and machine learning algorithms Preprocessing Tweets are subject to standard preprocessing steps for text such as tokenization stemming lemmatization stopword removal and partofspeech tagging Moreover due to their noisy nature they are also processed using some Twitterspecific techniques such as substitutionremoval of URLs of user mentions of hashtags and of emoticons spelling correction elongation normalization abbreviation lookup punctuation removal detection of amplifiers and diminishers negation scope detection etc For this one typically uses Twitterspecific NLP tools such as partofspeech and named entity taggers syntactic parsers etc BIBREF47 BIBREF48 BIBREF49 Negation handling Special handling is also done for negation The most popular approach to negation handling is to transform any word that appeared in a negation context by adding a suffix NEG to it eg good would become goodNEG BIBREF50 BIBREF10 A negated context is typically defined as a text span between a negation word eg no not shouldnt and a punctuation mark or the end of the message Alternatively one could flip the polarity of sentiment words eg the positive word good would become negative when negated It has also been argued BIBREF51 that negation affects different words differently and thus it was also proposed to build and use special sentiment polarity lexicons for words in negation contexts BIBREF52 Features Traditionally systems for Sentiment Analysis on Twitter have relied on handcrafted features derived from wordlevel eg great freshly roasted coffee becoming president and characterlevel INLINEFORM0 grams eg bec beco comin oming stems eg becom lemmata eg become roast punctuation eg exclamation and question marks partofspeech tags eg adjectives adverbs verbs nouns word clusters eg probably probly and maybe could be collapsed to the same word cluster and Twitterspecific encodings such as emoticons eg D hashtags Brexit user tags eg allenaiorg abbreviations eg RT BTW F2F OMG elongated words eg soooo yaayyy use of 
capitalization eg proportion of ALL CAPS words URLs etc Finally the most important features are those based on the presence of words and phrases in sentiment polarity lexicons with positivenegative scores examples of such features include number of positive terms number of negative terms ratio of the number of positive terms to the number of positivenegative terms ratio of the number of negative terms to the number of positivenegative terms sum of all positive scores sum of all negative scores sum of all scores etc Supervised learning Traditionally the above features were fed into classifiers such as Maximum Entropy MaxEnt and Support Vector Machines SVM with various kernels However observation over the SemEval Twitter sentiment task in recent years shows growing interest in and by now clear dominance of methods based on deep learning In particular the bestperforming systems at SemEval2015 and SemEval2016 used deep convolutional networks BIBREF53 BIBREF54 Conversely kernel machines seem to be less frequently used than in the past and the use of learning methods other than the ones mentioned above is at this point scarce All these models are examples of supervised learning as they need labeled training data Semisupervised learning We should note two things about the use of deep neural networks First they can often do quite well without the need for explicit feature modeling as they can learn the relevant features in their hidden layers starting from the raw text Second they have too many parameters and thus they require a lot of training data orders of magnitude more than it is realistic to have manually annotated A popular way to solve this latter problem is to use self training a form of semisupervised learning where first a system is trained on the available training data only then this system is applied to make predictions on a large unannotated set of tweets and finally it is trained for a few more iterations on its own predictions This works because parts of the network eg with convolution or with LSTMs BIBREF55 BIBREF54 BIBREF56 need to learn something like a language model ie which word is likely to follow which one Training these parts needs no labels While these parts can be also pretrained it is easier and often better to use self training Distantlysupervised learning Another way to make use of large unannotated datasets is to rely on distant supervision BIBREF41 For example one can annotate tweets for sentiment polarity based on whether they contain a positive or a negative emoticon This results in noisy labels which can be used to train a system BIBREF54 to induce sentimentspecific word embeddings BIBREF57 sentimentpolarity lexicons BIBREF25 etc Unsupervised learning Fully unsupervised learning is not a popular method for addressing sentiment analysis tasks Yet some features used in sentiment analysis have been learned in an unsupervised way eg Brown clusters to generalize over words BIBREF58 Similarly word embeddings are typically trained from raw tweets that have no annotation for sentiment even though there is also work on sentimentspecific word embeddings BIBREF57 which uses distant supervision Despite the wide variety of knowledge sources explored so far in the literature sentiment polarity lexicons remain the most commonly used resource for the task of sentiment analysis Until recently such sentiment polarity lexicons were manually crafted and were thus of small to moderate size eg LIWC BIBREF59 has 2300 words the General Inquirer BIBREF60 contains 4206 words Bing Lius 
lexicon BIBREF22 includes 6786 words and MPQA BIBREF14 has about 8000 words Early efforts toward building sentiment polarity lexicons automatically yielded lexicons of moderate sizes such as the SentiWordNet BIBREF19 BIBREF61 However recent results have shown that automatically extracted largescale lexicons eg up to a million words and phrases offer important performance advantages as confirmed at shared tasks on Sentiment Analysis on Twitter at SemEval 20132016 BIBREF30 BIBREF26 BIBREF29 BIBREF27 Using such largescale lexicons was crucial for the performance of the topranked systems Similar observations were made in the related AspectBased Sentiment Analysis task at SemEval 2014 BIBREF21 In both tasks the winning systems benefitted from building and using massive sentiment polarity lexicons BIBREF25 BIBREF62 The two most popular largescale lexicons were the Hashtag Sentiment Lexicon and the Sentiment140 lexicon which were developed by the team of NRC Canada for their participation in the SemEval2013 shared task on sentiment analysis on Twitter Similar automatically induced lexicons proved useful for other SemEval tasks eg for SemEval2016 Task 3 on Community Question Answering BIBREF63 BIBREF30 The importance of building sentiment polarity lexicons has resulted in a special subtask BIBREF29 at SemEval2015 part of Task 4 and an entire task BIBREF37 at SemEval2016 namely Task 7 on predicting the outofcontext sentiment intensity of words and phrases Yet we should note though that the utility of using sentiment polarity lexicons for sentiment analysis probably needs to be revisited as the best system at SemEval2016 Task 4 could win without using any lexicons BIBREF53 it relied on semisupervised learning using a deep neural network Various approaches have been proposed in the literature for bootstrapping sentiment polarity lexicons starting from a small set of seeds positive and negative terms words and phrases The dominant approach is that of Turney BIBREF12 who uses pointwise mutual information and bootstrapping to build a large lexicon and to estimate the semantic orientation of each word in that lexicon He starts with a small set of seed positive eg excellent and negative words eg bad and then uses these words to induce sentiment polarity orientation for new words in a large unannotated set of texts in his case product reviews The idea is that words that cooccur in the same text with positive seed words are likely to be positive while those that tend to cooccur with negative words are likely to be negative To quantify this intuition Turney defines the notion of sentiment orientation SO for a term INLINEFORM0 as follows INLINEFORM0 where PMI is the pointwise mutual information INLINEFORM0 and INLINEFORM1 are placeholders standing for any of the seed positive and negative terms respectively and INLINEFORM2 is a target wordphrase from the large unannotated set of texts here tweets A positivenegative value for INLINEFORM0 indicates positivenegative polarity for the word INLINEFORM1 and its magnitude shows the corresponding sentiment strength In turn INLINEFORM2 where INLINEFORM3 is the probability to see INLINEFORM4 with any of the seed positive words in the same tweet INLINEFORM5 is the probability to see INLINEFORM6 in any tweet and INLINEFORM7 is the probability to see any of the seed positive words in a tweet INLINEFORM8 is defined similarly The pointwise mutual information is a notion from information theory given two random variables INLINEFORM0 and INLINEFORM1 the mutual information of 
the two random variables is the amount of information in units such as bits obtained about one of the variables by observing the other BIBREF64 Let x and y be two values from the sample spaces of the two variables respectively The pointwise mutual information between x and y is defined as follows $\mathrm{PMI}(x, y) = \log_2 \frac{p(x, y)}{p(x)\,p(y)}$ $\mathrm{PMI}(x, y)$ takes values between $-\infty$ which happens when $p(x, y) = 0$ and $\min(-\log_2 p(x), -\log_2 p(y))$ which is reached when the two values always cooccur
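As a rough illustration of this definition, the sketch below induces Turney-style sentiment orientation scores SO(w) = PMI(w, positive seeds) - PMI(w, negative seeds) from a collection of tokenized tweets; the tweet-level probability estimates and the add-one smoothing are assumptions made here to avoid taking the logarithm of zero, not details prescribed by the original papers, and pos_seeds and neg_seeds are assumed to be small Python sets of seed terms.

```python
# Sketch of PMI-based bootstrapping of a sentiment polarity lexicon.
import math
from collections import Counter

def sentiment_orientation(tweets, pos_seeds, neg_seeds):
    n = len(tweets)
    word_count, pos_co, neg_co = Counter(), Counter(), Counter()
    n_pos = n_neg = 0
    for tokens in tweets:
        toks = set(tokens)
        has_pos, has_neg = bool(toks & pos_seeds), bool(toks & neg_seeds)
        n_pos += has_pos
        n_neg += has_neg
        for w in toks:
            word_count[w] += 1
            pos_co[w] += has_pos   # co-occurrence with any positive seed in the same tweet
            neg_co[w] += has_neg   # co-occurrence with any negative seed in the same tweet

    def pmi(co, w, n_seed):
        # PMI(w, seeds) = log2( p(w, seeds) / (p(w) * p(seeds)) ), with add-one smoothing
        return math.log2(((co[w] + 1) * n) / ((word_count[w] + 1) * (n_seed + 1)))

    return {w: pmi(pos_co, w, n_pos) - pmi(neg_co, w, n_neg) for w in word_count}
```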
For general research on sentiment analysis, we recommend the following surveys: BIBREF76 and BIBREF15. For sentiment analysis on Twitter, we recommend the overview article on sentiment analysis on Twitter for the SemEval task BIBREF28, as well as the task description papers for the different editions of the task BIBREF30 BIBREF26 BIBREF29 BIBREF27. </s>
<s>[INST] COSTRA 1.0: A Dataset of Complex Sentence Transformations [/INST] Vector representations are becoming truly essential in majority of natural language processing tasks Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1 and their properties have been analyzed in length from various aspects Studies of word embeddings range from word similarity BIBREF2 BIBREF3 over the ability to capture derivational relations BIBREF4 linear superposition of multiple senses BIBREF5 the ability to predict semantic hierarchies BIBREF6 or POS tags BIBREF7 up to data efficiency BIBREF8 Several studies BIBREF9 BIBREF10 BIBREF11 BIBREF12 show that word vector representations are capable of capturing meaningful syntactic and semantic regularities These include for example malefemale relation demonstrated by the pairs manwoman kingqueen and the countrycapital relation RussiaMoscow JapanTokyo These regularities correspond to simple arithmetic operations in the vector space Sentence embeddings are becoming equally ubiquitous in NLP with novel representations appearing almost every other week With an overwhelming number of methods to compute sentence vector representations the study of their general properties becomes difficult Furthermore it is not so clear in which way the embeddings should be evaluated In an attempt to bring together more traditional representations of sentence meanings and the emerging vector representations bojaretaljnlerepresentations2019 introduce a number of aspects or desirable properties of sentence embeddings One of them is denoted as relatability which highlights the correspondence between meaningful differences between sentences and geometrical relations between their respective embeddings in the highly dimensional continuous vector space If such a correspondence could be found we could use geometrical operations in the space to induce meaningful changes in sentences In this work we present COSTRA a new dataset of COmplex Sentence TRAnsformations In its first version the dataset is limited to sample sentences in Czech The goal is to support studies of semantic and syntactic relations between sentences in the continuous space Our dataset is the prerequisite for one of possible ways of exploring sentence meaning relatability we envision that the continuous space of sentences induced by an ideal embedding method would exhibit topological similarity to the graph of sentence variations For instance one could argue that a subset of sentences could be organized along a linear scale reflecting the formalness of the language used Another set of sentences could form a partially ordered set of gradually less and less concrete statements And yet another set intersecting both of the previous ones in multiple sentences could be partially or linearly ordered according to the strength of the speakers confidence in the claim Our long term goal is to search for an embedding method which exhibits this behaviour ie that the topological map of the embedding space corresponds to meaningful operations or changes in the set of sentences of a language or more languages at once We prefer this behaviour to emerge as it happened for word vector operations but regardless if the behaviour is emergent or trained we need a dataset of sentences illustrating these patterns If large enough such a dataset could serve for training If it will be smaller it will provide a test set In either case these sentences could provide a skeleton to the continuous space of sentence 
embeddings The paper is structured as follows related summarizes existing methods of sentence embeddings evaluation and related work annotation describes our methodology for constructing our dataset data details the obtained dataset and some first observations We conclude and provide the link to the dataset in conclusion As hinted above there are many methods of converting a sequence of words into a vector in a highly dimensional space To name a few BiLSTM with the maxpooling trained for natural language inference BIBREF13 masked language modeling and next sentence prediction using bidirectional Transformer BIBREF14 maxpooling last states of neural machine translation among many languages BIBREF15 or the encoder final state in attentionless neural machine translation BIBREF16 The most common way of evaluating methods of sentence embeddings is extrinsic using so called transfer tasks ie comparing embeddings via the performance in downstream tasks such as paraphrasing entailment sentence sentiment analysis natural language inference and other assignments However even simple bagofwords BOW approaches achieve often competitive results on such tasks BIBREF17 Adi16 introduce intrinsic evaluation by measuring the ability of models to encode basic linguistic properties of a sentence such as its length word order and word occurrences These so called probing tasks are further extended by a depth of the syntactic tree top constituent or verb tense by DBLPjournalscorrabs180501070 Both transfer and probing tasks are integrated in SentEval BIBREF18 framework for sentence vector representations Later Perone2018 applied SentEval to eleven different encoding methods revealing that there is no consistently well performing method across all tasks SentEval was further criticized for pitfalls such as comparing different embedding sizes or correlation between tasks BIBREF19 BIBREF20 shietal2016string show that NMT encoder is able to capture syntactic information about the source sentence DBLPjournalscorrBelinkovDDSG17 examine the ability of NMT to learn morphology through POS and morphological tagging Still very little is known about semantic properties of sentence embeddings Interestingly cifkabojarmeanings2018 observe that the better selfattention embeddings serve in NMT the worse they perform in most of SentEval tasks zhuetal2018exploring generate automatically sentence variations such as Original sentence A rooster pecked grain Synonym Substitution A cock pecked grain NotNegation A rooster didnt peck grain QuantifierNegation There was no rooster pecking grain and compare their triplets by examining distances between their embeddings ie distance between 1 and 2 should be smaller than distances between 1 and 3 2 and 3 similarly 3 and 4 should be closer together than 13 or 14 In our previous study BIBREF21 we examined the effect of small sentence alternations in sentence vector spaces We used sentence pairs automatically extracted from datasets for natural language inference BIBREF22 BIBREF23 and observed that the simple vector difference familiar from word embeddings serves reasonably well also in sentence embedding spaces The examined relations were however very simple a change of gender number addition of an adjective etc The structure of the sentence and its wording remained almost identical We would like to move to more interesting nontrivial sentence comparison beyond those in zhuetal2018exploring or BaBo2019 such as change of style of a sentence the introduction of a small modification that drastically 
changes the meaning of a sentence or reshuffling of words in a sentence that alters its meaning Unfortunately such a dataset cannot be generated automatically and it is not available to our best knowledge We try to start filling this gap with COSTRA 10 We acquired the data in two rounds of annotation In the first one we were looking for original and uncommon sentence change suggestions In the second one we collected sentence alternations using ideas from the first round The first and second rounds of annotation could be broadly called as collecting ideas and collecting data respectively We manually selected 15 newspaper headlines Eleven annotators were asked to modify each headline up to 20 times and describe the modification with a short name They were given an example sentence and several of its possible alternations see tabfirstroundexamples Unfortunately these examples turned out to be highly influential on the annotators decisions and they correspond to almost two thirds of all of modifications gathered in the first round Other very common transformations include change of a word order or transformation into a interrogativeimperative sentence Other interesting modification were also proposed such as change into a fairytale style excessive use of diminutivesvulgarisms or dadaisma swap of roles in the sentence so that the resulting sentence is grammatically correct but nonsensical in our world Of these suggestions we selected only the dadaistic swap of roles for the current exploration see nonsense in Table TABREF7 In total we collected 984 sentences with 269 described unique changes We use them as an inspiration for second round of annotation We selected 15 modifications types to collect COSTRA 10 They are presented in annotationinstructions We asked for two distinct paraphrases of each sentence because we believe that a good sentence embedding should put paraphrases close together in vector space Several modification types were specifically selected to constitute a thorough test of embeddings In different meaning the annotators should create a sentence with some other meaning using the same words as the original sentence Other transformations which should be difficult for embeddings include minimal change in which the sentence meaning should be significantly changed by using only very small modification or nonsense in which words of the source sentence should be shuffled so that it is grammatically correct but without any sense The source sentences for annotations were selected from Czech data of Global Voices BIBREF24 and OpenSubtitles BIBREF25 We used two sources in order to have different styles of seed sentences both journalistic and common spoken language We considered only sentences with more than 5 and less than 15 words and we manually selected 150 of them for further annotation This step was necessary to remove sentences that are too unreal out of this world such as Jedno fotonov torpdo a je z tebe vesmrn topinka One photon torpedo and youre a space toast photo captions ie incomplete sentences eg Zvltn ekvdorsk ppad Correa vs Crudo Specific Ecuadorian case Correa vs Crudo too vague overly dependent on the context B tam a mluv na ni Go there and speak to her Many of the intended sentence transformations would be impossible to apply to such sentences and annotators time would be wasted Even after such filtering it was still quite possible that a desired sentence modification could not be achieved for a sentence For such a case we gave the annotators the option to enter the 
keyword IMPOSSIBLE instead of the particular impossible modification. This option allowed annotators to state explicitly that no such transformation is possible. At the same time, most of the transformations are likely to lead to a large number of possible outcomes. As documented in scratching2013, a Czech sentence might have hundreds of thousands of paraphrases. To support at least a minimal exploration of this possible diversity, most sentences were assigned to several annotators. The annotation is a challenging task and the annotators naturally make mistakes. Unfortunately, a single typo can significantly influence the resulting embedding BIBREF26. After collecting all the sentence variations, we applied the statistical spellchecker and grammar checker Korektor BIBREF27 in order to minimize the influence of typos on the performance of embedding methods. We manually inspected the 519 errors identified by Korektor and fixed the 129 that were identified correctly. In the second round, we collected 293 annotations from 12 annotators. After Korektor, there are 4262 unique sentences (including the 150 seed sentences) that form the COSTRA 1.0 dataset. Statistics of individual annotators are available in tabstatistics. The time needed to carry out one piece of annotation, i.e., to provide one seed sentence with all 15 transformations, was on average almost 20 minutes, but some annotators easily needed even half an hour. Out of the 4262 distinct sentences, only 188 were recorded more than once. In other words, the chance of two annotators producing the same output string is quite low. The most repeated transformations are by far past, future, and ban; the least repeated is paraphrase, with only a single repeated instance. multipleannots documents this in another way: the 293 annotations are split into groups depending on how many annotators saw the same input sentence (30 annotations were annotated by one person only, 30 annotations by two different persons, etc.), and the last column shows the number of unique outputs obtained in that group. Across all cases, 96.8% of the produced strings were unique. In line with the instructions, the annotators used the IMPOSSIBLE option scarcely: 95 times, i.e., in only 2% of the cases. It was also used by only 7 annotators; the remaining 5 annotators were capable of producing all requested transformations. The top three transformations considered unfeasible were different meaning using the same set of words, past (especially for sentences already in the past tense), and simple sentence. We embedded the COSTRA sentences with LASER BIBREF15, the method that performed very well in revealing linear relations in BaBo2019. Having browsed a number of 2D visualizations (PCA and t-SNE) of the space, we have to conclude that, visually, the LASER space does not seem to exhibit any of the desired topological properties discussed above; see figpca for one example. The lack of semantic relations in the LASER space is also reflected in the vector similarities summarized in similarities. The minimal change operation substantially changed the meaning of the sentence, and yet the embedding of the transformation lies very close to the original sentence (average similarity of 0.930). Tense changes and some forms of negation or banning also keep the vectors very similar. The lowest average similarities were observed for generalization (0.739) and simplification (0.781), which is not a bad sign. However, the fact that paraphrases have a much lower similarity (0.826) than opposite meaning (0.902) documents that the vector space is lacking in terms of relatability.
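A similarity analysis of this kind can be reproduced along the following lines in Python. The embedding function is passed in as a parameter (e.g., a LASER wrapper), so no particular toolkit API is assumed; the random stand-in embedder and the two toy records are purely illustrative and only demonstrate the aggregation of per-transformation cosine similarities.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def transformation_similarities(records, embed):
    """records: list of (seed_sentence, transformation_label, transformed_sentence).
    embed: any callable mapping a list of strings to a 2-D array of embeddings."""
    seeds = [r[0] for r in records]
    outs = [r[2] for r in records]
    seed_vecs = embed(seeds)
    out_vecs = embed(outs)
    sims = {}
    for (seed, label, out), sv, ov in zip(records, seed_vecs, out_vecs):
        sims.setdefault(label, []).append(cosine(sv, ov))
    # Average cosine similarity per transformation type.
    return {label: float(np.mean(v)) for label, v in sims.items()}

# Stand-in embedder for demonstration only; replace with a real sentence encoder.
rng = np.random.default_rng(0)
toy_embed = lambda sentences: rng.normal(size=(len(sentences), 8))

records = [
    ("A rooster pecked grain.", "paraphrase", "A cock pecked grain."),
    ("A rooster pecked grain.", "negation", "A rooster did not peck grain."),
]
print(transformation_similarities(records, toy_embed))
```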
We presented COSTRA 1.0, a small corpus of complex transformations of Czech sentences. We plan to use this corpus to analyze a wide spectrum of sentence embedding methods, to see to what extent the continuous space they induce reflects the semantic relations between sentences in our corpus. The very first analysis, using LASER embeddings, indicates a lack of meaning relatability, i.e., the ability to move along a trajectory in the space in order to reach desired sentence transformations. In fact, not even paraphrases are found in close neighbourhoods of embedded sentences. More semantically informed sentence embedding methods are thus to be sought. The corpus is freely available at the following link: httphdlhandlenet1123413123 Aside from extending the corpus in Czech and adding other language variants, we are also considering wrapping COSTRA 1.0 into an API such as SentEval, so that it is very easy for researchers to evaluate their sentence embeddings in terms of relatability. </s>
<s>[INST] Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization [/INST] Todays increasing flood of information on the web creates a need for automated multidocument summarization systems that produce high quality summaries However producing summaries in a multidocument setting is difficult as the language used to display the same information in a sentence can vary significantly making it difficult for summarization models to capture Given the complexity of the task and the lack of datasets most researchers use extractive summarization where the final summary is composed of existing sentences in the input documents More specifically extractive summarization systems output summaries in two steps via sentence ranking where an importance score is assigned to each sentence and via the subsequent sentence selection where the most appropriate sentence is chosen by considering 1 their importance and 2 their frequency among all documents Due to data sparcity models heavily rely on welldesigned features at the word level BIBREF0 BIBREF1 BIBREF2 BIBREF3 or take advantage of other large manually annotated datasets and then apply transfer learning BIBREF4 Additionally most of the time all sentences in the same collection of documents are processed independently and therefore their relationships are lost In realistic scenarios features are hard to craft gathering additional annotated data is costly and the large variety in expressing the same fact cannot be handled by the use of wordbased features only as is often the case In this paper we address these obstacles by proposing to simultaneously leverage two types of sentence embeddings namely embeddings pretrained on a large corpus that capture a variety of meanings and domainspecific embeddings learned during training The former is typically trained on an unrelated corpus composed of high quality texts allowing to cover additional contexts for each encountered word and sentence Hereby we build on the assumption that sentence embeddings capture both the syntactic and semantic content of sentences We hypothesize that using two types of sentence embeddings general and domainspecific is beneficial for the task of multidocument summarization as the former captures the most common semantic structures from a large general corpus while the latter captures the aspects related to the domain We present SemSentSum Figure FIGREF3 a fully datadriven summarization system which does not depend on handcrafted features nor additional data and is thus domainindependent It first makes use of general sentence embedding knowledge to build a sentenc semantic relation graph that captures sentence similarities Section SECREF4 In a second step it trains genrespecific sentence embeddings related to the domains of the collection of documents by utilizing a sentence encoder Section SECREF5 Both representations are afterwards merged by using a graph convolutional network BIBREF5 Section SECREF6 Then it employs a linear layer to project highlevel hidden features for individual sentences to salience scores Section SECREF8 Finally it greedily produces relevant and nonredundant summaries by using sentence embeddings to detect similarities between candidate sentences and the current summary Section SECREF11 The main contributions of this work are as follows We aggregate two types of sentences embeddings using a graph representation They share different properties and are consequently complementary The first one is trained on a large unrelated corpus to 
model general semantics among sentences whereas the second is domainspecific to the dataset and learned during training Together they enable a model to be domainindependent as it can be applied easily on other domains Moreover it could be used for other tasks including detecting information cascades queryfocused summarization keyphrase extraction and information retrieval We devise a competitive multidocument summarization system which does not need handcrafted features nor additional annotated data Moreover the results are competitive for 665byte and 100word summaries Usually models are compared in one of the two settings but not both and thus lack comparability Let C denote a collection of related documents composed of a set of documents lbrace Dii in 1Nrbrace where N is the number of documents Moreover each document Di consists of a set of sentences lbrace Sijj in 1Mrbrace M being the number of sentences in Di Given a collection of related documents C our goal is to produce a summary Sum using a subset of these in the input documents ordered in some way such that Sum Si1j1Si2j2Sinjm In this section we describe how SemSentSum estimates the salience score of each sentence and how it selects a subset of these to create the final summary The architecture of SemSentSum is depicted in Figure FIGREF3 In order to perform sentence selection we first build our sentence semantic relation graph where each vertex is a sentence and edges capture the semantic similarity among them At the same time each sentence is fed into a recurrent neural network as a sentence encoder to generate sentence embeddings using the last hidden states A singlelayer graph convolutional neural network is then applied on top where the sentence semantic relation graph is the adjacency matrix and the sentence embeddings are the node features Afterward a linear layer is used to project highlevel hidden features for individual sentences to salience scores representing how salient a sentence is with respect to the final summary Finally based on this we devise an innovative greedy method that leverages sentence embeddings to detect redundant sentences and select sentences until reaching the summary length limit We model the semantic relationship among sentences using a graph representation In this graph each vertex is a sentence Sij jth sentence of document Di from the collection documents C and an undirected edge between Siuju and Sivjv indicates their degree of similarity In order to compute the semantic similarity we use the model of BIBREF6 trained on the English Wikipedia corpus In this manner we incorporate general knowledge ie not domainspecific that will complete the specialized sentence embeddings obtained during training see Section SECREF5 We process sentences by their model and compute the cosine similarity between every sentence pair resulting in a complete graph However having a complete graph alone does not allow the model to leverage the semantic structure across sentences significantly as every sentence pair is connected and likewise a sparse graph does not contain enough information to exploit semantic similarities Furthermore all edges have a weight above zero since it is very unlikely that two sentence embeddings are completely orthogonal To overcome this problem we introduce an edgeremovalmethod where every edge below a certain threshold tsimg is removed in order to emphasize high sentence similarity Nonetheless tsimg should not be too large as we otherwise found the model to be prone to overfitting After 
removing edges below tsimg our sentence semantic relation graph is used as the adjacency matrix A The impact of tsimg with different values is shown in Section SECREF26 Based on our aforementioned hypothesis that a combination of general and genrespecific sentence embeddings is beneficial for the task of multidocument summarization we further incorporate general sentence embeddings pretrained on Wikipedia entries into edges between sentences Additionally we compute specialised sentence embeddings which are related to the domains of the documents see Section SECREF35 Note that 1 the pretrained sentence embeddings are only used to compute the weights of the edges and are not used by the summarization model as others are produced by the sentence encoder and 2 the edge weights are static and do not change during training Given a list of documents C we encode each documents sentence Sij where each has at most L words wij1 wij2 wijL In our experiments all words are kept and converted into word embeddings which are then fed to the sentence encoder in order to compute specialized sentence embeddings Sprime ij We employ a singlelayer forward recurrent neural network using Long ShortTerm Memory LSTM of BIBREF7 as sentence encoder where the sentence embeddings are extracted from the last hidden states We then concatenate all sentence embeddings into a matrix X which constitutes the input node features that will be used by the graph convolutional network After having computed all sentence embeddings and the sentence semantic relation graph we apply a singlelayer Graph Convolutional Network GCN from BIBREF5 in order to capture highlevel hidden features for each sentence encapsulating sentence information as well as the graph structure We believe that our sentence semantic relation graph contains information not present in the data via universal embeddings and thus we leverage this information by running a graph convolution on the first order neighborhood The GCN model takes as input the node features matrix X and a squared adjacency matrix A The former contains all sentence embeddings of the collection of documents while the latter is our underlying sentence semantic relation graph It outputs hidden representations for each node that encode both local graph structure and nodess features In order to take into account the sentences themselves during the information propagation we add selfconnections ie the identity matrix to A such that tildeA A I Subsequently we obtain our sentence hidden features by using Equation DISPLAYFORM7 where Wi is the weight matrix of the ith graph convolution layer and bi the bias vector We choose the Exponential Linear Unit ELU activation function from BIBREF8 due to its ability to handle the vanishing gradient problem by pushing the mean unit activations close to zero and consequently facilitating the backpropagation By using only one hidden layer as we only have one inputtohidden layer and one hiddentooutput layer we limit the information propagation to the first order neighborhood We use a simple linear layer to estimate a salience score for each sentence and then normalize the scores via softmax and obtain our estimated salience score Ssij Our model SemSentSum is trained in an endtoend manner and minimizes the crossentropy loss of Equation DISPLAYFORM10 between the salience score prediction and the ROUGE1 F1 score for each sentence F1S is computed as the ROUGE1 F1 score unlike the common practice in the area of single and multidocument summarization as recall favors 
longer sentences whereas F1 prevents this tendency The scores are normalized via softmax While our model SemSentSum provides estimated saliency scores we use a greedy strategy to construct an informative and nonredundant summary Sum We first discard sentences having less than 9 words as in BIBREF9 and then sort them in descending order of their estimated salience scores We iteratively dequeue the sentence having the highest score and append it to the current summary Sum if it is nonredundant with respect to the current content of Sum We iterate until reaching the summary length limit To determine the similarity of a candidate sentence with the current summary a sentence is considered as dissimilar if and only if the cosine similarity between its sentence embeddings and the embeddings of the current summary is below a certain threshold tsims We use the pretrained model of BIBREF6 to compute sentence as well as summary embeddings similarly to the sentence semantic relation graph construction Our approach is novel since it focuses on the semantic sentence structures and captures similarity between sentence meanings instead of focusing on word similarities only like previous TFIDF approaches BIBREF0 BIBREF1 BIBREF3 BIBREF4 We conduct experiments on the most commonly used datasets for multidocument summarization from the Document Understanding Conferences DUC We use DUC 2001 2002 2003 and 2004 as the tasks of generic multidocument summarization because they have been carried out during these years We use DUC 2001 2002 2003 and 2004 for generic multidocument summarization where DUC 20012002 are used for training DUC 2003 for validation and finally DUC 2004 for testing following the common practice For the evaluation we use ROUGE BIBREF10 with the official parameters of the DUC tasks and also truncate the summaries to 100 words for DUC 200120022003 and to 665 bytes for DUC 2004 Notably we take ROUGE1 and ROUGE2 recall scores as the main metrics for comparison between produced summaries and golden ones as proposed by BIBREF11 The goal of the ROUGEN metric is to compute the ratio of the number of Ngrams from the generated summary matching these of the human reference summaries To define the edge weights of our sentence semantic relation graph we employ the 600dimensional pretrained unigram model of BIBREF6 using English Wikipedia as source corpus We keep only edges having a weight larger than tsimg 05 tuned on the validation set For word embeddings the 300dimensional pretrained GloVe embeddings BIBREF12 are used and fixed during training The output dimension of the sentence embeddings produced by the sentence encoder is the same as that of the word embeddings ie 300 For the graph convolutional network the number of hidden units is 128 and the size of the generated hidden feature vectors is also 300 We use a batch size of 1 a learning rate of 00075 using Adam optimizer BIBREF13 with beta 109 beta 20999 and epsilon 108 In order to make SemSentSum generalize better we use dropout BIBREF14 of 02 batch normalization BIBREF15 clip the gradient norm at 10 if higher add L2norm regularizer with a regularization factor of 1012 and train using early stopping with a patience of 10 iterations Finally the similarity threshold tsims in the summary generation process is 08 tuned on the validation set We train our model SemSentSum on DUC 20012002 tune it on DUC 2003 and assess the performance on DUC 2004 In order to fairly compare SemSentSum with other models available in the literature experiments are conducted 
with summaries truncated to 665 bytes official summary length in the DUC competition but also with summaries with a length constraint of 100 words To the best of our knowledge we are the first to conduct experiments on both summary lengths and compare our model with other systems producing either 100 words or 665 bytes summaries We investigate different methods to build our sentence semantic relation graph and vary the value of tsimg from 00 to 075 to study the impact of the threshold cutoff Among these are Cosine Using cosine similarity Tfidf Considering a node as the query and another as document The weight corresponds to the cosine similarity between the query and the document TextRank BIBREF16 A weighted graph is created where nodes are sentences and edges defined by a similarity measure based on word overlap Afterward an algorithm similar to PageRank BIBREF17 is used to compute sentence importance and refined edge weights LexRank BIBREF9 An unsupervised multidocument summarizer based on the concept of eigenvector centrality in a graph of sentences to set up the edge weights Approximate Discourse Graph ADG BIBREF2 Approximation of a discourse graph where nodes are sentences and edges SuSv indicates sentence Sv can be placed after Su in a coherent summary Personalized ADG PADG BIBREF3 Normalized version of ADG where sentence nodes are normalized over all edges In order to quantify the contribution of the different components of SemSentSum we try variations of our model by removing different modules one at a time Our two main elements are the sentence encoder Sent and the graph convolutional neural network GCN When we omit Sent we substitute it with the pretrained sentence embeddings used to build our sentence semantic relation graph Three dimensions are used to evaluate our model SemSentSum 1 the summarization performance to assess its capability 2 the impact of the sentence semantic relation graph generation using various methods and different thresholds tsimg 3 an ablation study to analyze the importance of each component of SemSentSum We compare the results of SemSentSum for both settings 665 bytes and 100 words summaries We only include models using the same parameters to compute the ROUGE1ROUGE2 score and recall as metrics The results for 665 bytes summaries are reported in Table TABREF28 We compare SemSentSum with three types of model relying on either 1 sentence or document embeddings 2 various handcrafted features or 3 additional data For the first category we significantly outperform MMR BIBREF18 PVDBOWBS BIBREF19 and PGMMR BIBREF20 Although their methods are based on embeddings to represent the meaning it shows that using only various distance metrics or encoderdecoder architecture on these is not efficient for the task of multidocument summarization as also shown in the Ablation Study We hypothesize that SemSentSum performs better by leveraging pretrained sentence embeddings and hence lowering the effects of data scarcity Systems based on handcrafted features include a widelyused learningbased summarization method built on support vector regression SVR BIBREF21 a graphbased method based on approximating discourse graph GFlow BIBREF2 Peer 65 which is the best peer systems participating in DUC evaluations and the recursive neural network R2N2 of BIBREF1 that learns automatically combinations of handcrafted features As can be seen among these models completely dependent on handcrafted features SemSentSum achieves highest performance on both ROUGE scores This denotes that using 
different linguistic and wordbased features might not be enough to capture the semantic structures in addition to being cumbersome to craft The last type of model is shown in TCSum BIBREF4 and uses transfer learning from a text classifier model based on a domainrelated dataset of 30000 documents from New York Times sharing the same topics of the DUC datasets In terms of ROUGE1 SemSentSum significantly outperforms TCSum and performs similarly on ROUGE2 score This demonstrates that collecting more manually annotated data and training two models is unnecessary in addition to being difficult to use in other domains whereas SemSentSum is fully data driven domainindependent and usable in realistic scenarios Table TABREF32 depicts models producing 100 words summaries all depending on handcrafted features We use as baselines FreqSum BIBREF22 TsSum BIBREF23 traditional graphbased approaches such as Cont LexRank BIBREF9 Centroid BIBREF24 CLASSY04 BIBREF25 its improved version CLASSY11 BIBREF26 and the greedy model GreedyKL BIBREF27 All of these models are significantly underperforming compared to SemSentSum In addition we include stateoftheart models RegSum BIBREF0 and GCNPADG BIBREF3 We outperform both in terms of ROUGE1 For ROUGE2 scores we achieve better results than GCNPADG but without any use of domainspecific handcrafted features and a much smaller and simpler model Finally RegSum achieves a similar ROUGE2 score but computes sentence saliences based on word scores incorporating a rich set of wordlevel and domainspecific features Nonetheless our model is competitive and does not depend on handcrafted features due to its full datadriven nature and thus it is not limited to a single domain Consequently the experiments show that achieving good performance for multidocument summarization without handcrafted features or additional data is clearly feasible and SemSentSum produces competitive results without depending on these is domain independent fast to train and thus usable in real scenarios Table TABREF34 shows the results of different methods to create the sentence semantic relation graph with various thresholds tsimg for 665 bytes summaries we obtain similar results for 100 words A first observation is that using cosine similarity with sentence embeddings significantly outperforms all other methods for ROUGE1 and ROUGE2 scores mainly because it relies on the semantic of sentences instead of their individual words A second is that different methods evolve similarly PADG Textrank Tfidf behave similarly to an Ushaped curve for both ROUGE scores while Cosine is the only one having an inverted Ushaped curve The reason for this behavior is a consequence of its distribution being similar to a normal distribution because it relies on the semantic instead of words while the others are more skewed towards zero This confirms our hypothesis that 1 having a complete graph does not allow the model to leverage much the semantic 2 a sparse graph might not contain enough information to exploit similarities Finally Lexrank and ADG have different trends between both ROUGE scores We quantify the contribution of each module of SemSentSum in Table TABREF36 for 665 bytes summaries we obtain similar results for 100 words Removing the sentence encoder produces slightly lower results This shows that the sentence semantic relation graph captures semantic attributes well while the finetuned sentence embeddings obtained via the encoder help boost the performance making these methods complementary By disabling only the 
graph convolutional layer a drastic drop in terms of performance is observed which emphasizes that the relationship among sentences is indeed important and not present in the data itself Therefore our sentence semantic relation graph is able to capture sentence similarities by analyzing the semantic structures Interestingly if we remove the sentence encoder in addition to the graph convolutional layer similar results are achieved This confirms that alone the sentence encoder is not able to compute an efficient representation of sentences for the task of multidocument summarization probably due to the poor size of the DUC datasets Finally we can observe that the use of sentence embeddings only results in similar performance to the baselines which rely on sentence or document embeddings BIBREF18 BIBREF19 The idea of using multiple embeddings has been employed at the word level BIBREF28 use an attention mechanism to combine the embeddings for each word for the task of natural language inference BIBREF29 BIBREF30 concatenate the embeddings of each word into a vector before feeding a neural network for the tasks of aspect extraction and sentiment analysis To our knowledge we are the first to combine multiple types of sentence embeddings Extractive multidocument summarization has been addressed by a large range of approaches Several of them employ graphbased methods BIBREF31 introduced a crossdocument structure theory as a basis for multidocument summarization BIBREF9 proposed LexRank an unsupervised multidocument summarizer based on the concept of eigenvector centrality in a graph of sentences Other works exploit shallow or deep features from the graphs topology BIBREF32 BIBREF33 BIBREF34 pairs graphbased methods eg random walk with clustering BIBREF35 improved results by using a reinforced random walk model to rank sentences and keep nonredundant ones The system by BIBREF2 does sentence selection while balancing coherence and salience and by building a graph that approximates discourse relations across sentences BIBREF36 Besides graphbased methods other viable approaches include Maximum Marginal Relevance BIBREF37 which uses a greedy approach to select sentences and considers the tradeoff between relevance and redundancy support vector regression BIBREF21 conditional random field BIBREF38 or hidden markov model BIBREF25 Yet other approaches rely on ngrams regression as in BIBREF39 More recently BIBREF1 built a recursive neural network which tries to automatically detect combination of handcrafted features BIBREF4 employ a neural model for text classification on a large manually annotated dataset and apply transfer learning for multidocument summarization afterward The work most closely related to ours is BIBREF3 They create a normalized version of the approximate discourse graph BIBREF2 based on handcrafted features where sentence nodes are normalized over all the incoming edges They then employ a deep neural network composed of a sentence encoder three graph convolutional layers one document encoder and an attention mechanism Afterward they greedily select sentences using TFIDF similarity to detect redundant sentences Our model differs in four ways 1 we build our sentence semantic relation graph by using pretrained sentence embeddings with cosine similarity where neither heavy preprocessing nor handcrafted features are necessary Thus our model is fully datadriven and domainindependent unlike other systems In addition the sentence semantic relation graph could be used for other tasks than 
multidocument summarization, such as detecting information cascades, query-focused summarization, keyphrase extraction, or information retrieval, as it is not composed of handcrafted features. 2) SemSentSum is much smaller and consequently has fewer parameters, as it only uses a sentence encoder and a single convolutional layer. 3) The loss function is based on the ROUGE-1 F1 score instead of recall, to prevent the tendency of choosing longer sentences. 4) Our method for summary generation is also different and novel, as we leverage sentence embeddings to compute the similarity between a candidate sentence and the current summary, instead of TF-IDF based approaches. In this work, we propose a method to combine two types of sentence embeddings: 1) universal embeddings, pre-trained on a large corpus such as Wikipedia and incorporating general semantic structures across sentences, and 2) domain-specific embeddings learned during training. We merge them together by using a graph convolutional network, which eliminates the need for handcrafted features or additional annotated data. We introduce a fully data-driven model, SemSentSum, that achieves competitive results for multi-document summarization for both kinds of summary lengths (665-byte and 100-word summaries) without requiring handcrafted features or additional annotated data. As SemSentSum is domain-independent, we believe that our sentence semantic relation graph and model can be used for other tasks, including detecting information cascades, query-focused summarization, keyphrase extraction, and information retrieval. In addition, we plan to leave the weights of the sentence semantic relation graph dynamic during training and to integrate an attention mechanism directly into the graph. We thank Michaela Benk for proofreading and helpful advice.
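The two central ingredients described above, the cosine-similarity sentence graph with threshold-based edge removal and the single graph-convolution layer with ELU activation, can be sketched as follows. The sketch assumes randomly initialized weights and toy dimensions; the symmetric degree normalization follows the usual Kipf-and-Welling formulation and is an assumption here, as the exact normalization used in the model is not spelled out in this text. The threshold of 0.5 mirrors the tuned value reported above.

```python
import numpy as np

def relation_graph(sent_embs, t_sim_g=0.5):
    """Cosine-similarity graph with edges below t_sim_g removed (edge-removal method)."""
    X = sent_embs / (np.linalg.norm(sent_embs, axis=1, keepdims=True) + 1e-12)
    A = X @ X.T
    A[A < t_sim_g] = 0.0
    np.fill_diagonal(A, 0.0)       # self-connections are added later as the identity
    return A

def gcn_layer(A, X, W, b):
    """One graph-convolution layer with self-connections and ELU activation."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W + b
    return np.where(H > 0, H, np.expm1(H))     # ELU with alpha = 1

# Toy setting: 4 sentences, 300-d embeddings, 128 hidden units (as in the paper).
rng = np.random.default_rng(0)
sent_embs = rng.normal(size=(4, 300))
W = rng.normal(scale=0.05, size=(300, 128))
b = np.zeros(128)
A = relation_graph(sent_embs)
H = gcn_layer(A, sent_embs, W, b)
scores = np.exp(H @ rng.normal(scale=0.05, size=(128,)))
print((scores / scores.sum()).round(3))        # salience scores normalized via softmax
```

In the full model the node features X would be the sentence-encoder outputs rather than raw embeddings, and the salience scores would be trained against per-sentence ROUGE-1 F1 targets.
</s>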
<s>[INST] A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking [/INST] Twitter sentiment classification have intensively researched in recent years BIBREF0 BIBREF1 Different approaches were developed for Twitter sentiment classification by using machine learning such as Support Vector Machine SVM with rulebased features BIBREF2 and the combination of SVMs and Naive Bayes NB BIBREF3 In addition hybrid approaches combining lexiconbased and machine learning methods also achieved high performance described in BIBREF4 However a problem of traditional machine learning is how to define a feature extractor for a specific domain in order to extract important features Deep learning models are different from traditional machine learning methods in that a deep learning model does not depend on feature extractors because features are extracted during training progress The use of deep learning methods becomes to achieve remarkable results for sentiment analysis BIBREF5 BIBREF6 BIBREF7 Some researchers used Convolutional Neural Network CNN for sentiment classification CNN models have been shown to be effective for NLP For example BIBREF6 proposed various kinds of CNN to learn sentimentbearing sentence vectors BIBREF5 adopted two CNNs in characterlevel to sentencelevel representation for sentiment analysis BIBREF7 constructs experiments on a characterlevel CNN for several largescale datasets In addition Long ShortTerm Memory LSTM is another stateoftheart semantic composition model for sentiment classification with many variants described in BIBREF8 The studies reveal that using a CNN is useful in extracting information and finding feature detectors from texts In addition a LSTM can be good in maintaining word order and the context of words However in some important aspects the use of CNN or LSTM separately may not capture enough information Inspired by the models above the goal of this research is using a Deep Convolutional Neural Network DeepCNN to exploit the information of characters of words in order to support wordlevel embedding A BiLSTM produces a sentencewide feature representation based on these embeddings The BiLSTM is a version of BIBREF9 with Full Gradient described in BIBREF10 In addition the rulesbased approach also effects classification accuracy by focusing on important subsentences expressing the main sentiment of a tweet while removing unnecessary parts of a tweet The paper makes the following contributions The organization of the present paper is as follows In section 2 we describe the model architecture which introduces the structure of the model We explain the basic idea of model and the way of constructing the model Section 3 show results and analysis and section 4 summarize this paper Our proposed model consists of a deep learning classifier and a tweet processor The deep learning classifier is a combination of DeepCNN and BiLSTM The tweet processor standardizes tweets and then applies semantic rules on datasets We construct a framework that treats the deep learning classifier and the tweet processor as two distinct components We believe that standardizing data is an important step to achieve high accuracy To formulate our problem in increasing the accuracy of the classifier we illustrate our model in Figure FIGREF4 as follows Tweets are firstly considered via a processor based on preprocessing steps BIBREF0 and the semantic rulesbased method BIBREF11 in order to standardize tweets and capture only important information containing the 
main sentiment of a tweet We use DeepCNN with Wide convolution for characterlevel embeddings A wide convolution can learn to recognize specific ngrams at every position in a word that allows features to be extracted independently of these positions in the word These features maintain the order and relative positions of characters A DeepCNN is constructed by two wide convolution layers and the need of multiple wide convolution layers is widely accepted that a model constructing by multiple processing layers have the ability to learn representations of data with higher levels of abstraction BIBREF12 Therefore we use DeepCNN for characterlevel embeddings to support morphological and shape information for a word The DeepCNN produces INLINEFORM0 global fixedsized feature vectors for INLINEFORM1 words A combination of the global fixedsize feature vectors and wordlevel embedding is fed into BiLSTM The BiLSTM produces a sentencelevel representation by maintaining the order of words Our work is philosophically similar to BIBREF5 However our model is distinguished with their approaches in two aspects Using DeepCNN with two wide convolution layers to increase representation with multiple levels of abstraction Integrating global character fixedsized feature vectors with wordlevel embedding to extract a sentencewide feature set via BiLSTM This deals with three main problems i Sentences have any different size ii The semantic and the syntactic of words in a sentence are captured in order to increase information for a word iii Important information of characters that can appear at any position in a word are extracted In subsection B we introduce various kinds of dataset The modules of our model are constructed in other subsections Stanford Twitter Sentiment Corpus STS Corpus STS Corpus contains 1600K training tweets collected by a crawler from BIBREF0 BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets The Stanford test set is small However it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 Sanders Twitter Sentiment Corpus This dataset consists of handclassified tweets collected by using search terms INLINEFORM0 google microsoft and twitter We construct the dataset as BIBREF14 for binary classification Health Care Reform HCR This dataset was constructed by crawling tweets containing the hashtag hcr BIBREF15 Task is to predict positivenegative tweets BIBREF14 We firstly take unique properties of Twitter in order to reduce the feature space such as Username Usage of links None URLs and Repeated Letters We then process retweets stop words links URLs mentions punctuation and accentuation For emoticons BIBREF0 revealed that the training process makes the use of emoticons as noisy labels and they stripped the emoticons out from their training dataset because BIBREF0 believed that if we consider the emoticons there is a negative impact on the accuracies of classifiers In addition removing emoticons makes the classifiers learns from other features eg unigrams and bigrams presented in tweets and the classifiers only use these nonemoticon features to predict the sentiment of tweets However there is a problem is that if the test set contains emoticons they do not influence the classifiers because emoticon features do not contain in its training data This is a limitation of BIBREF0 because the emoticon features would be useful when classifying test data Therefore we keep emoticon features in the datasets because deep learning models can capture more 
information from emoticon features for increasing classification accuracy In Twitter social networking people express their opinions containing subsentences These subsentences using specific PoS particles Conjunction and Conjunctive adverbs like but while however despite however have different polarities However the overall sentiment of tweets often focus on certain subsentences For example lonedog bwahahahyou are amazing However it was quite the letdown kirstiealley my dentist is great but shes expensive In two tweets above the overall sentiment is negative However the main sentiment is only in the subsentences following but and however This inspires a processing step to remove unessential parts in a tweet Rulebased approach can assists these problems in handling negation and dealing with specific PoS particles led to effectively affect the final output of classification BIBREF11 BIBREF16 BIBREF11 summarized a full presentation of their semantic rules approach and devised ten semantic rules in their hybrid approach based on the presentation of BIBREF16 We use five rules in the semantic rules set because other five rules are only used to compute polarity of words after POS tagging or Parsing steps We follow the same naming convention for rules utilized by BIBREF11 to represent the rules utilized in our proposed method The rules utilized in the proposed method are displayed in Table TABREF15 in which is included examples from STS Corpus and output after using the rules Table TABREF16 illustrates the number of processed sentences on each dataset To construct embedding inputs for our model we use a fixedsized word vocabulary INLINEFORM0 and a fixedsized character vocabulary INLINEFORM1 Given a word INLINEFORM2 is composed from characters INLINEFORM3 the characterlevel embeddings are encoded by column vectors INLINEFORM4 in the embedding matrix INLINEFORM5 where INLINEFORM6 is the size of the character vocabulary For wordlevel embedding INLINEFORM7 we use a pretrained wordlevel embedding with dimension 200 or 300 A pretrained wordlevel embedding can capture the syntactic and semantic information of words BIBREF17 We build every word INLINEFORM8 into an embedding INLINEFORM9 which is constructed by two subvectors the wordlevel embedding INLINEFORM10 and the character fixedsize feature vector INLINEFORM11 of INLINEFORM12 where INLINEFORM13 is the length of the filter of wide convolutions We have INLINEFORM14 character fixedsize feature vectors corresponding to wordlevel embedding in a sentence DeepCNN in the deep learning module is illustrated in Figure FIGREF22 The DeepCNN has two wide convolution layers The first layer extract local features around each character windows of the given word and using a max pooling over character windows to produce a global fixedsized feature vector for the word The second layer retrieves important context characters and transforms the representation at previous level into a representation at higher abstract level We have INLINEFORM0 global character fixedsized feature vectors for INLINEFORM1 words In the next step of Figure FIGREF4 we construct the vector INLINEFORM0 by concatenating the wordlevel embedding with the global character fixedsize feature vectors The input of BiLSTM is a sequence of embeddings INLINEFORM1 The use of the global character fixedsize feature vectors increases the relationship of words in the wordlevel embedding The purpose of this BiLSTM is to capture the context of words in a sentence and maintain the order of words toward to extract 
sentencelevel representation The top of the model is a softmax function to predict sentiment label We describe in detail the kinds of CNN and LSTM that we use in next subpart 1 and 2 The onedimensional convolution called timedelay neural net has a filter vector INLINEFORM0 and take the dot product of filter INLINEFORM1 with each mgrams in the sequence of characters INLINEFORM2 of a word in order to obtain a sequence INLINEFORM3 DISPLAYFORM0 Based on Equation 1 we have two types of convolutions that depend on the range of the index INLINEFORM0 The narrow type of convolution requires that INLINEFORM1 and produce a sequence INLINEFORM2 The wide type of convolution does not require on INLINEFORM3 or INLINEFORM4 and produce a sequence INLINEFORM5 Outofrange input values INLINEFORM6 where INLINEFORM7 or INLINEFORM8 are taken to be zero We use wide convolution for our model Given a word INLINEFORM0 composed of INLINEFORM1 characters INLINEFORM2 we take a character embedding INLINEFORM3 for each character INLINEFORM4 and construct a character matrix INLINEFORM5 as following Equation 2 DISPLAYFORM0 The values of the embeddings INLINEFORM0 are parameters that are optimized during training The trained weights in the filter INLINEFORM1 correspond to a feature detector which learns to recognize a specific class of ngrams The ngrams have size INLINEFORM2 The use of a wide convolution has some advantages more than a narrow convolution because a wide convolution ensures that all weights of filter reach the whole characters of a word at the margins The resulting matrix has dimension INLINEFORM3 Long ShortTerm Memory networks usually called LSTMs are a improved version of RNN The core idea behind LSTMs is the cell state which can maintain its state over time and nonlinear gating units which regulate the information flow into and out of the cell The LSTM architecture that we used in our proposed model is described in BIBREF9 A single LSTM memory cell is implemented by the following composite function DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 is the logistic sigmoid function INLINEFORM1 and INLINEFORM2 are the input gate forget gate output gate cell and cell input activation vectors respectively All of them have a same size as the hidden vector INLINEFORM3 INLINEFORM4 is the hiddeninput gate matrix INLINEFORM5 is the inputoutput gate matrix The bias terms which are added to INLINEFORM6 and INLINEFORM7 have been omitted for clarity In addition we also use the full gradient for calculating with full backpropagation through time BPTT described in BIBREF10 A LSTM gradients using finite differences could be checked and making practical implementations more reliable For regularization we use a constraint on INLINEFORM0 of the weight vectors BIBREF18 For the Stanford Twitter Sentiment Corpus we use the number of samples as BIBREF5 The training data is selected 80K tweets for a training data and 16K tweets for the development set randomly from the training data of BIBREF0 We conduct a binary prediction for STS Corpus For Sander dataset we use standard 10fold cross validation as BIBREF14 We construct the development set by selecting 10 randomly from 9fold training data In Health Care Reform Corpus we also select 10 randomly for the development set in a training set and construct as BIBREF14 for comparison We describe the summary of datasets in Table III for all datasets the filter window size INLINEFORM0 is 7 with 6 feature maps each for the first wide convolution layer the second wide convolution layer has a filter 
window size of 5 with 14 feature maps each. The dropout rate INLINEFORM1 is 0.5, the INLINEFORM2 constraint is applied to the weight vectors, the learning rate is 0.1, and the momentum is 0.9. The mini-batch size for the STS Corpus is 100, and for the other datasets it is 4. In addition, training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule BIBREF19. We use the publicly available Word2Vec vectors trained on 100 billion words from Google and the Twitter GloVe vectors from Stanford, which are trained on aggregated global word-word co-occurrence statistics from a corpus. Word2Vec has a dimensionality of 300 and Twitter GloVe has a dimensionality of 200. Words that are not present in the set of pre-trained words are initialized randomly. Table IV shows the results of our model for sentiment classification against other models. We compare our model's performance with the approaches of BIBREF0 and BIBREF5 on the STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, and SVM on the STS Corpus, which performed well at the time, and the model of BIBREF5, a CharSCNN, is the state of the art so far. As can be seen, 86.63% is the best prediction accuracy of our model so far for the STS Corpus. For the Sanders and HCR datasets, we compare results with the model of BIBREF14, which used an ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM, and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH), and lexicons. The model of BIBREF14 is the state of the art on the Sanders and HCR datasets. Our models outperform the model of BIBREF14 on both the Sanders and the HCR dataset. As can be seen, the models with SR outperform the models without SR; the semantic rules are effective in increasing classification accuracy. We evaluate the efficiency of SR for the model in Table V of our full paper. We also conduct two experiments on two separate models, DeepCNN and BiLSTM, in order to show the effectiveness of the combination of DeepCNN and BiLSTM. In addition, the model using Twitter GloVe outperforms the model using Google W2V, because Twitter GloVe captures more Twitter-specific information than Google W2V. These results show that the character-level information and SR have a great impact on Twitter data. The pre-trained word vectors are good universal feature extractors. The difference between our model and other approaches is the ability of our model to capture important features by using SR and to combine these features to high benefit. The use of DeepCNN allows learning a representation of words at a higher level of abstraction. The combination of the global character fixed-size feature vectors and a word embedding helps the model to find important detectors for particles such as "not" that negate sentiment and for words that potentiate sentiment, such as "too" and "so", standing beside expected features. The model not only learns to recognize single n-grams but also patterns in n-grams that form significant structures of a sentence. In the present work, we have pointed out that the use of character embeddings, fed through a DeepCNN to enrich the word embeddings built on top of Word2Vec or Twitter GloVe, improves classification accuracy in tweet sentiment classification. Our results add to the well-established evidence that character vectors are an important ingredient for word-level representations in deep learning for NLP. In addition, the semantic rules contribute to handling non-essential sub-tweets in order to improve classification accuracy.
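A compact PyTorch sketch of the character-level DeepCNN combined with word embeddings and a recurrent sentence encoder is given below. The filter widths (7 and 5) and feature-map counts (6 and 14) follow the settings stated above; the vocabulary sizes, the character embedding dimension, the use of randomly initialized rather than pre-trained word vectors, the placement of the max-pooling after the second convolution, and the use of a bidirectional LSTM reading from the last hidden state are simplifying assumptions of this sketch, not a faithful reimplementation of the original system.

```python
import torch
import torch.nn as nn

class CharWordSentimentNet(nn.Module):
    """Minimal sketch: two wide character convolutions + word embeddings + Bi-LSTM."""
    def __init__(self, n_chars=64, n_words=10000, char_dim=32,
                 word_dim=300, lstm_dim=128, n_classes=2):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.word_emb = nn.Embedding(n_words, word_dim, padding_idx=0)
        # Two stacked "wide" convolutions over characters (padding keeps word margins).
        self.conv1 = nn.Conv1d(char_dim, 6, kernel_size=7, padding=6)
        self.conv2 = nn.Conv1d(6, 14, kernel_size=5, padding=4)
        self.bilstm = nn.LSTM(word_dim + 14, lstm_dim, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * lstm_dim, n_classes)

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, n_words, n_chars); word_ids: (batch, n_words)
        b, n_words, n_chars = char_ids.shape
        c = self.char_emb(char_ids).view(b * n_words, n_chars, -1).transpose(1, 2)
        c = torch.relu(self.conv1(c))
        c = torch.relu(self.conv2(c))
        c = c.max(dim=2).values.view(b, n_words, -1)   # global fixed-size vector per word
        x = torch.cat([self.word_emb(word_ids), c], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h[:, -1])                      # sentence-level prediction logits

model = CharWordSentimentNet()
chars = torch.randint(1, 64, (2, 9, 12))
words = torch.randint(1, 10000, (2, 9))
print(model(chars, words).shape)                       # torch.Size([2, 2])
```

In practice the word embedding layer would be initialized with the pre-trained Word2Vec or Twitter GloVe vectors, and the semantic-rule preprocessing would be applied to the tweets before they are converted to character and word indices.
</s>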
<s>[INST] Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding [/INST] Knowledge graphs KGs such as Freebase BIBREF0 DBpedia BIBREF1 and YAGO BIBREF2 play a critical role in various NLP tasks including question answering BIBREF3 information retrieval BIBREF4 and personalized recommendation BIBREF5 A typical KG consists of numerous facts about a predefined set of entities Each fact is in the form of a triplet INLINEFORM0 or INLINEFORM1 for short where INLINEFORM2 and INLINEFORM3 are two entities and INLINEFORM4 is a relation the fact describes Due to the discrete and incomplete natures of KGs various KG embedding models are proposed to facilitate KG completion tasks eg link prediction and triplet classification After vectorizing entities and relations in a lowdimensional space those models predict missing facts by manipulating the involved entity and relation embeddings Although proving successful in previous studies traditional KG embedding models simply ignore the evolving nature of KGs They require all entities to be present when training the embeddings However BIBREF6 shi2018open suggest that on DBpedia 200 new entities emerge on a daily basis between late 2015 and early 2016 Given the infeasibility of retraining embeddings from scratch whenever new entities come missing facts about emerging entities are unfortunately not guaranteed to be inferred in time By transforming realistic networks eg citation graphs social networks and protein interaction graphs to simple graphs with singletyped and undirected edges recent explorations BIBREF7 shed light on the evolution issue for homogeneous graphs While learning embeddings for existing nodes they inductively learn a neighborhood aggregator that represents a node by aggregating its neighbors embeddings The embeddings of unseen nodes can then be obtained by applying the aggregator on their existing neighbors It is well received that KGs differ from homogeneous graphs by their multirelational structure BIBREF8 Despite the difference it seems promising to generalize the neighborhood aggregating scheme to embed emerging KG entities in an inductive manner For example in Figure FIGREF1 a news article may describe an emerging entity marked gray as well as some facts involving existing entities By generalizing structural information in the underlying KG eg other entities residing in a similar neighborhood or involving similar relations to the current entitys neighborhood we can infer that it may probably live in Chicago Inspired by the above example the inductive KG embedding problem boils down to designing a KGspecific neighborhood aggregator to capture essential neighborhood information Intuitively an ideal aggregator should have the following desired properties This paper concentrates on KGspecific neighborhood aggregators which is of practical importance but only received limited focus BIBREF9 To the best of our knowledge neither conventional aggregators for homogeneous graphs nor those for KGs satisfy all the above three properties In this regard we employ the attention mechanism BIBREF10 and propose an aggregator called Logic Attention Network LAN Aggregating neighbors by a weighted combination of their transformed embeddings LAN is inherently permutation invariant To estimate the attention weights in LAN we adopt two mechanisms to model relation and neighborlevel information in a coarsetofine manner At both levels LAN is made aware of both neighborhood redundancy and query relation To summarize our 
contributions are 1 We propose three desired properties that decent neighborhood aggregators for KGs should possess 2 We propose a novel aggregator ie Logic Attention Network to facilitate inductive KG embedding 3 We conduct extensive comparisons with conventional aggregators on two KG completions tasks The results validate the superiority of LAN wrt the three properties In recent years representation learning problems on KGs have received much attention due to the wide applications of the resultant entity and relation embeddings Typical KG embedding models include TransE BIBREF11 Distmult BIBREF12 Complex BIBREF13 Analogy BIBREF14 to name a few For more explorations we refer readers to an extensive survey BIBREF15 However conventional approaches on KG embedding work in a transductive manner They require that all entities should be seen during training Such limitation hinders them from efficiently generalizing to emerging entities To relieve the issue of emerging entities several inductive KG embedding models are proposed including BIBREF16 xie2016representation BIBREF6 shi2018open and BIBREF17 xie2016image which use description text or images as inputs Although the resultant embeddings may be utilized for KG completion it is not clear whether the embeddings are powerful enough to infer implicit or new facts beyond those expressed in the textimage Moreover when domain experts are recruited to introduce new entities via partial facts rather than text or images those approaches may not help much In light of the above scenario existing neighbors of an emerging entity are considered as another type of input for inductive models In BIBREF9 ijcai2017250 the authors propose applying Graph Neural Network GNN on the KG which generates the embedding of a new entity by aggregating all its known neighbors However their model aggregates the neighbors via simple pooling functions which neglects the difference among the neighbors Other works like BIBREF18 fu2017hin2vec and BIBREF19 tang2015pte aim at embedding nodes for node classification given the entire graph and thus are inapplicable for inductive KGspecific tasks BIBREF20 schlichtkrull2017modeling and BIBREF21 xiong2018one also rely on neighborhood structures to embed entities but they either work transductively or focus on emerging relations Finally we note another related line of studies on node representation learning for homogeneous graphs Similar to text or imagebased inductive models for KGs BIBREF22 duran2017learning BIBREF23 yang2016revisiting BIBREF24 velivckovic2017graph and BIBREF25 rossi2018deep exploit additional node attributes to embed unseen nodes Another work more related to ours is BIBREF26 hamilton2017inductive They tackle inductive node embedding by the neighborhood aggregation scheme Their aggregators either trivially treat neighbors equally or unnecessarily require them to be ordered Moreover like all embedding models for homogeneous graphs their model cannot be directly applied to KGs with multirelational edges Let INLINEFORM0 and INLINEFORM1 be two sets of entities and relations of size INLINEFORM2 and INLINEFORM3 respectively A knowledge graph is composed of a set of triplet facts namely DISPLAYFORM0 For each INLINEFORM0 we denote the reverse of INLINEFORM1 by INLINEFORM2 and add an additional triplet INLINEFORM3 to INLINEFORM4 For an entity INLINEFORM0 we denote by INLINEFORM1 its neighborhood in INLINEFORM2 ie all related entities with the involved relations Formally DISPLAYFORM0 We denote the projection of INLINEFORM0 on 
INLINEFORM1 and INLINEFORM2 by INLINEFORM3 and INLINEFORM4 respectively Here INLINEFORM5 are neighbors and INLINEFORM6 are neighboring relations When the context is clear we simplify the INLINEFORM7 th entity INLINEFORM8 by its subscript INLINEFORM9 We denote vectors by bold lower letters and matrices or sets of vectors by bold upper letters Given a knowledge graph INLINEFORM0 we would like to learn a neighborhood aggregator INLINEFORM1 that acts as follows For an entity INLINEFORM0 on INLINEFORM1 INLINEFORM2 depends on INLINEFORM3 s neighborhood INLINEFORM4 to embed INLINEFORM5 as a lowdimensional vector INLINEFORM6 For an unknown triplet INLINEFORM0 the embeddings of INLINEFORM1 and INLINEFORM2 output by INLINEFORM3 suggest the plausibility of the triplet When a new entity emerges with some triplets involving INLINEFORM0 and INLINEFORM1 we could apply such an aggregator INLINEFORM2 on its newly established neighborhood and use the output embedding to infer new facts about it To obtain such a neighborhood aggregator INLINEFORM0 we adopt an encoderdecoder framework as illustrated by Figure FIGREF12 Given a training triplet the encoder INLINEFORM1 encodes INLINEFORM2 and INLINEFORM3 into two embeddings with INLINEFORM4 The decoder measures the plausibility of the triplet and provides feedbacks to the encoder to adjust the parameters of INLINEFORM5 In the remainder of this section we describe general configurations of the two components As specified in Figure FIGREF12 for an entity INLINEFORM0 on focus the encoder works on a collection of input neighbor embeddings and output INLINEFORM1 s embedding To differentiate between input and output embeddings we use superscripts INLINEFORM2 and INLINEFORM3 on the respective vectors Let INLINEFORM4 which is obtained from an embedding matrix INLINEFORM5 be the embedding of a neighbor INLINEFORM6 where INLINEFORM7 To reflect the impact of relation INLINEFORM8 on INLINEFORM9 we apply a relationspecific transforming function INLINEFORM10 on INLINEFORM11 as follows DISPLAYFORM0 where INLINEFORM0 is the transforming vector for relation INLINEFORM1 and is restricted as a unit vector We adopt this transformation from BIBREF27 wang2014knowledge since it does not involve matrix product operations and is of low computation complexity After neighbor embeddings are transformed these transformed embeddings are fed to the aggregator INLINEFORM0 to output an embedding INLINEFORM1 for the target entity INLINEFORM2 ie DISPLAYFORM0 By definition an aggregator INLINEFORM0 essentially takes as input a collection of vectors INLINEFORM1 INLINEFORM2 and maps them to a single vector With this observation the following two types of functions seem to be natural choices for neighborhood aggregators and have been adopted previously Pooling Functions A typical pooling function is meanpooling which is defined by INLINEFORM0 Besides meanpooling other previously adopted choices include sum and maxpooling BIBREF9 Due to their simple forms pooling functions are permutationinvariant but consider the neighbors equally It is aware of neither potential redundancy in the neighborhood nor the query relations Recurrent Neural Networks RNNs In various natural language processing tasks RNNs prove effective in modeling sequential dependencies In BIBREF26 the authors adopt an RNN variant LSTM BIBREF28 as neighborhood aggregator ie INLINEFORM0 To train and apply the LSTMbased aggregator they have to randomly permute the neighbors which violates the permutation variance property Given the subject 
and object embeddings INLINEFORM0 and INLINEFORM1 output by the encoder the decoder is required to measure the plausibility of the training triplet To avoid potential mixture with relations INLINEFORM2 in the neighborhood we refer to the relation in the training triplet by query relation and denote it by INLINEFORM3 instead After looking up INLINEFORM4 s representation INLINEFORM5 from an embedding matrix INLINEFORM6 the decoder scores the training triplet INLINEFORM7 with a scoring function INLINEFORM8 Following BIBREF9 ijcai2017250 we mainly investigate a scoring function based on TransE BIBREF11 defined by DISPLAYFORM0 where INLINEFORM0 denotes the L1 norm To test whether the studied aggregators generalize among different scoring function we will also consider several alternatives in experiments As discussed above traditional neighborhood aggregators do not preserve all desired properties In this section we describe a novel aggregator namely Logic Attention Network LAN which addresses all three properties We also provide details in training the LAN aggregator Traditional neighborhood aggregators only depend on collections of transformed embeddings They neglect other useful information in the neighborhood INLINEFORM0 and the query relation INLINEFORM1 which may facilitate more effective aggregation of the transformed embeddings To this end we propose generalizing the aggregators from INLINEFORM2 to INLINEFORM3 Specifically for an entity INLINEFORM0 its neighbors INLINEFORM1 should contribute differently to INLINEFORM2 according to its importance in representing INLINEFORM3 To consider the different contribution while preserving the permutation invariance property we employ a weighted or attentionbased aggregating approach on the transformed embeddings The additional information in INLINEFORM4 and INLINEFORM5 is then exploited when estimating the attention weights Formally we obtain INLINEFORM6 by DISPLAYFORM0 Here INLINEFORM0 is the attention weight specified for each neighbor INLINEFORM1 given INLINEFORM2 and the query relation INLINEFORM3 To assign larger weights INLINEFORM0 to more important neighbors from the perspective of INLINEFORM1 we ask ourselves two questions at progressive levels 1 What types of neighboring relations may lead us to potentially important neighbors 2 Following those relations which specific neighbor in transformed embedding may contain important information Inspired by the two questions we adopt the following two mechanisms to estimate INLINEFORM2 Relations in a KG are simply not independent of each other For an entity INLINEFORM0 one neighboring relation INLINEFORM1 may imply the existence of another neighboring relation INLINEFORM2 though they may not necessarily connect INLINEFORM3 to the same neighbor For example a neighboring relation playfor may suggest the home city ie livein of the current athlete entity Following notations in logics we denote potential dependency between INLINEFORM4 and INLINEFORM5 by a logic rule INLINEFORM6 To measure the extent of such dependency we define the confidence of a logic rule INLINEFORM7 as follows DISPLAYFORM0 Here the function INLINEFORM0 equals 1 when INLINEFORM1 is true and 0 otherwise As an empirical statistic over the entire KG INLINEFORM2 is larger if more entities with neighboring relation INLINEFORM3 also have INLINEFORM4 as a neighboring relation With the confidence scores INLINEFORM0 between all relation pairs at hand we are ready to characterize neighboring relations INLINEFORM1 that lead to important 
neighbors On one hand such a relation INLINEFORM2 should have a large INLINEFORM3 ie it is statistically relevant to INLINEFORM4 Following the above example playfor should be consulted to if the query relation is livein On the other hand INLINEFORM5 should not be implied by other relations in the neighborhood For example no matter whether the query relation is livein or not the neighboring relation workas should not be assigned too much weight because sufficient information is already provided by playfor Following the above intuitions we implement the logic rule mechanism of measuring neighboring relations usefulness as follow DISPLAYFORM0 We note that INLINEFORM0 promotes relations INLINEFORM1 strongly implying INLINEFORM2 the numerator and demotes those implied by some other relation in the same neighborhood the denominator In this manner our logic rule mechanism addresses both query relation awareness and neighborhood redundancy awareness With global statistics about relations the logic rule mechanism guides the attention weight to be distributed at a coarse granularity of relations However it may be insufficient not to consult finergrained information hidden in the transformed neighbor embeddings to determine which neighbor is important indeed To take the transformed embeddings into consideration we adopt an attention network BIBREF10 Specifically given a query relation INLINEFORM0 the importance of an entity INLINEFORM1 s neighbor INLINEFORM2 is measured by DISPLAYFORM0 Here the unnormalized attention weight INLINEFORM0 is given by an attention neural network as DISPLAYFORM0 In this equation INLINEFORM0 and INLINEFORM1 are global attention parameters while INLINEFORM2 is a relationspecific attention parameter for the query relation INLINEFORM3 All those attention parameters are regarded as parameters of the encoder and learned directly from the data Note that unlike the logic rule mechanism at relation level the computation of INLINEFORM0 concentrates more on the neighbor INLINEFORM1 itself This is useful when the neighbor entity INLINEFORM2 is also helpful to explain the current training triplet For example in Figure FIGREF12 the neighbor ChicagoBulls could help to imply the object of livein since there are other athletes playing for ChicagoBulls while living in Chicago Although working at the neighbor level the dependency on transformed neighbor embeddings INLINEFORM3 and the relationspecific parameter INLINEFORM4 make INLINEFORM5 aware of both neighborhood redundancy and the query relation Finally to incorporate these two weighting mechanisms together in measuring the importance of neighbors we employ a doubleview attention and reformulate Eq EQREF22 as DISPLAYFORM0 To train the entire model in Figure FIGREF12 we need both positive triplets and negative ones All triplets INLINEFORM0 from the knowledge graph naturally serve as positive triplets which we denote by INLINEFORM1 To make up for the absence of negative triplets for each INLINEFORM2 we randomly corrupt the object or subject but not both by another entity in INLINEFORM3 and denote the corresponding negative triplets by INLINEFORM4 Formally DISPLAYFORM0 To encourage the decoder to give high scores for positive triplets and low scores for negative ones we apply a marginbased ranking loss on each triplet INLINEFORM0 ie DISPLAYFORM0 Here INLINEFORM0 denotes the positive part of x and INLINEFORM1 is a hyperparameter for the margin Finally the training objective is defined by DISPLAYFORM0 The above training objective only 
optimizes the output of the aggregator ie the output entity embeddings INLINEFORM0 The input entity embeddings INLINEFORM1 however are not directly aware of the structure of the entire KG To make the input embeddings and thus the aggregation more meaningful we set up a subtask for LAN First we define a second scoring function which is similar to Eq EQREF20 except that input embeddings INLINEFORM0 from INLINEFORM1 are used to represent the subject and object ie DISPLAYFORM0 The embedding of query relation INLINEFORM0 is obtained from the same embedding matrix INLINEFORM1 as in the first scoring function Then a similar marginbased ranking loss INLINEFORM2 as Eq EQREF32 is defined for the subtask Finally we combine the subtask with the main task and reformulate the overall training objective of LAN as DISPLAYFORM0 We evaluate the effectiveness of our LAN model on two typical knowledge graph completion tasks ie link prediction and triplet classification We compare our LAN with two baseline aggregators MEAN and LSTM as described in the Encoder section MEAN is used on behalf of pooling functions since it leads to the best performance in BIBREF9 ijcai2017250 LSTM is used due to its large expressive capability BIBREF26 In both tasks we need datasets whose test sets contain new entities unseen during training For the task of triplet classification we directly use the datasets released by BIBREF9 ijcai2017250 which are based on WordNet11 BIBREF29 Since they do not conduct experiments on the link prediction task we construct the required datasets based on FB15K BIBREF11 following a similar protocol used in BIBREF9 ijcai2017250 as follows Sampling unseen entities Firstly we randomly sample INLINEFORM0 of the original testing triplets to form a new test set INLINEFORM1 for our inductive scenario BIBREF9 ijcai2017250 samples INLINEFORM2 testing triplets Then two different strategies are used to construct the candidate unseen entities INLINEFORM6 One is called Subject where only entities appearing as the subjects in INLINEFORM7 are added to INLINEFORM8 Another is called Object where only objects in INLINEFORM9 are added to INLINEFORM10 For an entity INLINEFORM11 if it does not have any neighbor in the original training set such an entity is filtered out yielding the final unseen entity set INLINEFORM12 For a triplet INLINEFORM13 if INLINEFORM14 or INLINEFORM15 it is removed from INLINEFORM16 Filtering and splitting data sets The second step is to ensure that unseen entities would not appear in final training set or validation set We split the original training set into two data sets the new training set and auxiliary set For a triplet INLINEFORM0 in original training set if INLINEFORM1 it is added to the new training set If INLINEFORM2 or INLINEFORM3 it is added to the auxiliary set which serves as existing neighbors for unseen entities in INLINEFORM4 Finally for a triplet INLINEFORM0 in the original validation set if INLINEFORM1 or INLINEFORM2 it is removed from the validation set The statistics for the resulting INLINEFORM0 datasets using Subject and Object strategies are in Table TABREF34 Triplet classification aims at classifying a fact triplet INLINEFORM0 as true or false In the dataset of BIBREF9 ijcai2017250 triplets in the validation and testing sets are labeled as true or false while triplets in the training set are all true ones To tackle this task we preset a threshold INLINEFORM0 for each relation r If INLINEFORM1 the triplet is classified as positive otherwise it is negative We determine the 
optimal INLINEFORM2 by maximizing classification accuracy on the validation set Since this task is also conducted in BIBREF9 ijcai2017250 we use the same configurations with learning rate INLINEFORM0 embedding dimension INLINEFORM1 and margin INLINEFORM2 for all datasets We randomly sample 64 neighbors for each entity Zero padding is used when the number of neighbors is less than 64 L2regularization is applied on the parameters of LAN The regularization rate is INLINEFORM3 We search the best hyperparameters of all models according to the performance on validation set In detail we search learning rate INLINEFORM0 in INLINEFORM1 embedding dimension for neighbors INLINEFORM2 in INLINEFORM3 and margin INLINEFORM4 in INLINEFORM5 The optimal configurations are INLINEFORM6 for all the datasets The results are reported in Table TABREF42 Since we did not achieve the same results for MEAN as reported in BIBREF9 ijcai2017250 with either our implementation or their released source code the best results from their original paper are reported From the table we observe that on one hand LSTM results in poorer performance compared with MEAN which involves fewer parameters though This demonstrates the necessity of the permutation invariance for designing neighborhood aggregators for KGs On the other hand our LAN model consistently achieves the best results on all datasets demonstrating the effectiveness of LAN on this KBC task Link prediction in the inductive setting aims at reasoning the missing part in a triplet when given INLINEFORM0 or INLINEFORM1 with emerging entities INLINEFORM2 or INLINEFORM3 respectively To tackle the task we firstly hide the object subject of each testing triplet in SubjectR ObjectR to produce a missing part Then we replace the missing part with all entities in the entity set INLINEFORM4 to construct candidate triplets We compute the scoring function INLINEFORM5 defined in Eq EQREF20 for all candidate triplets and rank them in descending order Finally we evaluate whether the groundtruth entities are ranked ahead of other entities We use traditional evaluation metrics as in the KG completion literature ie Mean Rank MR Mean Reciprocal Rank MRR and the proportion of ground truth entities ranked topk Hitsk INLINEFORM6 Since certain candidate triplets might also be true we follow previous works and filter out these fake negatives before ranking The results on Subject10 and Object10 are reported in Table TABREF43 The results on other datasets are similar and we summarize them later in Figure FIGREF50 From Table TABREF43 we still observe consistent results for all the models as in the triplet classification task Firstly LSTM results in the poorest performance on all datasets Secondly our LAN model outperforms all the other baselines significantly especially on the Hitk metrics The improvement on the MR metric of LAN might not be considerable This is due to the flaw of the MR metric since it is more sensitive to lower positions of the ranking which is actually of less importance The MRR metric is proposed for this reason where we could observe consistent improvements brought by LAN The effectiveness of LAN on link prediction validates LANs superiority to other aggregators and the necessities to treat the neighbors differently in a permutation invariant way To analyze whether LAN outperforms the others for expected reasons and generalizes to other configurations we conduct the following studies In this experiment we would like to confirm that its necessary for the aggregator to be aware of 
the query relation Specifically we investigate the attention neural network and design two degenerated baselines One is referred to as QueryAttention and is simply an attention network as in LAN except that the logic rule mechanism is removed The other is referred to as GlobalAttention which is also an attention network except that the query relation embedding INLINEFORM0 in Eq EQREF28 is masked by a zero vector The results are reported in Table TABREF46 We observe that although superior to MEAN GlobalAttention is outperformed by QueryAttention demonstrating the necessity of query relation awareness The superiority of GlobalAttention over MEAN could be attributed to the fact that the attention mechanism is effective to identify the neighbors which are globally important regardless of the query We find that the logic rules greatly help to improve the attention network in LAN We confirm this point by conducting further experiments where the logic rule mechanism is isolated as a single model referred to as Logic Rules Only The results are also demonstrated in Table TABREF46 from which we find that QueryAttention outperforms MEAN by a limited margin Meanwhile Logic Rules Only outperforms both MEAN and QueryAttention by significant margins These results demonstrate the effectiveness of logic rules in assigning meaningful weights to the neighbors Specifically in order to generate representations for unseen entities it is crucial to incorporate the logic rules to train the aggregator instead of depending solely on neural networks to learn from the data By combining the logic rules and neural networks LAN takes a step further in outperforming all the other models To find out whether the superiority of LAN to the baselines can generalize to other scoring functions we replace the scoring function in Eq EQREF20 and Eq EQREF36 by three typical scoring functions mentioned in Related Works We omit the results of LSTM for it is still inferior to MEAN The results are listed in Table TABREF48 from which we observe that with different scoring functions LAN outperforms MEAN consistently by a large margin on all the evaluation metrics Note that TransE leads to the best results on MEAN and LAN Its reasonable to suppose that when the ratio of the unseen entities over the training entities increases namely the observed knowledge graph becomes sparser all models performance would deteriorate To figure out whether our LAN could suffer less on sparse knowledge graphs we conduct link prediction on datasets with different sample rates INLINEFORM0 as described in Step 1 of the Data Construction section The results are displayed in Figure FIGREF50 We observe that the increasing proportion of unseen entities certainly has a negative impact on all models However the performance of LAN does not decrease as drastically as that of MEAN and LSTM indicating that LAN is more robust on sparse KGs In order to visualize how LAN specifies weights to neighbors we sample some cases from the Subject10 testing set From Table FIGREF50 we have the following observations First with the query relation LAN could attribute higher weights to neighbors with more relevant relations In the first case when the query is origin the top two neighbors are involved by placelived and breedorigin which are helpful to imply origin In addition in all three cases neighbors with relation gender gain the lowest weights since they imply nothing about the query relation Second LAN could attribute higher weights to neighbor entities that are more informative 
When the query relation is profession the neighbors Aristotle Metaphysics and Aesthetics are all relevant to the answer Philosopher In the third case we also observe similar situations Here the neighbor with the highest weight is institution UniversityofCalgary since the query relation placelived helps the aggregator to focus on the neighboring relation institution then the neighbor entity UniversityofCalgary assists in locating the answer Calgary In this paper we address inductive KG embedding which helps embed emerging entities efficiently We formulate three characteristics required for effective neighborhood aggregators To meet the three characteristics we propose LAN which attributes different weights to an entitys neighbors in a permutation invariant manner considering both the redundancy of neighbors and the query relation The weights are estimated from data with logic rules at a coarse relation level and neural attention network at a fine neighbor level Experiments show that LAN outperforms baseline models significantly on two typical KG completion tasks We thank the three anonymous authors for their constructive comments This work is supported by the National Natural Science Foundation of China 61472453 U1401256 U1501252 U1611264 U1711261 U1711262 </s>
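As a concrete illustration of the relation-level statistic behind LAN's logic rule mechanism described above, the following Python sketch estimates conf(r1 => r2) from a list of triplets as the fraction of entities that have r1 as a neighboring relation and also have r2. This is an independent sketch of the described statistic, not the authors' code; the toy triplets, the string suffix used for reverse relations, and the exact normalization are illustrative assumptions.

from collections import defaultdict
from itertools import combinations

def rule_confidence(triplets):
    """Estimate conf(r1 => r2): among entities having r1 as a neighboring
    relation, the fraction that also have r2 (a sketch of the empirical
    statistic described in the paper; normalization details are assumed)."""
    neigh_rels = defaultdict(set)
    for h, r, t in triplets:
        neigh_rels[h].add(r)
        neigh_rels[t].add(r + "_inv")        # reverse relations, as the paper adds them
    count_r = defaultdict(int)               # entities having relation r
    count_pair = defaultdict(int)            # entities having both r1 and r2
    for rels in neigh_rels.values():
        for r in rels:
            count_r[r] += 1
        for r1, r2 in combinations(sorted(rels), 2):
            count_pair[(r1, r2)] += 1
            count_pair[(r2, r1)] += 1
    return {(r1, r2): c / count_r[r1] for (r1, r2), c in count_pair.items()}

conf = rule_confidence([("lebron", "plays_for", "lakers"),
                        ("lebron", "lives_in", "los_angeles"),
                        ("giannis", "plays_for", "bucks"),
                        ("giannis", "lives_in", "milwaukee")])
print(conf.get(("plays_for", "lives_in")))   # 1.0 in this toy KG

In LAN, a high score such as conf(plays_for => lives_in) would promote the plays_for neighbor when the query relation is lives_in, while relations implied by other relations in the same neighborhood are demoted.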
<s>[INST] Learning with Noisy Labels for Sentence-level Sentiment Classification [/INST] It is well known that sentiment annotation or labeling is subjective BIBREF0 Annotators often have many disagreements This is especially so for crowdworkers who are not well trained That is why one always feels that there are many errors in an annotated dataset In this paper we study whether it is possible to build accurate sentiment classifiers even with noisylabeled training data Sentiment classification aims to classify a piece of text according to the polarity of the sentiment expressed in the text eg positive or negative BIBREF1 BIBREF0 BIBREF2 In this work we focus on sentencelevel sentiment classification SSC with labeling errors As we will see in the experiment section noisy labels in the training data can be highly damaging especially for DNNs because they easily fit the training data and memorize their labels even when training data are corrupted with noisy labels BIBREF3 Collecting datasets annotated with clean labels is costly and timeconsuming as DNN based models usually require a large number of training examples Researchers and practitioners typically have to resort to crowdsourcing However as mentioned above the crowdsourced annotations can be quite noisy Research on learning with noisy labels dates back to 1980s BIBREF4 It is still vibrant today BIBREF5 BIBREF6 BIBREF7 BIBREF8 BIBREF9 BIBREF10 BIBREF11 BIBREF12 as it is highly challenging We will discuss the related work in the next section This paper studies the problem of learning with noisy labels for SSC Formally we study the following problem Problem Definition Given noisy labeled training sentences Slbrace x1y1xnynrbrace where xii1n is the ith sentence and yiin lbrace 1crbrace is the sentiment label of this sentence the noisy labeled sentences are used to train a DNN model for a SSC task The trained model is then used to classify sentences with clean labels to one of the c sentiment labels In this paper we propose a convolutional neural Network with Abnetworks NetAb to deal with noisy labels during training as shown in Figure FIGREF2 We will introduce the details in the subsequent sections Basically NetAb consists of two convolutional neural networks CNNs see Figure FIGREF2 one for learning sentiment scores to predict clean labels and the other for learning a noise transition matrix to handle input noisy labels We call the two CNNs Anetwork and Abnetwork respectively The fundamental here is that 1 DNNs memorize easy instances first and gradually adapt to hard instances as training epochs increase BIBREF3 BIBREF13 and 2 noisy labels are theoretically flipped from the cleantrue labels by a noise transition matrix BIBREF14 BIBREF15 BIBREF16 BIBREF17 We motivate and propose a CNN model with a transition layer to estimate the noise transition matrix for the input noisy labels while exploiting another CNN to predict clean labels for the input training and test sentences In training we pretrain Anetwork in early epochs and then train Abnetwork and Anetwork with their own loss functions in an alternating manner To our knowledge this is the first work that addresses the noisy label problem in sentencelevel sentiment analysis Our experimental results show that the proposed model outperforms the stateoftheart methods Our work is related to sentence sentiment classification SSC SSC has been studied extensively BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF27 BIBREF28 None of them can handle noisy 
labels. Since many social media datasets are noisy, researchers have tried to build robust models BIBREF29 BIBREF30 BIBREF31. However, they treat noisy data as additional information and do not specifically handle noisy labels. A noise-aware classification model in BIBREF12 trains using data annotated with multiple labels. BIBREF32 exploited the connection of users and noisy sentiment labels in social networks. Since these two works use multiple-labeled data or user information, whereas we use only single-labeled data and no additional information, they have different settings than ours.

Our work is closely related to DNN-based approaches to learning with noisy labels, which have explored three main directions: (1) training DNNs on selected samples BIBREF33 BIBREF34 BIBREF35 BIBREF17, (2) modifying the loss function of DNNs with regularization biases BIBREF5 BIBREF36 BIBREF37 BIBREF38 BIBREF39 BIBREF40, and (3) plugging an extra layer into DNNs BIBREF14 BIBREF41 BIBREF15 BIBREF16. All these approaches were proposed for image classification, where training images were corrupted with noisy labels, and some of them require the noise rate to be known a priori in order to tune their models during training BIBREF37 BIBREF17. Our approach combines directions (1) and (3) and trains two networks jointly without knowing the noise rate. We use five recent existing methods in our experiments for SSC, and the experimental results show that they are inferior to our proposed method. In addition, BIBREF42 BIBREF43 BIBREF44 BIBREF45 BIBREF46 and BIBREF47 studied weakly-supervised or semi-supervised DNNs, but they still need some clean-labeled training data; we use no clean-labeled data.

Our model builds on CNN BIBREF25. The key idea is to train two CNNs alternately, one for addressing the input noisy labels and the other for predicting clean labels. The overall architecture of the proposed model is given in Figure FIGREF2. Before going further, we first introduce a proposition, a property, and an assumption.

Proposition 1: Noisy labels are flipped from clean labels by an unknown noise transition matrix. Proposition UNKREF3 is reformulated from BIBREF16 and has been investigated in BIBREF14 BIBREF15 BIBREF41. It shows that if we know the noise transition matrix, we can use it to recover the clean labels; in other words, we can apply the noise transition matrix to clean labels in order to deal with noisy labels. Given this, we ask the following question: how can such an unknown noise transition matrix be estimated? Below we give a solution based on the following property of DNNs.

Property 1: DNNs tend to prioritize memorization of simple instances first and then gradually memorize hard instances BIBREF3. BIBREF13 further investigated this property of DNNs. In our setting, simple instances are sentences with clean labels and hard instances are those with noisy labels. We also make the following assumption.

Assumption 1: The noise rate of the training data is less than 50%. This assumption is usually satisfied in practice because without it, it is hard to tackle the input noisy labels during training.

Based on the above preliminaries, we need to estimate the noise transition matrix Q in R^{c x c} (c = 2 in our case, i.e., positive and negative) and train two classifiers, a noisy-label classifier P(noisy label | x; theta) and a clean-label classifier P(clean label | x; vartheta), where x is an input sentence and theta and vartheta are the parameters of the two classifiers. Note that both predicted labels here are the prediction results from our model, not the input labels. We formulate the probability of the sentence x being labeled as j as P(noisy label = j | x) = sum over i of P(noisy label = j | clean label = i) * P(clean label = i | x; vartheta), where P(noisy label = j | clean label = i) is the (j, i)-th item of the noise transition matrix Q. We can see that the noise transition matrix Q is applied to the clean scores P(clean label | x; vartheta) to tackle noisy labels.

We now present our model NetAb and introduce how it computes Eq. DISPLAYFORM6. As shown in Figure FIGREF2, NetAb consists of two CNNs. The intuition is that we use one CNN to compute the clean scores P(clean label = i | x; vartheta) and another CNN to compute the noisy scores P(noisy label = j | x; theta); meanwhile, the CNN computing the noisy scores also estimates the noise transition matrix Q to deal with noisy labels, so we add a transition layer to this CNN. More precisely, in Figure FIGREF2 the CNN with the clean loss computes the clean scores; we call it the A-network. The other CNN, with the noisy loss, computes the noisy scores; we call it the Ab-network. The Ab-network shares all the parameters of the A-network except the parameters of the Gate unit and the clean loss, and it additionally has a transition layer to estimate the noise transition matrix Q. In this way, the A-network predicts clean labels and the Ab-network handles the input noisy labels.

We use cross-entropy between the predicted noisy labels and the input labels y given in the dataset to compute the noisy loss, where the indicator function equals 1 if the predicted label matches y and 0 otherwise, and the normalizer is the number of sentences used to train the Ab-network in each batch. Similarly, we use cross-entropy between the predicted clean labels and the input labels y to compute the clean loss, where the normalizer is the number of sentences used to train the A-network in each batch.

Next we introduce how our model learns the parameters vartheta, theta, and Q. An embedding matrix v is produced for each sentence x by looking up a pretrained word embedding database, e.g., GloVe.840B BIBREF48. Then encoding vectors h = CNN(v) and u = CNN(v) are produced for each embedding matrix v in the A-network and Ab-network, respectively. A softmax classifier gives us the clean sentiment scores P(clean label = i | x; vartheta) from the learned encoding vector h. As the noise transition matrix Q indicates the transition values from clean labels to noisy labels, we compute Q with a transition layer whose trainable parameter matrix Wi and trainable parameter vectors bi and fi are learned in the Ab-network. Finally, the noisy scores P(noisy label = j | x; theta) are computed by Eq. DISPLAYFORM6.

In training, NetAb is trained end-to-end. Based on Proposition UNKREF3 and Property UNKREF4, we pretrain the A-network in early epochs (e.g., 5 epochs). Then we train the Ab-network and A-network in an alternating manner, each with its respective cross-entropy loss. Given a batch of sentences, we first train the Ab-network. Then we use the scores predicted by the A-network to select some likely clean sentences from this batch and train the A-network on the selected sentences. Specifically, we use these predicted scores to compute sentiment labels by taking the class with the maximum score, and we select the sentences whose resulting sentiment label equals the input label. This selection process is marked by a Gate unit in Figure FIGREF2. When testing a sentence, we use the A-network to produce the final classification result.

In this section we evaluate the performance of the proposed NetAb model. We conduct two types of experiments: (1) we corrupt clean-labeled datasets to produce noisy-labeled datasets to show the impact of noise on sentiment classification accuracy, and (2) we collect some real noisy data and use them to train models to evaluate
the performance of NetAb Cleanlabeled Datasets We use three clean labeled datasets The first one is the movie sentence polarity dataset from BIBREF19 The other two datasets are laptop and restaurant datasets collected from SemEval2016 The former consists of laptop review sentences and the latter consists of restaurant review sentences The original datasets ie Laptop and Restaurant were annotated with aspect polarity in each sentence We used all sentences with only one polarity positive or negative for their aspects That is we only used sentences with aspects having the same sentiment label in each sentence Thus the sentiment of each aspect gives the groundtruth as the sentiments of all aspects are the same For each cleanlabeled dataset the sentences are randomly partitioned into training set and test set with 80 and 20 respectively Following BIBREF25 We also randomly select 10 of the test data for validation to check the model during training Summary statistics of the training validation and test data are shown in Table TABREF9 Noisylabeled Training Datasets For the above three domains movie laptop and restaurant we collected 2000 reviews for each domain from the same review source We extracted sentences from each review and assigned reviews label to its sentences Like previous work we treat 4 or 5 stars as positive and 1 or 2 stars as negative The data is noisy because a positive negative review can contain negative positive sentences and there are also neutral sentences This gives us three noisylabeled training datasets We still use the same test sets as those for the cleanlabeled datasets Summary statistics of all the datasets are shown in Table TABREF9 Experiment 1 Here we use the cleanlabeled data ie the last three columns in Table TABREF9 We corrupt the clean training data by switching the labels of some random instances based on a noise rate parameter Then we use the corrupted data to train NetAb and CNN BIBREF25 The test accuracy curves with the noise rates 0 01 02 03 04 05 are shown in Figure FIGREF13 From the figure we can see that the test accuracy drops from around 08 to 05 when the noise rate increases from 0 to 05 but our NetAb outperforms CNN The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing Experiment 2 Here we use the real noisylabeled training data to train our model and the baselines and then test on the test data in Table TABREF9 Our goal is two fold First we want to evaluate NetAb using real noisy data Second we want to see whether sentences with review level labels can be used to build effective SSC models Baselines We use one strong nonDNN baseline NBSVM with unigrams or bigrams features BIBREF23 and six DNN baselines The first DNN baseline is CNN BIBREF25 which does not handle noisy labels The other five were designed to handle noisy labels The comparison results are shown in Table TABREF12 From the results we can make the following observations 1 Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop The results demonstrate the superiority of NetAb 2 NetAb outperforms the baselines designed for learning with noisy labels These baselines are inferior to ours as they were tailored for image classification Note that we found no existing method to deal with noisy labels for SSC Training Details We use the publicly available pretrained embedding GloVe840B BIBREF48 to initialize the word vectors and the embedding dimension is 300 For each baseline we obtain the system 
from its author and use its default parameters. As the DNN baselines (except CNN) were proposed for image classification, we change the number of input channels from 3 to 1. For our NetAb, we follow BIBREF25 and use window sizes of 3, 4, and 5 words with 100 feature maps per window size, resulting in 300-dimensional encoding vectors. The input sentence length is set to 40. The network parameters are updated using the Adam optimizer BIBREF49 with a learning rate of 0.001. The learning rate is clipped gradually using a norm of 0.96 in performing the Adam optimization. The dropout rate is 0.5 in the input layer. The number of epochs is 200 and the batch size is 50.

This paper proposed a novel CNN-based model for sentence-level sentiment classification that learns from data with noisy labels. The proposed model learns to handle noisy labels during training by training two networks alternately, and the learned noise transition matrices are used to tackle the noisy labels. Experimental results showed that the proposed model outperforms a wide range of baselines markedly. We believe that learning with noisy labels is a promising direction, as it is often easy to collect noisy-labeled training data. Hao Wang and Yan Yang's work was partially supported by a grant from the National Natural Science Foundation of China (No. 61572407). </s>
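The following PyTorch sketch illustrates the core idea behind NetAb's transition layer described above: clean class probabilities from the A-network are mapped to noisy-label probabilities through a learned c x c transition matrix Q, and the result is trained against the given (noisy) labels. The softmax-over-columns parameterization, the near-identity initialization, and the loss call are illustrative assumptions, not the paper's exact formulation, which computes Q with its own trainable matrix and vectors.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseTransitionHead(nn.Module):
    """Minimal sketch of a transition layer: turn clean class probabilities
    into noisy-label probabilities via a learned c x c matrix Q.
    The parameterization here is an assumption for illustration."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.logits_Q = nn.Parameter(torch.eye(num_classes) * 3.0)  # near-identity init

    def forward(self, clean_probs):                  # (batch, c) from the A-network
        Q = F.softmax(self.logits_Q, dim=0)          # Q[j, i] = P(noisy=j | clean=i)
        return clean_probs @ Q.t()                   # (batch, c) noisy-label probabilities

head = NoiseTransitionHead()
clean = torch.tensor([[0.9, 0.1]])                   # A-network says "positive"
noisy = head(clean)                                  # scores used in the noisy loss
loss = F.nll_loss(torch.log(noisy + 1e-8), torch.tensor([1]))  # given noisy label

In the full model, a head like this would sit on top of the Ab-network's encoder, while the A-network is trained only on the sentences whose predicted label agrees with the given label (the Gate unit).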
<s>[INST] Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange [/INST] There has been significant research on style transfer with the goal of changing the style of text while preserving its semantic content The alternative where semantics are adjusted while keeping style intact which we call semantic text exchange STE has not been investigated to the best of our knowledge Consider the following example where the replacement entity defines the new semantic context Original Text It is sunny outside Ugh that means I must wear sunscreen I hate being sweaty and sticky all over Replacement Entity weather rainy Desired Text It is rainy outside Ugh that means I must bring an umbrella I hate being wet and having to carry it around The weather within the original text is sunny whereas the actual weather may be rainy Not only is the word sunny replaced with rainy but the rest of the texts content is changed while preserving its negative sentiment and fluency With the rise of natural language processing NLP has come an increased demand for massive amounts of text data Manually collecting and scraping data requires a significant amount of time and effort and data augmentation techniques for NLP are limited compared to fields such as computer vision STE can be used for text data augmentation by producing various modifications of a piece of text that differ in semantic content Another use of STE is in building emotionally aligned chatbots and virtual assistants This is useful for reasons such as marketing overall enjoyment of interaction and mental health therapy However due to limited data with emotional content in specific semantic contexts the generated text may contain incorrect semantic content STE can adjust text semantics eg to align with reality or a specific task while preserving emotions One specific example is the development of virtual assistants with adjustable socioemotional personalities in the effort to construct assistive technologies for persons with cognitive disabilities Adjusting the emotional delivery of text in subtle ways can have a strong effect on the adoption of the technologies BIBREF0 It is challenging to transfer style this subtly due to lack of datasets on specific topics with consistent emotions Instead large datasets of emotionally consistent interactions not confined to specific topics exist Hence it is effective to generate text with a particular emotion and then adjust its semantics We propose a pipeline called SMERTI pronounced smarty for STE Combining entity replacement ER similarity masking SM and text infilling TI SMERTI can modify the semantic content of text We define a metric called the Semantic Text Exchange Score STES that evaluates the overall ability of a model to perform STE and an adjustable parameter masking replacement rate threshold MRTRRT that can be used to control the amount of semantic change We evaluate on three datasets Yelp and Amazon reviews BIBREF1 and Kaggle news headlines BIBREF2 We implement three baseline models for comparison Noun WordNet Semantic Text Exchange Model NWNSTEM General WordNet Semantic Text Exchange Model GWNSTEM and Word2Vec Semantic Text Exchange Model W2VSTEM We illustrate the STE performance of two SMERTI variations on the datasets demonstrating outperformance of the baselines and pipeline stability We also run a human evaluation supporting our results We analyze the results in detail and investigate relationships between the semantic change fluency sentiment and MRTRRT Our major 
contributions can be summarized as We define a new task called semantic text exchange STE with increasing importance in NLP applications that modifies text semantics while preserving other aspects such as sentiment We propose a pipeline SMERTI capable of multiword entity replacement and text infilling and demonstrate its outperformance of baselines We define an evaluation metric for overall performance on semantic text exchange called the Semantic Text Exchange Score STES Word2Vec BIBREF3 BIBREF4 allows for analogy representation through vector arithmetic We implement a baseline W2VSTEM using this technique The Universal Sentence Encoder USE BIBREF5 encodes sentences and is trained on a variety of web sources and the Stanford Natural Language Inference corpus BIBREF6 Flair embeddings BIBREF7 are based on architectures such as BERT BIBREF8 We use USE for SMERTI as it is designed for transfer learning and shows higher performance on textual similarity tasks compared to other models BIBREF9 Text infilling is the task of filling in missing parts of sentences called masks MaskGAN BIBREF10 is restricted to a single word per mask token while SMERTI is capable of variable length infilling for more flexible output BIBREF11 uses a transformerbased architecture They fill in random masks while SMERTI fills in masks guided by semantic similarity resulting in more natural infilling and fulfillment of the STE task Notable works in stylesentiment transfer include BIBREF12 BIBREF13 BIBREF14 BIBREF15 They attempt to learn latent representations of various text aspects such as its context and attributes or separate style from content and encode them into hidden representations They then use an RNN decoder to generate a new sentence given a targeted sentiment attribute BIBREF16 generates fake reviews from scratch using language models BIBREF17 BIBREF18 BIBREF19 generate reviews from scratch given auxiliary information eg the item category and star rating BIBREF20 generates reviews using RNNs with two components generation from scratch and review customization Algorithm 2 in BIBREF20 They define review customization as modifying the generated review to fit a new topic or context such as from a Japanese restaurant to an Italian one They condition on a keyword identifying the desired context and replace similar nouns with others using WordNet BIBREF21 They require a reference dataset required to be on topic easy enough for restaurant reviews but less so for arbitrary conversational agents As noted by BIBREF19 the method of BIBREF20 may also replace words independently of context We implement their review customization algorithm NWNSTEM and a modified version GWNSTEM as baseline models The task is to transform a corpus C of lines of text Si and associated replacement entities REiC lbrace S1RE1S2RE2ldots Sn REnrbrace to a modified corpus hatC lbrace hatS1hatS2ldots hatSnrbrace where hatSi are the original text lines Si replaced with REi and overall semantics adjusted SMERTI consists of the following modules shown in Figure FIGREF15 Entity Replacement Module ERM Identify which words within the original text are best replaced with the RE which we call the Original Entity OE We replace OE in S with RE We call this modified text Sprime Similarity Masking Module SMM Identify wordsphrases in Sprime similar to OE and replace them with a mask Group adjacent masks into a single one so we can fill a variable length of text into each We call this masked text Sprime prime Text Infilling Module TIM Fill in mask tokens with 
text that better suits the RE This will modify semantics in the rest of the text This final output text is called hatS For entity replacement we use a combination of the Universal Sentence Encoder BIBREF5 and Stanford Parser BIBREF22 The Stanford Parser is a constituency parser that determines the grammatical structure of sentences including phrases and partofspeech POS labelling By feeding our RE through the parser we are able to determine its parsetree Iterating through the parsetree and its subtrees we can obtain a list of constituent tags for the RE We then feed our input text S through the parser and through a similar process we can obtain a list of leaves where leaves under a single label are concatenated that are equal or similar to any of the RE constituent tags This generates a list of entities having the same or similar grammatical structure as the RE and are likely candidates for the OE We then feed these entities along with the RE into the Universal Sentence Encoder USE The USE is a sentencelevel embedding model that comes with a deep averaging network DAN and transformer model BIBREF5 We choose the transformer model as these embeddings take context into account and the exact same wordphrase will have a different embedding depending on its context and surrounding words We compute the semantic similarity between two embeddings u and v simuv using the angular cosine distance defined as cos theta uv ucdot vu v such that simuv 1frac1pi arccoscos theta uv Results are in 01 with higher values representing greater similarity Using USE and the above equation we can identify wordsphrases within the input text S which are most similar to RE To assist with this we use the Stanford Parser as described above to obtain a list of candidate entities In the rare case that this list is empty we feed in each word of S into USE and identify which word is the most similar to RE We then replace the most similar entity or word OE with the RE and generate Sprime An example of this entity replacement process is in Figure FIGREF18 Two parsetrees are shown for RE a and S b and c Figure FIGREF18d is a semantic similarity heatmap generated from the USE embeddings of the candidate OEs and RE where values are similarity scores in the range 01 As seen in Figure FIGREF18d we calculate semantic similarities between RE and entities within S which have noun constituency tags Looking at the row for our RE restaurant the most similar entity excluding itself is hotel We can then generate Sprime i love this restaurant the beds are comfortable and the service is great Next we mask words similar to OE to generate Sprime prime using USE We look at semantic similarities between every word in S and OE along with semantic similarities between OE and the candidate entities determined in the previous ERM step to broaden the range of phrases our module can mask We ignore RE OE and any entities or phrases containing OE for example this hotel After determining words similar to the OE discussed below we replace each of them with a mask token Next we replace mask tokens adjacent to each other with a single mask We set a base similarity threshold ST that selects a subset of words to mask We compare the actual fraction of masked words to the masking rate threshold MRT as defined by the user and increase ST in intervals of 005 until the actual masking rate falls below the MRT Some sample masked outputs Sprime prime using various MRTST combinations for the previous example are shown in Table TABREF21 more examples in Appendix A The 
MRT is similar to the temperature parameter used to control the novelty of generated text in works such as BIBREF20 A high MRT means the user wants to generate text very semantically dissimilar to the original and may be desired in cases such as creating a lively chatbot or correcting text that is heavily incorrect semantically A low MRT means the user wants to generate text semantically similar to the original and may be desired in cases such as text recovery grammar correction or correcting a minor semantic error in text By varying the MRT various pieces of text that differ semantically in subtle ways can be generated assisting greatly with text data augmentation The MRT also affects sentiment and fluency as we show in Section SECREF59 We use two seq2seq models for our TIM an RNN recurrent neural network model BIBREF23 called SMERTIRNN and a transformer model called SMERTITransformer We use a bidirectional variant of the GRU BIBREF24 and hence two RNNs for the encoder one reads the input sequence in standard sequential order and the other is fed this sequence in reverse The outputs are summed at each time step giving us the ability to encode information from both past and future context The decoder generates the output in a sequential tokenbytoken manner To combat information loss we implement the attention mechanism BIBREF25 We use a Luong attention layer BIBREF26 which uses global attention where all the encoders hidden states are considered and use the decoders current timestep hidden state to calculate attention weights We use the dot score function for attention where ht is the current target decoder state and barhs is all encoder states scorehtbarhshtTbarhs Our second model makes use of the transformer architecture and our implementation replicates BIBREF27 We use an encoderdecoder structure with a multihead selfattention token decoder to condition on information from both past and future context It maps a query and set of keyvalue pairs to an output The queries and keys are of dimension dk and values of dimension dv To compute the attention we pack a set of queries keys and values into matrices Q K and V respectively The matrix of outputs is computed as Multihead attention allows the model to jointly attend to information from different positions The decoder can make use of both local and global semantic information while filling in each mask We train our two TIMs on the three datasets The Amazon dataset BIBREF1 contains over 83 million user reviews on products with duplicate reviews removed The Yelp dataset includes over six million user reviews on businesses The news headlines dataset from Kaggle contains approximately 200000 news headlines from 2012 to 2018 obtained from HuffPost BIBREF2 We filter the text to obtain reviews and headlines which are English do not contain hyperlinks and other obvious noise and are less than 20 words long We found that many longer than twenty words ramble on and are too verbose for our purposes Rather than filtering by individual sentences we keep each text in its entirety so SMERTI can learn to generate multiple sentences at once We preprocess the text by lowercasing and removing rareduplicate punctuation and space For Amazon and Yelp we treat reviews greater than three stars as containing positive sentiment equal to three stars as neutral and less than three stars as negative For each training and testing set we include an equal number of randomly selected positive and negative reviews and half as many neutral reviews This is because neutral 
reviews only occupy one out of five stars compared to positive and negative which occupy two each Our dataset statistics can be found in Appendix B To set up our training and testing data for text infilling we mask the text We use a tiered masking approach for each dataset we randomly mask 15 of the words in onethird of the lines 30 of the words in another onethird and 45 in the remaining onethird These masked texts serve as the inputs while the original texts serve as the groundtruth This allows our TIM models to learn relationships between masked words and relationships between masked and unmasked words The bidirectional RNN decoder fills in blanks one by one with the objective of minimizing the cross entropy loss between its output and the groundtruth We use a hidden size of 500 two layers for the encoder and decoder teacherforcing ratio of 10 learning rate of 00001 dropout of 01 batch size of 64 and train for up to 40 epochs For the transformer we use scaled dotproduct attention and the same hyperparameters as BIBREF27 We use the Adam optimizer BIBREF28 with beta 1 09 beta 2 098 and epsilon 109 As in BIBREF27 we increase the learningrate linearly for the first warmupsteps training steps and then decrease the learningrate proportionally to the inverse square root of the step number We set factor1 and use warmupsteps 2000 We use a batch size of 4096 and we train for up to 40 epochs We implement three models to benchmark against First is NWNSTEM Algorithm 2 from BIBREF20 We use the training sets as the reference review sets to extract similar nouns to the RE using MINsim 01 We then replace nouns in the text similar to the RE with nouns extracted from the associated reference review set Secondly we modify NWNSTEM to work for verbs and adjectives and call this GWNSTEM From the reference review sets we extract similar nouns verbs and adjectives to the RE using MINsim 01 where the RE is now not restricted to being a noun We replace nouns verbs and adjectives in the text similar to the RE with those extracted from the associated reference review set Lastly we implement W2VSTEM using Gensim BIBREF29 We train unigram Word2Vec models for single word REs and fourgram models for phrases Models are trained on the training sets We use cosine similarity to determine the most similar wordphrase in the input text to RE which is the replaced OE For all other wordsphrases we calculate wiprime wi wOE wRE where wi is the original wordphrases embedding vector wOE is the OEs wRE is the REs and wiprime is the resulting embedding vector The replacement wordphrase is wiprime s nearest neighbour We use similarity thresholds to adjust replacement rates RR and produce text under various replacement rate thresholds RRT We manually select 10 nouns 10 verbs 10 adjectives and 5 phrases from the top 10 most frequent wordsphrases in each test set as our evaluation REs We filter the verbs and adjectives through a list of sentiment words BIBREF30 to ensure we do not choose REs that would obviously significantly alter the texts sentiment For each evaluation RE we choose onehundred lines from the corresponding test set that does not already contain RE We choose lines with at least five words as many with less carry little semantic meaning eg Great It is okay For Amazon and Yelp we choose 50 positive and 50 negative lines per RE We repeat this process three times resulting in three sets of 1000 lines per dataset per POS excluding phrases and three sets of 500 lines per dataset for phrases Our final results are averaged 
metrics over these three sets For SMERTITransformer SMERTIRNN and W2VSTEM we generate four outputs per text for MRTRRT of 20 40 60 and 80 which represent upperbounds on the percentage of the input that can be masked andor replaced Note that NWNSTEM and GWNSTEM can only evaluate on limited POS and their maximum replacement rates are limited We select MINsim values of 0075 and 0 for nouns and 01 and 0 for verbs as these result in replacement rates approximately equal to the actual MRRR of the other models outputs for 20 and 40 MRTRRT respectively Fluency SLOR We use syntactic logodds ratio SLOR BIBREF31 for sentence level fluency and modify from their wordlevel formula to characterlevel SLORc We use Flair perplexity values from a language model trained on the One Billion Words corpus BIBREF32 where S and w are the character lengths of the input text S and the word w respectively pMS and pMw are the probabilities of S and w under the language model M respectively and PPLS and PPLw are the characterlevel perplexities of S and w respectively SLOR from hereon we refer to characterlevel SLOR as simply SLOR measures aspects of text fluency such as grammaticality Higher values represent higher fluency We rescale resulting SLOR values to the interval 01 by first fitting and normalizing a Gaussian distribution We then truncate normalized data points outside 33 which shifts approximately 069 of total data Finally we divide each data point by six and add 05 to each result Sentiment Preservation Accuracy SPA is defined as the percentage of outputs that carry the same sentiment as the input We use VADER BIBREF33 to evaluate sentiment as positive negative or neutral It handles typos emojis and other aspects of online text Content Similarity Score CSS ranges from 0 to 1 and indicates the semantic similarity between generated text and the RE A value closer to 1 indicates stronger semantic exchange as the output is closer in semantic content to the RE We also use the USE for this due to its design and strong performance as previously mentioned We come up with a single score to evaluate overall performance of a model on STE that combines the key evaluation metrics It uses the harmonic mean similar to the F1 score or Fscore BIBREF34 BIBREF35 and we call it the Semantic Text Exchange Score STES where A is SPA B is SLOR and C is CSS STES ranges between 0 and 1 with scores closer to 1 representing higher overall performance Like the F1 score STES penalizes models which perform very poorly in one or more metrics and favors balanced models achieving strong results in all three Table TABREF38 shows overall average results by model Table TABREF41 shows outputs for a Yelp example As observed from Table TABREF41 see also Appendix F SMERTI is able to generate high quality output text similar to the RE while flowing better than other models outputs It can replace entire phrases and sentences due to its variable length infilling Note that for nouns the outputs from GWNSTEM and NWNSTEM are equivalent We conduct a human evaluation with eight participants 6 males and 2 females that are affiliated project researchers aged 2039 at the University of Waterloo We randomly choose one evaluation line for a randomly selected word or phrase for each POS per dataset The input text and each models output for 40 MRTRRT chosen as a good middle ground for each line is presented to participants resulting in a total of 54 pieces of text and rated on the following criteria from 15 RE Match How related is the entire text to the concept of X 
where X is a word or phrase 1 not at all related 3 somewhat related 5 very related Note here that X is a given RE Fluency Does the text make sense and flow well 1 not at all 3 somewhat 5 very Sentiment How do you think the author of the text was feeling 1 very negative 3 neutral 5 very positive Each participant evaluates every piece of text They are presented with a single piece of text at a time with the order of models POS and datasets completely randomized Average human evaluation scores are displayed in Table TABREF50 Sentiment Preservation between 0 and 1 is calculated by comparing the average Sentiment rating for each models output text to the Sentiment rating of the input text and if both are less than 25 negative between 25 and 35 inclusive neutral or greater than 35 positive this is counted as a valid case of Sentiment Preservation We repeat this for every evaluation line to calculate the final values per model Harmonic means of all three metrics using rescaled 01 values of RE Match and Fluency are also displayed As seen in Table TABREF38 both SMERTI variations achieve higher STES and outperform the other models overall with the WordNet models performing the worst SMERTI excels especially on fluency and content similarity The transformer variation achieves slightly higher SLOR while the RNN variation achieves slightly higher CSS The WordNet models perform strongest in sentiment preservation SPA likely because they modify little of the text and only verbs and nouns They achieve by far the lowest CSS likely in part due to this limited text replacement They also do not account for context and many words eg proper nouns do not exist in WordNet Overall the WordNet models are not very effective at STE W2VSTEM achieves the lowest SLOR especially for higher RRT as supported by the example in Table TABREF41 see also Appendix F W2VSTEM and WordNet models output grammatically incorrect text that flows poorly In many cases words are repeated multiple times We analyze the average Type Token Ratio TTR values of each models outputs which is the ratio of unique divided by total words As shown in Table TABREF52 the SMERTI variations achieve the highest TTR while W2VSTEM and NWNSTEM the lowest Note that while W2VSTEM achieves lower CSS than SMERTI it performs comparably in this aspect This is likely due to its vector arithmetic operations algorithm which replaces each word with one more similar to the RE This is also supported by the lower TTR as W2VSTEM frequently outputs the same words multiple times As seen in Table TABREF50 the SMERTI variations outperform all baseline models overall particularly in RE Match SMERTITransformer performs the best with SMERTIRNN second The WordNet models achieve high Sentiment Preservation but much lower on RE Match W2VSTEM achieves comparably high RE Match but lowest Fluency These results correspond well with our automatic evaluation results in Table TABREF38 We look at the Pearson correlation values between RE Match Fluency and Sentiment Preservation with CSS SLOR and SPA respectively These are 09952 09327 and 08768 respectively demonstrating that our automatic metrics are highly effective and correspond well with human ratings As seen from Table TABREF55 SMERTIs SPA values are highest for nouns likely because they typically carry little sentiment and lowest for adjectives likely because they typically carry the most SLOR is lowest for adjectives and highest for phrases and nouns Adjectives typically carry less semantic meaning and SMERTI likely has more trouble 
figuring out how best to infill the text In contrast nouns typically carry more and phrases the most since they consist of multiple words SMERTIs CSS is highest for phrases then nouns likely due to phrases and nouns carrying more semantic meaning making it easier to generate semantically similar text Both SMERTIs and the input texts CSS are lowest for adjectives likely because they carry little semantic meaning Overall SMERTI appears to be more effective on nouns and phrases than verbs and adjectives As seen in Table TABREF58 SMERTIs SPA is lowest for news headlines Amazon and Yelp reviews naturally carry stronger sentiment likely making it easier to generate text with similar sentiment Both SMERTIs and the input texts SLOR appear to be lower for Yelp reviews This may be due to many reasons such as more typos and emojis within the original reviews and so forth SMERTIs CSS values are slightly higher for news headlines This may be due to them typically being shorter and carrying more semantic meaning as they are designed to be attention grabbers Overall it seems that using datasets which inherently carry more sentiment will lead to better sentiment preservation Further the quality of the datasets original text unsurprisingly influences the ability of SMERTI to generate fluent text From Table TABREF60 it can be seen that as MRTRRT increases SMERTIs SPA and SLOR decrease while CSS increases These relationships are very strong as supported by the Pearson correlation values of 09972 09183 and 09078 respectively When SMERTI can alter more text it has the opportunity to replace more related to sentiment while producing more of semantic similarity to the RE Further SMERTI generates more of the text itself becoming less similar to the humanwritten input resulting in lower fluency To further demonstrate this we look at average SMERTI BLEU BIBREF36 scores against MRTRRT shown in Table TABREF60 BLEU generally indicates how close two pieces of text are in content and structure with higher values indicating greater similarity We report our final BLEU scores as the average scores of 1 to 4grams As expected BLEU decreases as MRTRRT increases and this relationship is very strong as supported by the Pearson correlation value of 09960 It is clear that MRTRRT represents a tradeoff between CSS against SPA and SLOR It is thus an adjustable parameter that can be used to control the generated text and balance semantic exchange against fluency and sentiment preservation We introduced the task of semantic text exchange STE demonstrated that our pipeline SMERTI performs well on STE and proposed an STES metric for evaluating overall STE performance SMERTI outperformed other models and was the most balanced overall We also showed a tradeoff between semantic exchange against fluency and sentiment preservation which can be controlled by the masking replacement rate threshold Potential directions for future work include adding specific methods to control sentiment and finetuning SMERTI for preservation of persona or personality Experimenting with other text infilling models eg finetuning BERT BIBREF8 is also an area of exploration Lastly our human evaluation is limited in size and a larger and more diverse participant pool is needed We conclude by addressing potential ethical misuses of STE including assisting in the generation of spam and fakereviewsnews These risks come with any intelligent chatbot work but we feel that the benefits including usage in the detection of misuse such as fakenews greatly outweigh the risks 
and help progress NLP and AI research We thank our anonymous reviewers study participants and Huawei Technologies Co Ltd for financial support </s>
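To make the entity replacement and masking steps of SMERTI described above more concrete, the sketch below shows the angular cosine similarity used with the USE embeddings, sim(u, v) = 1 - arccos(cos θ(u, v)) / π, and the loop that raises the similarity threshold ST in steps of 0.05 until the fraction of masked words falls below the masking rate threshold MRT. The starting threshold, mask token, and function names are illustrative assumptions, not SMERTI's released code.

```python
import numpy as np

def angular_similarity(u, v):
    # sim(u, v) = 1 - arccos(cos_theta(u, v)) / pi, with results in [0, 1]
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    cos_theta = np.clip(cos_theta, -1.0, 1.0)  # guard against rounding error
    return 1.0 - np.arccos(cos_theta) / np.pi

def mask_under_mrt(words, sims_to_oe, mrt, base_st=0.4, step=0.05, mask="[mask]"):
    # Raise ST until the actual masking rate falls below the MRT,
    # then collapse adjacent mask tokens into a single mask.
    st = base_st
    while True:
        masked = [mask if s >= st else w for w, s in zip(words, sims_to_oe)]
        if sum(t == mask for t in masked) / len(words) <= mrt:
            break
        st += step
    collapsed = []
    for t in masked:
        if not (t == mask and collapsed and collapsed[-1] == mask):
            collapsed.append(t)
    return collapsed
```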
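The TIM description above leaves the attention formulas implicit. As a reference sketch (assuming PyTorch; this is not the paper's implementation): the Luong dot score is score(h_t, h̄_s) = h_t^T h̄_s, and the transformer's scaled dot-product attention is Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

```python
import torch
import torch.nn.functional as F

def luong_dot_attention(decoder_state, encoder_states):
    # decoder_state: (batch, hidden); encoder_states: (batch, src_len, hidden)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    weights = F.softmax(scores, dim=1)               # attention over source steps
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
    return context, weights

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = torch.matmul(Q, K.transpose(-2, -1)) / d_k ** 0.5
    return torch.matmul(F.softmax(scores, dim=-1), V)
```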
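The W2V-STEM baseline's replacement rule w_i' = w_i - w_OE + w_RE can be sketched with Gensim as below. The similarity thresholds (RRT) used to control replacement rates and the four-gram phrase models are omitted here; `keyed_vectors` is assumed to be the `wv` attribute of a trained Word2Vec model.

```python
def w2v_stem_replace(words, oe, re_, keyed_vectors):
    # Shift every in-vocabulary word by (w_RE - w_OE) and substitute
    # its nearest neighbour in the embedding space.
    shift = keyed_vectors[re_] - keyed_vectors[oe]
    out = []
    for w in words:
        if w == oe:
            out.append(re_)                      # the OE itself becomes the RE
        elif w in keyed_vectors:
            shifted = keyed_vectors[w] + shift   # w_i' = w_i - w_OE + w_RE
            nearest = keyed_vectors.similar_by_vector(shifted, topn=1)[0][0]
            out.append(nearest)
        else:
            out.append(w)
    return out
```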
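For the evaluation metrics above: STES is the harmonic mean of SPA, SLOR, and CSS; TTR is unique words over total words; and the reported BLEU is the average of 1- to 4-gram scores. A small sketch follows, assuming NLTK and assuming the 1- to 4-gram scores are individual n-gram BLEU values rather than cumulative ones.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def stes(spa, slor, css, eps=1e-9):
    # Harmonic mean of the three metrics, each rescaled to [0, 1]
    return 3.0 / (1.0 / (spa + eps) + 1.0 / (slor + eps) + 1.0 / (css + eps))

def type_token_ratio(text):
    toks = text.lower().split()
    return len(set(toks)) / len(toks) if toks else 0.0

def avg_bleu_1_to_4(reference, hypothesis):
    ref, hyp = [reference.split()], hypothesis.split()
    smooth = SmoothingFunction().method1
    weights = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
    scores = [sentence_bleu(ref, hyp, weights=w, smoothing_function=smooth)
              for w in weights]
    return sum(scores) / len(scores)
```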
<s>[INST] CN-CELEB: a challenging Chinese speaker recognition dataset [/INST] Speaker recognition including identification and verification aims to recognize claimed identities of speakers After decades of research performance of speaker recognition systems has been vastly improved and the technique has been deployed to a wide range of practical applications Nevertheless the present speaker recognition approaches are still far from reliable in unconstrained conditions where uncertainties within the speech recordings could be arbitrary These uncertainties might be caused by multiple factors including free text multiple channels environmental noises speaking styles and physiological status These uncertainties make the speaker recognition task highly challenging BIBREF0 BIBREF1 Researchers have devoted much effort to address the difficulties in unconstrained conditions Early methods are based on probabilistic models that treat these uncertainties as an additive Gaussian noise JFA BIBREF2 BIBREF3 and PLDA BIBREF4 are the most famous among such models These models however are shallow and linear and therefore cannot deal with the complexity of reallife applications Recent advance in deep learning methods offers a new opportunity BIBREF5 BIBREF6 BIBREF7 BIBREF8 Resorting to the power of deep neural networks DNNs in representation learning these methods can remove unwanted uncertainties by propagating speech signals through the DNN layer by layer and retain speakerrelevant features only BIBREF9 Significant improvement in robustness has been achieved by the DNNbased approach BIBREF10 which makes it more suitable for applications in unconstrained conditions The success of DNNbased methods however largely relies on a large amount of data in particular data that involve the true complexity in unconstrained conditions Unfortunately most existing datasets for speaker recognition are collected in constrained conditions where the acoustic environment channel and speaking style do not change significantly for each speaker BIBREF11 BIBREF12 BIBREF13 These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions To address this shortage in datasets researchers have started to collect data in the wild The most successful wild dataset may be VoxCeleb BIBREF14 BIBREF15 which contains millions of utterances from over thousands of speakers The utterances were collected from opensource media using a fully automated pipeline based on computer vision techniques in particular face detection tracking and recognition plus videoaudio synchronization The automated pipeline is almost costless and thus greatly improves the efficiency of data collection In this paper we reimplement the automated pipeline of VoxCeleb and collect a new largescale speaker dataset named CNCeleb Compared with VoxCeleb CNCeleb has three distinct features CNCeleb specially focuses on Chinese celebrities and contains more than 130000 utterances from 1000 persons CNCeleb covers more genres of speech We intentionally collected data from 11 genres including entertainment interview singing play movie vlog live broadcast speech drama recitation and advertisement The speech of a particular speaker may be in more than 5 genres As a comparison most of the utterances in VoxCeleb were extracted from interview videos The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions but also more challenging CNCeleb is not 
fully automated but involves human check We found that more complex the genre is more errors the automated pipeline tends to produce Ironically the errorpron segments could be highly valuable as they tend to be boundary samples We therefore choose a twostage strategy that employs the automated pipeline to perform preselection and then perform human check The rest of the paper is organized as follows Section SECREF2 presents a detailed description for CNCeleb and Section SECREF3 presents more quantitative comparisons between CNCeleb and VoxCeleb on the speaker recognition task Section SECREF4 concludes the entire paper The original purpose of the CNCeleb dataset is to investigate the true difficulties of speaker recognition techniques in unconstrained conditions and provide a resource for researchers to build prototype systems and evaluate the performance Ideally it can be used as a standalone data source and can be also used with other datasets together in particular VoxCeleb which is free and large For this reason CNCeleb tries to be distinguished from but also complementary to VoxCeleb from the beginning of the design This leads to three features that we have discussed in the previous section Chinese focused complex genres and quality guarantee by human check In summary CNCeleb contains over 130000 utterances from 1000 Chinese celebrities It covers 11 genres and the total amount of speech waveforms is 274 hours Table TABREF5 gives the data distribution over the genres and Table TABREF6 presents the data distribution over the length of utterances Table TABREF13 summarizes the main difference between CNCeleb and VoxCeleb Compared to VoxCeleb CNCeleb is a more complex dataset and more challenging for speaker recognition research More details of these challenges are as follows Most of the utterances involve realworld noise including ambient noise background babbling music cheers and laugh A certain amount of utterances involve strong and overlapped background speakers especially in the dram and movie genres Most of speakers have different genres of utterances which results in significant variation in speaking styles The utterances of the same speaker may be recorded at different time and with different devices leading to serious crosstime and crosschannel problems Most of the utterances are short which meets the scenarios of most real applications but leads to unreliable decision CNCeleb was collected following a twostage strategy firstly we used an automated pipeline to extract potential segments of the Person of Interest POI and then applied a human check to remove incorrect segments This process is much faster than purely humanbased segmentation and reduces errors caused by a purely automated process Briefly the automated pipeline we used is similar to the one used to collect VoxCeleb1 BIBREF14 and VoxCeleb2 BIBREF15 though we made some modification to increase efficiency and precision Especially we introduced a new facespeaker double check step that fused the information from both the image and speech signals to increase the recall rate while maintaining the precision The detailed steps of the collection process are summarized as follows STEP 1 POI list design We manually selected 1000 Chinese celebrities as our target speakers These speakers were mostly from the entertainment sector such as singers drama actorsactrees news reporters interviewers Region diversity was also taken into account so that variation in accent was covered STEP 2 Pictures and videos download Pictures and videos of 
the 1000 POIs were downloaded from the data source httpswwwbilibilicom by searching for the names of the persons In order to specify that we were searching for POI names the word human was added in the search queries The downloaded videos were manually examined and were categorized into the 11 genres STEP 3 Face detection and tracking For each POI we first obtained the portrait of the person This was achieved by detecting and clipping the face images from all pictures of that person The RetinaFace algorithm was used to perform the detection and clipping BIBREF16 Afterwards video segments that contain the target person were extracted This was achieved by three steps 1 For each frame detect all the faces appearing in the frame using RetinaFace 2 Determine if the target person appears by comparing the POI portrait and the faces detected in the frame We used the ArcFace face recognition system BIBREF17 to perform the comparison 3 Apply the MOSSE face tracking system BIBREF18 to produce face streams STEP 4 Active speaker verification As in BIBREF14 an active speaker verification system was employed to verify if the speech was really spoken by the target person This is necessary as it is possible that the target person appears in the video but the speech is from other persons We used the SyncNet model BIBREF19 as in BIBREF14 to perform the task This model was trained to detect if a stream of mouth movement and a stream of speech are synchronized In our implementation the stream of mouth movement was derived from the face stream produced by the MOSSE system STEP 5 Double check by speaker recognition Although SyncNet worked well for videos in simple genres it failed for videos of complex genres such as movie and vlog A possible reason is that the video content of these genres may change dramatically in time which leads to unreliable estimation for the stream of the mouth movement hence unreliable synchronization detection In order to improve the robustness of the active speaker verification in complex genres we introduced a double check procedure based on speaker recognition The idea is simple whenever the speaker recognition system states a very low confidence for the target speaker the segment will be discarded even if the confidence from SyncNet is high vice versa if the speaker recognition system states a very high confidence the segment will be retained We used an offtheshelf speaker recognition system BIBREF20 to perform this double check In our study this double check improved the recall rate by 30 absolutely STEP 6 Human check The segments produced by the above automated pipeline were finally checked by human According to our experience this human check is rather efficient one could check 1 hour of speech in 1 hour As a comparison if we do not apply the automated preselection checking 1 hour of speech requires 4 hours In this section we present a series of experiments on speaker recognition using VoxCeleb and CNCeleb to compare the complexity of the two datasets VoxCeleb The entire dataset involves two parts VoxCeleb1 and VoxCeleb2 We used SITW BIBREF21 a subset of VoxCeleb1 as the evaluation set The rest of VoxCeleb1 was merged with VoxCeleb2 to form the training set simply denoted by VoxCeleb The training set involves 1236567 utterances from 7185 speakers and the evaluation set involves 6445 utterances from 299 speakers precisely this is the Eval Core set within SITW CNCeleb The entire dataset was split into two parts the first part CNCelebT involves 111260 utterances from 800 speakers 
and was used as the training set the second part CNCelebE involves 18849 utterances from 200 speakers and was used as the evaluation set Two stateoftheart baseline systems were built following the Kaldi SITW recipe BIBREF22 an ivector system BIBREF3 and an xvector system BIBREF10 For the ivector system the acoustic feature involved 24dimensional MFCCs plus the log energy augmented by the first and secondorder derivatives We also applied the cepstral mean normalization CMN and the energybased voice active detection VAD The universal background model UBM consisted of 2048 Gaussian components and the dimensionality of the ivector space was 400 LDA was applied to reduce the dimensionality of the ivectors to 150 The PLDA model was used for scoring BIBREF4 For the xvector system the featurelearning component was a 5layer timedelay neural network TDNN The slicing parameters for the five timedelay layers were t2 t1 t t1 t2 t2 t t2 t3 t t3 t t The statistic pooling layer computed the mean and standard deviation of the framelevel features from a speech segment The size of the output layer was consistent with the number of speakers in the training set Once trained the activations of the penultimate hidden layer were read out as xvectors In our experiments the dimension of the xvectors trained on VoxCeleb was set to 512 while for CNCeleb it was set to 256 considering the less number of speakers in the training set Afterwards the xvectors were projected to 150dimensional vectors by LDA and finally the PLDA model was employed to score the trials Refer to BIBREF10 for more details We first present the basic results evaluated on SITW and CNCelebE Both the frontend ivector or xvector models and backend LDAPLDA models were trained with the VoxCeleb training set Note that for SITW the averaged length of the utterances is more than 80 seconds while this number is about 8 seconds for CNCelebE For a better comparison we resegmented the data of SITW and created a new dataset denoted by SITWS where the averaged lengths of the enrollment and test utterances are 28 and 8 seconds respectively These numbers are similar to the statistics of CNCelebE The results in terms of the equal error rate EER are reported in Table TABREF24 It can be observed that for both the ivector system and the xvector system the performance on CNCelebE is much worse than the performance on SITW and SITWS This indicates that there is big difference between these two datasets From another perspective it demonstrates that the model trained with VoxCeleb does not generalize well although it has achieved reasonable performance on data from a similar source SITW To further compare CNCeleb and VoxCeleb in a quantitative way we built systems based on CNCeleb and VoxCeleb respectively For a fair comparison we randomly sampled 800 speakers from VoxCeleb and built a new dataset VoxCelebL whose size is comparable to CNCelebT This data set was used for backend LDAPLDA training The experimental results are shown in Table TABREF26 Note that the performance of all the comparative experiments show the same trend with the ivector system and the xvector system we therefore only analyze the ivector results Firstly it can be seen that the system trained purely on VoxCeleb obtained good performance on SITWS 1st row This is understandable as VoxCeleb and SITWS were collected from the same source For the pure CNCeleb system 2nd row although CNCelebT and CNCelebE are from the same source the performance is still poor 1424 More importantly with retraining the backend 
model with VoxCelebL 4th row the performance on SITW becomes better than the samesource result on CNCelebE 1134 vs 1424 All these results reconfirm the significant difference between the two datasets and indicate that CNCeleb is more challenging than VoxCeleb We introduced a free dataset CNCeleb for speaker recognition research The dataset contains more than 130k utterances from 1000 Chinese celebrities and covers 11 different genres in the real world We compared CNCeleb with VoxCeleb a widely used dataset in speaker recognition by setting up a series of experiments based on two stateoftheart speaker recognition models Experimental results demonstrated that CNCeleb is significantly different from VoxCeleb and is more challenging for speaker recognition research The EER performance we obtained in this paper suggests that in unconstrained conditions the performance of current speaker recognition techniques might be much worse than previously thought </s>
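A minimal sketch of the face-speaker double check in STEP 5 above: a segment is kept when the speaker-recognition confidence is very high, discarded when it is very low, and otherwise the SyncNet decision stands. The threshold values here are illustrative, not the ones used to build CN-Celeb.

```python
def keep_segment(syncnet_conf, spk_conf,
                 sync_keep=0.8, spk_keep=0.8, spk_drop=0.2):
    # Fuse SyncNet and speaker-recognition confidences (illustrative thresholds)
    if spk_conf >= spk_keep:
        return True                      # speaker recognition is confident: retain
    if spk_conf <= spk_drop:
        return False                     # speaker recognition rejects: discard
    return syncnet_conf >= sync_keep     # otherwise fall back to SyncNet
```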
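For the x-vector system described above, the statistics pooling layer simply concatenates the per-dimension mean and standard deviation of the frame-level TDNN features. A PyTorch sketch is shown below; the actual systems in the paper follow the Kaldi SITW recipe.

```python
import torch
import torch.nn as nn

class StatisticsPooling(nn.Module):
    """Pool frame-level features into a segment-level vector by
    concatenating the per-dimension mean and standard deviation."""
    def forward(self, x):                        # x: (batch, frames, feat_dim)
        mean = x.mean(dim=1)
        std = x.std(dim=1)
        return torch.cat([mean, std], dim=1)     # (batch, 2 * feat_dim)
```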
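The results above are reported as equal error rates; the EER is the operating point where the false-acceptance and false-rejection rates coincide. A common way to estimate it from trial scores is sketched below using scikit-learn (the paper itself uses Kaldi scoring).

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(scores, labels):
    # labels: 1 for target trials, 0 for non-target trials
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))        # point where FAR ~= FRR
    return (fpr[idx] + fnr[idx]) / 2.0
```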
<s>[INST] Conditional BERT Contextual Augmentation [/INST] Deep neural networkbased models are easy to overfit and result in losing their generalization due to limited size of training data In order to address the issue data augmentation methods are often applied to generate more training samples Recent years have witnessed great success in applying data augmentation in the field of speech area BIBREF0 BIBREF1 and computer vision BIBREF2 BIBREF3 BIBREF4 Data augmentation in these areas can be easily performed by transformations like resizing mirroring random cropping and color shifting However applying these universal transformations to texts is largely randomized and uncontrollable which makes it impossible to ensure the semantic invariance and label correctness For example given a movie review The actors is good by mirroring we get doog si srotca ehT or by random cropping we get actors is both of which are meaningless Existing data augmentation methods for text are often loss of generality which are developed with handcrafted rules or pipelines for specific domains A general approach for text data augmentation is replacementbased method which generates new sentences by replacing the words in the sentences with relevant words eg synonyms However words with synonyms from a handcrafted lexical database likes WordNet BIBREF5 are very limited and the replacementbased augmentation with synonyms can only produce limited diverse patterns from the original texts To address the limitation of replacementbased methods Kobayashi BIBREF6 proposed contextual augmentation for labeled sentences by offering a wide range of substitute words which are predicted by a labelconditional bidirectional language model according to the context But contextual augmentation suffers from two shortages the bidirectional language model is simply shallow concatenation of a forward and backward model and the usage of LSTM models restricts their prediction ability to a short range BERT which stands for Bidirectional Encoder Representations from Transformers pretrained deep bidirectional representations by jointly conditioning on both left and right context in all layers BERT addressed the unidirectional constraint by proposing a masked language model MLM objective by masking some percentage of the input tokens at random and predicting the masked words based on its context This is very similar to how contextual augmentation predict the replacement words But BERT was proposed to pretrain text representations so MLM task is performed in an unsupervised way taking no label variance into consideration This paper focuses on the replacementbased methods by proposing a novel data augmentation method called conditional BERT contextual augmentation The method applies contextual augmentation by conditional BERT which is finetuned on BERT We adopt BERT as our pretrained language model with two reasons First BERT is based on Transformer Transformer provides us with a more structured memory for handling longterm dependencies in text Second BERT as a deep bidirectional model is strictly more powerful than the shallow concatenation of a lefttoright and rightto left model So we apply BERT to contextual augmentation for labeled sentences by offering a wider range of substitute words predicted by the masked language model task However the masked language model predicts the masked word based only on its context so the predicted word maybe incompatible with the annotated labels of the original sentences In order to address this issue we 
introduce a new finetuning objective the conditional masked language modelCMLM The conditional masked language model randomly masks some of the tokens from an input and the objective is to predict a labelcompatible word based on both its context and sentence label Unlike Kobayashis work the CMLM objective allows a deep bidirectional representations by jointly conditioning on both left and right context in all layers In order to evaluate how well our augmentation method improves performance of deep neural network models following Kobayashi BIBREF6 we experiment it on two most common neural network structures LSTMRNN and CNN on text classification tasks Through the experiments on six various different text classification tasks we demonstrate that the proposed conditional BERT model augments sentence better than baselines and conditional BERT contextual augmentation method can be easily applied to both convolutional or recurrent neural networks classifier We further explore our conditional MLM tasks connection with style transfer task and demonstrate that our conditional BERT can also be applied to style transfer too Our contributions are concluded as follows To our best knowledge this is the first attempt to alter BERT to a conditional BERT or apply BERT on text generation tasks Language model pretraining has attracted wide attention and finetuning on pretrained language model has shown to be effective for improving many downstream natural language processing tasks Dai BIBREF7 pretrained unlabeled data to improve Sequence Learning with recurrent networks Howard BIBREF8 proposed a general transfer learning method Universal Language Model Finetuning ULMFiT with the key techniques for finetuning a language model Radford BIBREF9 proposed that by generative pretraining of a language model on a diverse corpus of unlabeled text large gains on a diverse range of tasks could be realized Radford BIBREF9 achieved large improvements on many sentencelevel tasks from the GLUE benchmark BIBREF10 BERT BIBREF11 obtained new stateoftheart results on a broad range of diverse tasks BERT pretrained deep bidirectional representations which jointly conditioned on both left and right context in all layers following by discriminative finetuning on each specific task Unlike previous works finetuning pretrained language model to perform discriminative tasks we aim to apply pretrained BERT on generative tasks by perform the masked language modelMLM task To generate sentences that are compatible with given labels we retrofit BERT to conditional BERT by introducing a conditional masked language model task and finetuning BERT on the task Text data augmentation has been extensively studied in natural language processing Samplebased methods includes downsampling from the majority classes and oversampling from the minority class both of which perform weakly in practice Generationbased methods employ deep generative models such as GANs BIBREF12 or VAEs BIBREF13 BIBREF14 trying to generate sentences from a continuous space with desired attributes of sentiment and tense However sentences generated in these methods are very hard to guarantee the quality both in label compatibility and sentence readability In some specific areas BIBREF15 BIBREF16 BIBREF17 word replacement augmentation was applied Wang BIBREF18 proposed the use of neighboring words in continuous representations to create new instances for every word in a tweet to augment the training dataset Zhang BIBREF19 extracted all replaceable words from the given text and 
randomly choose r of them to be replaced then substituted the replaceable words with synonyms from WordNet BIBREF5 Kolomiyets BIBREF20 replaced only the headwords under a taskspecific assumption that temporal trigger words usually occur as headwords Kolomiyets BIBREF20 selected substitute words with top K scores given by the Latent Words LM BIBREF21 which is a LM based on fixed length contexts Fadaee BIBREF22 focused on the rare word problem in machine translation replacing words in a source sentence with only rare words A word in the translated sentence is also replaced using a word alignment method and a rightward LM The work most similar to our research is Kobayashi BIBREF6 Kobayashi used a fillintheblank context for data augmentation by replacing every words in the sentence with language model In order to prevent the generated words from reversing the information related to the labels of the sentences Kobayashi BIBREF6 introduced a conditional constraint to control the replacement of words Unlike previous works we adopt a deep bidirectional language model to apply replacement and the attention mechanism within our model allows a more structured memory for handling longterm dependencies in text which resulting in more general and robust improvement on various downstream tasks In general the language modelLM models the probability of generating natural language sentences or documents Given a sequence textbf textit S of N tokens t1t2tN a forward language model allows us to predict the probability of the sequence as pt1t2tN prod i1Nptit1 t2 ti1 Eq 8 Similarly a backward language model allows us to predict the probability of the sentence as pt1t2tN prod i1Nptiti1 ti2 tN Eq 9 Traditionally a bidirectional language model a shallow concatenation of independently trained forward and backward LMs In order to train a deep bidirectional language model BERT proposed Masked Language Model MLM task which was also referred to Cloze Task BIBREF23 MLM task randomly masks some percentage of the input tokens and then predicts only those masked tokens according to their context Given a masked token ti the context is the tokens surrounding token ti in the sequence textbf textit S ie cloze sentence textbf textit Sbackslash lbrace ti rbrace The final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary to produce words with a probability distribution pcdot textbf textit Sbackslash lbrace ti rbrace MLM task only predicts the masked words rather than reconstructing the entire input which suggests that more pretraining steps are required for the model to converge Pretrained BERT can augment sentences through MLM task by predicting new words in masked positions according to their context As shown in Fig 1 our conditional BERT shares the same model architecture with the original BERT The differences are the input representation and training procedure The input embeddings of BERT are the sum of the token embeddings the segmentation embeddings and the position embeddings For the segmentation embeddings in BERT a learned sentence A embedding is added to every token of the first sentence and if a second sentence exists a sentence B embedding will be added to every token of the second sentence However the segmentation embeddings has no connection to the actual annotated labels of a sentence like sense sentiment or subjectivity so predicted word is not always compatible with annotated labels For example given a positive movie remark this actor is good we have the word good 
masked Through the Masked Language Model task by BERT the predicted word in the masked position has potential to be negative word likes bad or boring Such new generated sentences by substituting masked words are implausible with respect to their original labels which will be harmful if added to the corpus to apply augmentation In order to address this issue we propose a new task conditional masked language model The conditional masked language model randomly masks some of the tokens from the labeled sentence and the objective is to predict the original vocabulary index of the masked word based on both its context and its label Given a masked token ti the context textbf textit Sbackslash lbrace ti rbrace and label y are both considered aiming to calculate pcdot ytextbf textit Sbackslash lbrace ti rbrace instead of calculating pcdot textbf textit Sbackslash lbrace ti rbrace Unlike MLM pretraining the conditional MLM objective allows the representation to fuse the context information and the label information which allows us to further train a labelconditional deep bidirectional representations To perform conditional MLM task we finetune on pretrained BERT We alter the segmentation embeddings to label embeddings which are learned corresponding to their annotated labels on labeled datasets Note that the BERT are designed with segmentation embedding being embedding A or embedding B so when a downstream task dataset with more than two labels we have to adapt the size of embedding to label size compatible We train conditional BERT using conditional MLM task on labeled dataset After the model has converged it is expected to be able to predict words in masked position both considering the context and the label After the conditional BERT is welltrained we utilize it to augment sentences Given a labeled sentence from the corpus we randomly mask a few words in the sentence Through conditional BERT various words compatibly with the label of the sentence are predicted by conditional BERT After substituting the masked words with predicted words a new sentences is generated which shares similar context and same label with original sentence Then new sentences are added to original corpus We elaborate the entire process in algorithm Conditional BERT Contextual Augmentation Conditional BERT contextual augmentation algorithm Finetuning on the pretrained BERT we retrofit BERT to conditional BERT using conditional MLM task on labeled dataset After the model converged we utilize it to augment sentences New sentences are added into dataset to augment the dataset 1 Alter the segmentation embeddings to label embeddings Finetune the pretrained BERT using conditional MLM task on labeled dataset D until convergence each iteration i12M Sample a sentence s from D Randomly mask k words Using finetuned conditional BERT to predict labelcompatible words on masked positions to generate a new sentence Sprime Add new sentences into dataset D to get augmented dataset Dprime Perform downstream task on augmented dataset Dprime In this section we present conditional BERT parameter settings and following Kobayashi BIBREF6 we apply different augmentation methods on two types of neural models through six text classification tasks The pretrained BERT model we used in our experiment is BERT BASE with number of layers ie Transformer blocks L 12 the hidden size H 768 and the number of selfattention heads A 12 total parameters 110M Detailed pretrain parameters setting can be found in original paper BIBREF11 For each task we perform the 
following steps independently First we evaluate the augmentation ability of original BERT model pretrained on MLM task We use pretrained BERT to augment dataset by predicted masked words only condition on context for each sentence Second we finetune the original BERT model to a conditional BERT Welltrained conditional BERT augments each sentence in dataset by predicted masked words condition on both context and label Third we compare the performance of the two methods with Kobayashis BIBREF6 contextual augmentation results Note that the original BERTs segmentation embeddings layer is compatible with twolabel dataset When the taskspecific dataset is with more than two different labels we should retrain a label size compatible label embeddings layer instead of directly finetuning the pretrained one Six benchmark classification datasets are listed in table 1 Following Kim BIBREF24 for a dataset without validation data we use 10 of its training set for the validation set Summary statistics of six classification datasets are shown in table 1 SST BIBREF25 SST Stanford Sentiment Treebank is a dataset for sentiment classification on movie reviews which are annotated with five labels SST5 very positive positive neutral negative or very negative or two labels SST2 positive or negative Subj BIBREF26 Subj Subjectivity dataset is annotated with whether a sentence is subjective or objective MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences which contains news articles from a wide variety of news sources manually annotated for opinions and other private states ie beliefs emotions sentiments speculations etc RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee TREC BIBREF29 TREC is a dataset for classification of the six question types whether the question is about person location numeric information etc We evaluate the performance improvement brought by conditional BERT contextual augmentation on sentence classification tasks so we need to prepare two common sentence classifiers beforehand For comparison following Kobayashi BIBREF6 we adopt two typical classifier architectures CNN or LSTMRNN The CNNbased classifier BIBREF24 has convolutional filters of size 3 4 5 and word embeddings All outputs of each filter are concatenated before applied with a maxpooling over time then fed into a twolayer feedforward network with ReLU followed by the softmax function An RNNbased classifier has a single layer LSTM and word embeddings whose output is fed into an output affine layer with the softmax function For both the architectures dropout BIBREF30 and Adam optimization BIBREF31 are applied during training The train process is finish by early stopping with validation at each epoch Sentence classifier hyperparameters including learning rate embedding dimension unit or filter size and dropout ratio are selected using gridsearch for each taskspecific dataset We refer to Kobayashis implementation in the released code For BERT all hyperparameters are kept the same as Devlin BIBREF11 codes in Tensorflow and PyTorch are all available on github and pretrained BERT model can also be downloaded The number of conditional BERT training epochs ranges in 150 and number of masked words ranges in 12 We compare the performance improvements obtained by our proposed method with the following baseline methods w means with wsynonym Words are randomly replaced with 
synonyms from WordNet BIBREF5 wcontext Proposed by Kobayashi BIBREF6 which used a bidirectional language model to apply contextual augmentation each word was replaced with a probability wcontextlabel Kobayashis contextual augmentation method BIBREF6 in a labelconditional LM architecture Table 2 lists the accuracies of the all methods on two classifier architectures The results show that for various datasets on different classifier architectures our conditional BERT contextual augmentation improves the model performances most BERT can also augments sentences to some extent but not as much as conditional BERT does For we masked words randomly the masked words may be labelsensitive or labelinsensitive If labelinsensitive words are masked words predicted through BERT may not be compatible with original labels The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multilabels sentence classification tasks We also explore the effect of number of training steps to the performance of conditional BERT data augmentation The finetuning epoch setting ranges in 150 we list the finetuning epoch of conditional BERT to outperform BERT for various benchmarks in table 3 The results show that our conditional BERT contextual augmentation can achieve obvious performance improvement after only a few finetuning epochs which is very convenient to apply to downstream tasks In this section we further deep into the connection to style transfer and apply our well trained conditional BERT to style transfer task Style transfer is defined as the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context BIBREF32 Our conditional MLM task changes words in the text condition on given label without changing the context View from this point the two tasks are very close So in order to apply conditional BERT to style transfer task given a specific stylistic sentence we break it into two steps first we find the words relevant to the style second we mask the stylerelevant words then use conditional BERT to predict new substitutes with sentence context and target style property In order to find stylerelevant words in a sentence we refer to Xu BIBREF33 which proposed an attentionbased method to extract the contribution of each word to the sentence sentimental label For example given a positive movie remark This movie is funny and interesting we filter out the words contributes largely to the label and mask them Then through our conditional BERT contextual augmentation method we fill in the masked position by predicting words conditioning on opposite label and sentence context resulting in This movie is boring and dull The words boring and dull contribute to the new sentence being labeled as negative style We sample some sentences from dataset SST2 transferring them to the opposite label as listed in table 4 In this paper we finetune BERT to conditional BERT by introducing a novel conditional MLM task After being well trained the conditional BERT can be applied to data augmentation for sentence classification tasks Experiment results show that our model outperforms several baseline methods obviously Furthermore we demonstrate that our conditional BERT can also be applied to style transfer task In the future 1We will explore how to perform text data augmentation on imbalanced datasets with pretrained language model 2 we believe the idea of conditional BERT contextual augmentation is universal and will be 
applied to paragraph- or document-level data augmentation </s>
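The label-compatibility problem motivating conditional BERT can be reproduced with a plain masked language model: the prediction for a masked position depends only on the context, so a positive review can receive negative fills. A sketch using the Hugging Face transformers library follows; the library and checkpoint name are assumptions for illustration, not the paper's released code.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("this actor is [MASK]", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# index of the [MASK] token and its top predictions given only the context
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
top_ids = logits[0, mask_pos].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
# the candidates may include label-incompatible words such as "bad" or "boring"
```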
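Since conditional BERT replaces the segmentation embeddings with label embeddings, one way to sketch the augmentation loop is to feed the sentence label where the segment id would normally go. The code below assumes a masked-LM checkpoint fine-tuned with the conditional MLM objective and reuses `token_type_ids` to carry the label; names and the masking count k are illustrative, and for more than two labels the type-embedding table must first be resized, as the paper notes.

```python
import random
import torch

def conditional_bert_augment(sentence, label_id, tokenizer, model, k=2):
    # Randomly mask k tokens, then predict label-compatible replacements
    # by conditioning on both the context and the label embedding.
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"].clone()
    candidates = list(range(1, ids.size(1) - 1))          # skip [CLS] / [SEP]
    masked_pos = random.sample(candidates, min(k, len(candidates)))
    for p in masked_pos:
        ids[0, p] = tokenizer.mask_token_id
    label_ids = torch.full_like(ids, label_id)            # label embedding per token
    with torch.no_grad():
        logits = model(input_ids=ids, token_type_ids=label_ids,
                       attention_mask=enc["attention_mask"]).logits
    for p in masked_pos:
        ids[0, p] = logits[0, p].argmax().item()
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```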
<s>[INST] Recent Advances in Neural Question Generation [/INST] Question Generation QG concerns the task of automatically generating questions from various inputs such as raw text database or semantic representation BIBREF0 People have the ability to ask rich creative and revealing questions BIBREF1 eg asking Why did Gollum betray his master Frodo Baggins after reading the fantasy novel The Lord of the Rings How can machines be endowed with the ability to ask relevant and tothepoint questions given various inputs This is a challenging complementary task to Question Answering QA Both QA and QG require an indepth understanding of the input source and the ability to reason over relevant contexts But beyond understanding QG additionally integrates the challenges of Natural Language Generation NLG ie generating grammatically and semantically correct questions QG is of practical importance in education forming good questions are crucial for evaluating students knowledge and stimulating selflearning QG can generate assessments for course materials BIBREF2 or be used as a component in adaptive intelligent tutoring systems BIBREF3 In dialog systems fluent QG is an important skill for chatbots eg in initiating conversations or obtaining specific information from human users QA and reading comprehension also benefit from QG by reducing the needed human labor for creating largescale datasets We can say that traditional QG mainly focused on generating factoid questions from a single sentence or a paragraph spurred by a series of workshops during 20082012 BIBREF4 BIBREF5 BIBREF6 BIBREF7 Recently driven by advances in deep learning QG research has also begun to utilize neural techniques to develop endtoend neural models to generate deeper questions BIBREF8 and to pursue broader applications BIBREF9 BIBREF10 While there have been considerable advances made in NQG the area lacks a comprehensive survey This paper fills this gap by presenting a systematic survey on recent development of NQG focusing on three emergent trends that deep learning has brought in QG 1 the change of learning paradigm 2 the broadening of the input spectrum and 3 the generation of deep questions For the sake of clean exposition we first provide a broad overview of QG by conceptualizing the problem from the perspective of the three introduced aspects 1 its learning paradigm 2 its input modalities and 3 the cognitive level it involves This combines past research with recent trends providing insights on how NQG connects to traditional QG research QG research traditionally considers two fundamental aspects in question asking What to ask and How to ask A typical QG task considers the identification of the important aspects to ask about what to ask and learning to realize such identified aspects as natural language how to ask Deciding what to ask is a form of machine understanding a machine needs to capture important information dependent on the target application akin to automatic summarization Learning how to ask however focuses on aspects of the language quality such as grammatical correctness semantically preciseness and language flexibility Past research took a reductionist approach separately considering these two problems of what and how via content selection and question construction Given a sentence or a paragraph as input content selection selects a particular salient topic worthwhile to ask about and determines the question type What When Who etc Approaches either take a syntactic BIBREF11 BIBREF12 BIBREF13 or semantic BIBREF14 
BIBREF3 BIBREF15 BIBREF16 tack both starting by applying syntactic or semantic parsing respectively to obtain intermediate symbolic representations Question construction then converts intermediate representations to a natural language question taking either a tranformation or templatebased approach The former BIBREF17 BIBREF18 BIBREF13 rearranges the surface form of the input sentence to produce the question the latter BIBREF19 BIBREF20 BIBREF21 generates questions from predefined question templates Unfortunately such QG architectures are limiting as their representation is confined to the variety of intermediate representations transformation rules or templates In contrast neural models motivate an endtoend architectures Deep learned frameworks contrast with the reductionist approach admitting approaches that jointly optimize for both the what and how in an unified framework The majority of current NQG models follow the sequencetosequence Seq2Seq framework that use a unified representation and joint learning of content selection via the encoder and question construction via the decoder In this framework traditional parsingbased content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 Question construction has become completely datadriven requiring far less labor compared to transformation rules enabling better language flexibility compared to question templates However unlike other Seq2Seq learning NLG tasks such as Machine Translation Image Captioning and Abstractive Summarization which can be loosely regarded as learning a onetoone mapping generated questions can differ significantly when the intent of asking differs eg the target answer the target aspect to ask about and the questions depth In Section Methodology we summarize different NQG methodologies based on Seq2Seq framework investigating how some of these QGspecific factors are integrated with neural models and discussing what could be further explored The change of learning paradigm in NQG era is also represented by multitask learning with other NLP tasks for which we discuss in Section Multitask Learning Question generation is an NLG task for which the input has a wealth of possibilities depending on applications While a host of input modalities have been considered in other NLG tasks such as text summarization BIBREF24 image captioning BIBREF25 and tabletotext generation BIBREF26 traditional QG mainly focused on textual inputs especially declarative sentences explained by the original application domains of question answering and education which also typically featured textual inputs Recently with the growth of various QA applications such as Knowledge Base Question Answering KBQA BIBREF27 and Visual Question Answering VQA BIBREF28 NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 This trend is also spurred by the remarkable success of neural models in feature representation especially on image features BIBREF30 and knowledge representations BIBREF31 We discuss adapting NQG models to other input modalities in Section Wider Input Modalities Finally we consider the required cognitive process behind question asking a distinguishing factor for questions BIBREF32 A typical framework that attempts to categorize the cognitive levels involved in question asking comes from Blooms taxonomy BIBREF33 which has undergone several revisions and currently has six cognitive levels Remembering Understanding Applying 
Analyzing Evaluating and Creating BIBREF32 Traditional QG focuses on shallow levels of Blooms taxonomy typical QG research is on generating sentencebased factoid questions eg Who What Where questions whose answers are simple constituents in the input sentence BIBREF2 BIBREF13 However a QG system achieving human cognitive level should be able to generate meaningful questions that cater to higher levels of Blooms taxonomy BIBREF34 such as Why Whatif and How questions Traditionally those deep questions are generated through shallow methods such as handcrafted templates BIBREF20 BIBREF21 however these methods lack a real understanding and reasoning over the input Although asking deep questions is complex NQGs ability to generalize over voluminous data has enabled recent research to explore the comprehension and reasoning aspects of QG BIBREF35 BIBREF1 BIBREF8 BIBREF34 We investigate this trend in Section Generation of Deep Questions examining the limitations of current Seq2Seq model in generating deep questions and the efforts made by existing works indicating further directions ahead The rest of this paper provides a systematic survey of NQG covering corpus and evaluation metrics before examining specific neural models As QG can be regarded as a dual task of QA in principle any QA dataset can be used for QG as well However there are at least two corpusrelated factors that affect the difficulty of question generation The first is the required cognitive level to answer the question as we discussed in the previous section Current NQG has achieved promising results on datasets consisting mainly of shallow factoid questions such as SQuAD BIBREF36 and MS MARCO BIBREF38 However the performance drops significantly on deep question datasets such as LearningQ BIBREF8 shown in Section Generation of Deep Questions The second factor is the answer type ie the expected form of the answer typically having four settings 1 the answer is a text span in the passage which is usually the case for factoid questions 2 humangenerated abstractive answer that may not appear in the passage usually the case for deep questions 3 multiple choice question where question and its distractors should be jointly generated and 4 no given answer which requires the model to automatically learn what is worthy to ask The design of NQG system differs accordingly Table 1 presents a listing of the NQG corpora grouped by their cognitive level and answer type along with their statistics Among them SQuAD was used by most groups as the benchmark to evaluate their NQG models This provides a fair comparison between different techniques However it raises the issue that most NQG models work on factoid questions with answer as text span leaving other types of QG problems less investigated such as generating deep multichoice questions To overcome this a wider variety of corpora should be benchmarked against in future NQG research Although the datasets are commonly shared between QG and QA it is not the case for evaluation it is challenging to define a gold standard of proper questions to ask Meaningful syntactically correct semantically sound and natural are all useful criteria yet they are hard to quantify Most QG systems involve human evaluation commonly by randomly sampling a few hundred generated questions and asking human annotators to rate them on a 5point Likert scale The average rank or the percentage of bestranked questions are reported and used for quality marks As human evaluation is timeconsuming common automatic evaluation metrics 
for NLG such as BLEU BIBREF41 METEOR BIBREF42 and ROUGE BIBREF43 are also widely used However some studies BIBREF44 BIBREF45 have shown that these metrics do not correlate well with fluency adequacy coherence as they essentially compute the n gram similarity between the source sentence and the generated question To overcome this BIBREF46 proposed a new metric to evaluate the answerability of a question by calculating the scores for several questionspecific factors including question type content words function words and named entities However as it is newly proposed it has not been applied to evaluate any NQG system yet To accurately measure what makes a good question especially deep questions improved evaluation schemes are required to specifically investigate the mechanism of question asking Many current NQG models follow the Seq2Seq architecture Under this framework given a passage usually a sentence X x1 cdots xn and possibly a target answer A a text span in the passage as input an NQG model aims to generate a question Y y1 cdots ym asking about the target answer A in the passage X which is defined as finding the best question barY that maximizes the conditional likelihood given the passage X and the answer A barY arg max Y PY vert X A
arg max Y sum t1m Pyt vert X A y t Eq 5 BIBREF47 pioneered the first NQG model using an attention Seq2Seq model BIBREF22 which feeds a sentence into an RNNbased encoder and generate a question about the sentence through a decoder The attention mechanism is applied to help decoder pay attention to the most relevant parts of the input sentence while generating a question Note that this base model does not take the target answer as input Subsequently neural models have adopted attention mechanism as a default BIBREF48 BIBREF49 BIBREF50 Although these NQG models all share the Seq2Seq framework they differ in the consideration of 1 QGspecific factors eg answer encoding question word generation and paragraphlevel contexts and 2 common NLG techniques eg copying mechanism linguistic features and reinforcement learning discussed next The most commonly considered factor by current NQG systems is the target answer which is typically taken as an additional input to guide the model in deciding which information to focus on when generating otherwise the NQG model tend to generate questions without specific target eg What is mentioned Models have solved this by either treating the answers position as an extra input feature BIBREF48 BIBREF51 or by encoding the answer with a separate RNN BIBREF49 BIBREF52 The first type of method augments each input word vector with an extra answer indicator feature indicating whether this word is within the answer span BIBREF48 implement this feature using the BIO tagging scheme while BIBREF50 directly use a binary indicator In addition to the target answer BIBREF53 argued that the context words closer to the answer also deserve more attention from the model since they are usually more relevant To this end they incorporate trainable position embeddings dp1 dp2 cdots dpn into the computation of attention distribution where pi is the relative distance between the i th word and the answer and dpi is the embedding of pi This achieved an extra BLEU4 gain of 089 on SQuAD To generate answerrelated questions extra answer indicators explicitly emphasize the importance of answer however it also increases the tendency that generated questions include words from the answer resulting in useless questions as observed by BIBREF52 For example given the input John Francis OHara was elected president of Notre Dame in 1934 an improperly generated question would be Who was elected John Francis which exposes some words in the answer To address this they propose to replace the answer into a special token for passage encoding and a separate RNN is used to encode the answer The outputs from two encoders are concatenated as inputs to the decoder BIBREF54 adopted a similar idea that separately encodes passage and answer but they instead use the multiperspective matching between two encodings as an extra input to the decoder We forecast treating the passage and the target answer separately as a future trend as it results in a more flexible model which generalizes to the abstractive case when the answer is not a text span in the input passage However this inevitably increases the model complexity and difficulty in training Question words eg when how and why also play a vital role in QG BIBREF53 observed that the mismatch between generated question words and answer type is common for current NQG systems For example a whenquestion should be triggered for answer the end of the Mexican War while a whyquestion is generated by the model A few works BIBREF49 BIBREF53 considered question word generation 
separately in model design BIBREF49 proposed to first generate a question template that contains question word eg how to where is the placeholder before generating the rest of the question To this end they train two Seq2Seq models the former learns to generate question templates for a given text while the latter learns to fill the blank of template to form a complete question Instead of a twostage framework BIBREF53 proposed a more flexible model by introducing an additional decoding mode that generates the question word When entering this mode the decoder produces a question word distribution based on a restricted set of vocabulary using the answer embedding the decoder state and the context vector The switch between different modes is controlled by a discrete variable produced by a learnable module of the model in each decoding step Determining the appropriate question word harks back to question type identification which is correlated with the question intention as different intents may yield different questions even when presented with the same passage answer input pair This points to the direction of exploring question pragmatics where external contextual information such as intent can inform and influence how questions should optimally be generated Leveraging rich paragraphlevel contexts around the input text is another natural consideration to produce better questions According to BIBREF47 around 20 of questions in SQuAD require paragraphlevel information to be answered However as input texts get longer Seq2Seq models have a tougher time effectively utilizing relevant contexts while avoiding irrelevant information To address this challenge BIBREF51 proposed a gated selfattention encoder to refine the encoded context by fusing important information with the contexts selfrepresentation properly which has achieved stateoftheart results on SQuAD The long passage consisting of input texts and its context is first embedded via LSTM with answer position as an extra feature The encoded representation is then fed through a gated selfmatching network BIBREF55 to aggregate information from the entire passage and embed intrapassage dependencies Finally a feature fusion gate BIBREF56 chooses relevant information between the original and selfmatching enhanced representations Instead of leveraging the whole context BIBREF57 performed a prefiltering by running a coreference resolution system on the context passage to obtain coreference clusters for both the input sentence and the answer The coreferred sentences are then fed into a gating network from which the outputs serve as extra features to be concatenated with the original input vectors The aforementioned models require the target answer as an input in which the answer essentially serves as the focus of asking However in the case that only the input passage is given a QG system should automatically identify questionworthy parts within the passage This task is synonymous with content selection in traditional QG To date only two works BIBREF58 BIBREF59 have worked in this setting They both follow the traditional decomposition of QG into content selection and question construction but implement each task using neural networks For content selection BIBREF58 learn a sentence selection task to identify questionworthy sentences from the input paragraph using a neural sequence tagging model BIBREF59 train a neural keyphrase extractor to predict keyphrases of the passage For question construction they both employed the Seq2Seq model for which the input 
is either the selected sentence or the input passage with keyphrases as target answer However learning what aspect to ask about is quite challenging when the question requires reasoning over multiple pieces of information within the passage cf the Gollum question from the introduction Beyond retrieving questionworthy information we believe that studying how different reasoning patterns eg inductive deductive causal and analogical affects the generation process will be an aspect for future study Common techniques of NLG have also been considered in NQG model summarized as 3 tactics 1 Copying Mechanism Most NQG models BIBREF48 BIBREF60 BIBREF61 BIBREF50 BIBREF62 employ the copying mechanism of BIBREF23 which directly copies relevant words from the source sentence to the question during decoding This idea is widely accepted as it is common to refer back to phrases and entities appearing in the text when formulating factoid questions and difficult for a RNN decoder to generate such rare words on its own 2 Linguistic Features Approaches also seek to leverage additional linguistic features that complements word embeddings including word case POS and NER tags BIBREF48 BIBREF61 as well as coreference BIBREF50 and dependency information BIBREF62 These categorical features are vectorized and concatenated with word embeddings The feature vectors can be either onehot or trainable and serve as input to the encoder 3 Policy Gradient Optimizing for just groundtruth log likelihood ignores the many equivalent ways of asking a question Relevant QG work BIBREF60 BIBREF63 have adopted policy gradient methods to add taskspecific rewards such as BLEU or ROUGE to the original objective This helps to diversify the questions generated as the model learns to distribute probability mass among equivalent expressions rather than the single ground truth question In Table 2 we summarize existing NQG models with their employed techniques and their bestreported performance on SQuAD These methods achieve comparable results as of this writing BIBREF51 is the stateoftheart Two points deserve mention First while the copying mechanism has shown marked improvements there exist shortcomings BIBREF52 observed many invalid answerrevealing questions attributed to the use of the copying mechanism cf the John Francis example in Section Emerging Trends They abandoned copying but still achieved a performance rivaling other systems In parallel application areas such as machine translation the copy mechanism has been to a large extent replaced with selfattention BIBREF64 or transformer BIBREF65 The future prospect of the copying mechanism requires further investigation Second recent approaches that employ paragraphlevel contexts have shown promising results not only boosting performance but also constituting a step towards deep question generation which requires reasoning over rich contexts We discuss three trends that we wish to call practitioners attention to as NQG evolves to take the center stage in QG Multitask Learning Wider Input Modalities and Deep Question Generation As QG has become more mature work has started to investigate how QG can assist in other NLP tasks and vice versa Some NLP tasks benefit from enriching training samples by QG to alleviate the data shortage problem This idea has been successfully applied to semantic parsing BIBREF66 and QA BIBREF67 In the semantic parsing task that maps a natural language question to a SQL query BIBREF66 achieved a 3 performance gain with an enlarged training set that contains 
pseudolabeled SQL question pairs generated by a Seq2Seq QG model In QA BIBREF67 employed the idea of selftraining BIBREF68 to jointly learn QA and QG The QA and QG models are first trained on a labeled corpus Then the QG model is used to create more questions from an unlabeled text corpus and the QA model is used to answer these newlycreated questions The newlygenerated questionanswer pairs form an enlarged dataset to iteratively retrain the two models The process is repeated while performance of both models improve Investigating the core aspect of QG we say that a welltrained QG system should have the ability to 1 find the most salient information in the passage to ask questions about and 2 given this salient information as target answer to generate an answer related question BIBREF69 leveraged the first characteristic to improve text summarization by performing multitask learning of summarization with QG as both these two tasks require the ability to search for salient information in the passage BIBREF49 applied the second characteristic to improve QA For an input question q and a candidate answer hata they generate a question hatq for hata by way of QG system Since the generated question hatq is closely related to hata the similarity between q and hatq helps to evaluate whether hata is the correct answer Other works focus on jointly training to combine QG and QA BIBREF70 simultaneously train the QG and QA models in the same Seq2Seq model by alternating input data between QA and QG examples BIBREF71 proposed a training algorithm that generalizes Generative Adversarial Network GANs BIBREF72 under the question answering scenario The model improves QG by incorporating an additional QAspecific loss and improving QA performance by adding artificially generated training instances from QG However while joint training has shown some effectiveness due to the mixed objectives its performance on QG are lower than the stateoftheart results which leaves room for future exploration QG work now has incorporated input from knowledge bases KBQG and images VQG Inspired by the use of SQuAD as a question benchmark BIBREF9 created a 30M largescale dataset of KB triple question pairs to spur KBQG work They baselined an attention seq2seq model to generate the target factoid question Due to KB sparsity many entities and predicates are unseen or rarely seen at training time BIBREF73 address these fewzeroshot issues by applying the copying mechanism and incorporating textual contexts to enrich the information for rare entities and relations Since a single KB triple provides only limited information KBgenerated questions also overgeneralize a model asks Who was born in New York when given the triple DonaldTrump Placeofbirth NewYork To solve this BIBREF29 enrich the input with a sequence of keywords collected from its related triples Visual Question Generation VQG is another emerging topic which aims to ask questions given an image We categorize VQG into grounded and openended VQG by the level of cognition Grounded VQG generates visually grounded questions ie all relevant information for the answer can be found in the input image BIBREF74 A key purpose of grounded VQG is to support the dataset construction for VQA To ensure the questions are grounded existing systems rely on image captions to varying degrees BIBREF75 and BIBREF76 simply convert image captions into questions using rulebased methods with textual patterns BIBREF74 proposed a neural model that can generate questions with diverse types for a single 
image using separate networks to construct dense image captions and to select question types In contrast to grounded QG humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image Motivated by this BIBREF10 proposed openended VQG that aims to generate natural and engaging questions about an image These are deep questions that require high cognition such as analyzing and creation With significant progress in deep generative models marked by variational autoencoders VAEs and GANs such models are also used in openended VQG to bring creativity into generated questions BIBREF77 BIBREF78 showing promising results This also brings hope to address deep QG from text as applied in NLG eg SeqGAN BIBREF79 and LeakGAN BIBREF80 Endowing a QG system with the ability to ask deep questions will help us build curious machines that can interact with humans in a better manner However BIBREF81 pointed out that asking highquality deep questions is difficult even for humans Citing the study from BIBREF82 to show that students in college asked only about 6 deepreasoning questions per hour in a questionencouraging tutoring session These deep questions are often about events evaluation opinions syntheses or reasons corresponding to higherorder cognitive levels To verify the effectiveness of existing NQG models in generating deep questions BIBREF8 conducted an empirical study that applies the attention Seq2Seq model on LearningQ a deepquestion centric dataset containing over 60 questions that require reasoning over multiple sentences or external knowledge to answer However the results were poor the model achieved miniscule BLEU4 scores of 4 and METEOR scores of 9 compared with 12 BLEU4 and 16 METEOR on SQuAD Despite further indepth analysis are needed to explore the reasons behind we believe there are two plausible explanations 1 Seq2Seq models handle long inputs ineffectively and 2 Seq2Seq models lack the ability to reason over multiple pieces of information Despite still having a long way to go some works have set out a path forward A few early QG works attempted to solve this through building deep semantic representations of the entire text using concept maps over keywords BIBREF83 or minimal recursion semantics BIBREF84 to reason over concepts in the text BIBREF35 proposed a crowdsourcingbased workflow that involves building an intermediate ontology for the input text soliciting question templates through crowdsourcing and generating deep questions based on template retrieval and ranking Although this process is semiautomatic it provides a practical and efficient way towards deep QG In a separate line of work BIBREF1 proposed a framework that simulates how people ask deep questions by treating questions as formal programs that execute on the state of the world outputting an answer Based on our survey we believe the roadmap towards deep NGQ points towards research that will 1 enhance the NGQ model with the ability to consider relationships among multiple source sentences 2 explicitly model typical reasoning patterns and 3 understand and simulate the mechanism behind human question asking We have presented a comprehensive survey of NQG categorizing current NQG models based on different QGspecific and common technical variations and summarizing three emerging trends in NQG multitask learning wider input modalities and deep question generation Whats next for NGQ We end with future potential directions by applying past insights to current NQG models the unknown 
unknowns: promising directions not yet explored. When to Ask: besides learning what and how to ask, in many real-world applications where questions play an important role, such as automated tutoring and conversational systems, learning when to ask becomes an important issue. In contrast to general dialog management BIBREF85, no research has explored when a machine should ask an engaging question in dialog. Modeling question asking as an interactive and dynamic process may become an interesting topic ahead. Personalized QG: question asking is quite personalized: people with different characters and knowledge backgrounds ask different questions. However, integrating QG with user modeling in dialog management or recommendation systems has not yet been explored. Explicitly modeling user state and awareness leads us towards personalized QG, which dovetails deep end-to-end QG with deep user modeling and pairs the dual of generation and comprehension, much in the same vein as in the vision-image generation area </s>
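As a concrete illustration of the answer-aware encoding discussed in the Methodology section above (BIO answer tags plus a relative-distance feature per input token), a minimal sketch follows. It is a hypothetical preprocessing routine, not the implementation of any cited NQG system; the function name, tag labels, and signed-distance convention are assumptions made for illustration.

```python
from typing import List, Tuple

def answer_position_features(tokens: List[str],
                             answer_span: Tuple[int, int]) -> List[dict]:
    """For each token, emit a BIO answer tag and its signed distance to the
    answer span -- two commonly used answer-aware input features for Seq2Seq QG."""
    start, end = answer_span  # inclusive token indices of the target answer
    features = []
    for i, tok in enumerate(tokens):
        if start <= i <= end:
            bio = "B-ANS" if i == start else "I-ANS"
            dist = 0
        else:
            bio = "O"
            # signed distance to the nearest answer boundary
            dist = i - end if i > end else i - start
        features.append({"token": tok, "answer_tag": bio, "answer_dist": dist})
    return features

if __name__ == "__main__":
    sent = "John Francis OHara was elected president of Notre Dame in 1934".split()
    # suppose the target answer span covers "John Francis OHara" (tokens 0..2)
    for f in answer_position_features(sent, (0, 2)):
        print(f)
```

In practice the BIO tag would be embedded (or reduced to a binary indicator) and concatenated with the word embedding, and the distance would index a trainable position embedding, as described above.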
<s>[INST] Open Named Entity Modeling from Embedding Distribution [/INST] Named Entity Recognition is a major natural language processing task that recognizes the proper labels such as LOC Location PER Person ORG Organization etc Like words or phrase being a sort of language constituent named entities also benefit from better representation for better processing Continuous word representations known as word embeddings well capture semantic and syntactic regularities of words BIBREF0 and perform well in monolingual NE recognition BIBREF1 BIBREF2 Word embeddings also exhibit isomorphism structure across languages BIBREF3 On account of these characteristics above we attempt to utilize word embeddings to improve NE recognition for resourcepoor languages with the help of richer ones The stateoftheart crosslingual NE recognition methods are mainly based on annotation projection methods according to parallel corpora translations BIBREF4 BIBREF5 BIBREF6 BIBREF7 and Wikipedia methods BIBREF8 BIBREF9 BIBREF10 BIBREF11 Most annotated corpus based NE recognition tasks can benefit a great deal from a known NE dictionary as NEs are those words which carry common sense knowledge quite differ from the rest ones in any language vocabulary This work will focus on the NE recognition from plain text instead of corpus based NE recognition For a purpose of learning from limited annotated linguistic resources our preliminary discovery shows that it is possible to build a geometric space projection between embedding spaces to help crosslingual NE recognition Our study contains two main steps First we explore the NE distribution in monolingual case Next we learn a hypersphere mapping between embedding spaces of languages with minimal supervision Despite the simplicity of our model we make the following contributions First for word embeddings generated by different dimensions and objective functions all common NE types PER LOC ORG tend to be densely distributed in a hypersphere which gives a better solution to characterize the general NE distribution rather than existing closed dictionary definition for NE Second with the help of the hypersphere mapping it is possible to capture the NE distribution of resourcepoor languages with only a small amount of annotated data Third our method is highly friendly to unregistered NEs as the distance to each hypersphere center is the only factor needed to determine their NE categories Finally by adding hypersphere features we can significantly improve the performance of offtheshelf named entity recognition NER systems Seok BIBREF2 proposed that similar words are more likely to occupy close spatial positions since their word embeddings carries syntactical and semantical informative clues For an intuitive understanding they listed the nearest neighbors of words included in the PER and ORG tags under cosine similarity metric To empirically verify this observation and explore the performance of this property in Euclidean space we list Top5 nearest neighbors under Euclidean distance metric in Table 1 and illustrate a standard tSNE BIBREF12 2 D projection of the embeddings of three entity types with a sample of 500 words for each type Nearest neighbors are calculated by comparing the Euclidean distance between the embedding of each word such as Fohnsdorf Belgian and Ltd and the embeddings of all other words in the vocabulary We pretrain word embeddings using the continuous skipgram model BIBREF13 with the tool and obtain multiword and singleword phrases with a maximum length of 8 and a 
minimum word frequency cutoff of 3 The examples in Table 1 and visualization in Figure 1 demonstrate that the above observation suits well under Euclidean distance metric for NE recognition either for monolingual or multilingual situations Encouraged by the verification of nearest neighbors of NEs still being NEs we attempt to build a model which can represent this property with least parameters Namely given an NE dictionary on a monolingual we build a model to describe the distribution of the word embeddings of these entities then we can easily use these parameters as a decoder for any word to directly determine whether it belongs to a certain type of entity In this section we first introduce the open modeling from embedding distribution in monolingual cases and then put forward the mapping of the distribution model between languages and then use the mapping to build named entity dataset for resourcepoor languages Finally we use the proposed named entity model to improve the performance of stateoftheart NE recognition systems As illustrated is Figure 1 the embedding distribution of NEs is aggregated and there exists a certain boundary between different types of NEs We construct an open representation for each type of NEs hypersphere the NE type of any entity can be easily judged by checking whether it is inside a hypersphere which makes a difference from the defining way of any limited and insufficient NE dictionary The hypersphere can be expressed as follows E X O le r Eq 9 where E represents the adopted Euclidean distance X is referred to any point in the hypersphere O and r are the center vector and radius For each entity type we attempt to construct a hypersphere which encompass as many congeneric NEs as possible and as few as possible inhomogeneous NEs we use F1 score as a tradeoff between these two concerns We carefully tune the center and radius of the hypersphere to maximize its F1 score we first fix the center as the average of all NE embeddings from known NE dictionaries and search the best radius in minDist maxDist where minDistmaxDist refers to the distance between the center and its nearestfarthest neighbors Then we kick NEs which are far from the center with the distance threshold q much larger than the radius to generate a new center Finally we tune the threshold q and repeat the above steps to find the most suitable center and radius The mathematical intuition for using a hypersphere can be interpreted in a manner similar to support vector machine SVM BIBREF14 which uses the kernel to obtain the optimal margin in very high dimensional spaces through linear hyperplane separation in Descartes coordination We transfer the idea to the separation of NE distributions The only difference is about boundary shape what we need is a closed surface instead of an open hyperplane and hypersphere is such a smooth closed boundary with least parameters as well in polar coordinates as counterpart of hyperplane in Descartes coordinates Using the least principle to model the mathematical objective also follows the Occam razor principle Figure 1 also reveals that the distribution of PER NEs is compact while ORG NE distribution is relatively sparse Syntactically PER NEs are more stable in terms of position and length in sentences compared to ORG NEs so that they have a more accurate embedding representation with strong strong syntax and semantics making the corresponding word embeddings closer to central region of the hypersphere As the isomorphism characteristic exists between languages 
BIBREF3 BIBREF15 we can apply the distributional modeling for every languages in the same way For a target language without an NE dictionary its NE distribution can be obtained from a source language with known NE distributions by learning the transforming function between these two languages We construct the transformation matrix W via a set of parallel word pairs the set will be referred to seed pairs hereafter and their word embeddings lbrace Xi Zirbrace i1m BIBREF3 lbrace Xirbrace i1m lbrace Zirbrace i1m are the source and target word embeddings respectively W can be learned by solving the matrix equation XW Z Then given the source center vector O1 the mapping center vector O2 can be expressed as O2 WTO1 Eq 11 Actually the isomorphism mapping between embedding spaces is the type of affine isomorphism by furthermore considering embedding in continuous space The invariant characteristics of relative position BIBREF16 BIBREF17 BIBREF18 BIBREF19 in affine transformation is applied to correct transformation matrix errors caused by limited amount of parallel word pairs the set will be referred to seed pairs hereafter As shown in Figure 2 the ratio of the line segments keep constant when the distance is linearly enlarged or shortened Recall that point Q is an affine combination of two other noncoincident points Q1 and Q2 on the line Q 1tQ1 tQ2 We apply the affine mapping f and get fQ f1tQ1 tQ2 1tfQ1 tfQ2 Obviously the constant ratio t is not affected by the affine transformation f That is Q has the same relative distances between it and Q1 and Q2 during the process of transformation Based on the above characteristic for any point Xi in the source space and its mapping point Zi Xi and fQ f1tQ1 tQ2 1tfQ1 tfQ20 cut off radiuses with the same ratio namely the ratio of the distance of these two points to their centers and their radiuses remains unchanged fracE O1 Xir1 fracE O2 Zir2 Eq 15 where E represents the adopted Euclidean distance O1 O2 r1 r2 are the centers and radii of hyperspheres We convert the equation and learn the optimized mapping center O2 and ratio K via the seed pairs K fracr2r1 fracE O2 ZiE O1 Xi Eq 16 beginaligned
E(O_2, Z_i) = K \cdot E(O_1, X_i), \quad r_2 = K \cdot r_1
endaligned Eq 17 Given the seed pairs lbrace Xi Zirbrace i1m the initialized center O2 in Equation 3 the center O1 and radius r1 of the hypersphere in source language space we may work out the optimized ratio K the mapping center O2 and radius r2 in target language space by solving the linear equation group 5 The Euclidean distance between word and hypersphere centers can be precomputed as its NE likelihood which may provide informative clues for NE recognition We only consider three entity types in our experiment and the Euclidean distance which is represented as a 3 D vector and referred to HS vector hereafter is added to four representative offtheshelf NER systems to verify its effectiveness We feed HS vector into different layers of the neural network 1 input layer xk ck HS 2 output layer of LSTM hk HS where xk wk and hk represent word embeddings char embeddings and the output of LSTM respectively All of these models are based on classical BiLSTMCRF architecture BIBREF20 except that BIBREF21 replaces CRF layer with softmax These four baseline systems are introduced as follows BIBREF22 concatenates ELMo with word embeddings as the input of LSTM to enhance word representations as it carries both syntactic and semantic information BIBREF21 uses distant supervision for NER task and propose a new Tie or Break tagging scheme where entity spans and entity types are encoded into two folds They first build a binary classifier to distinguish Break from Tie and then learn the entity types according to their occurrence and frequency in NE dictionary The authors conduct their experiments on biomedical datasets rather than standard benchmark so we extract the NEs in training data as the domainspecific dictionary This work creates a promising prospect for using dictionary to replace the role of training data BIBREF23 takes advantage of the power of the 120 entity types from annotated data in Wikipedia Cosine similarity between the word embedding and the embedding of each entity type is concatenated as the 120 D feature vector which is called LS vector in their paper and then fed into the input layer of LSTM Lexical feature has been shown a key factor to NE recognition BIBREF24 passes sentences as sequences of characters into a characterlevel language model to produce a novel type of word embedding contextual string embeddings where one word may have different embeddings as the embeddings are computed both on the characters of a word and its surrounding context Such embeddings are then fed into the input layer of LSTM In this section we evaluate the hypersphere model based on the three models introduced above open monolingual NE modeling embedding distribution mapping and refinement NE recognition In this experiment we adopt pretrained word embeddings from Wikipedia corpus Our preliminary experiments will be conducted on English and Chinese For the former we use NLTK toolkit and LANGID toolkit to perform the preprocessing For the latter we first use OpenCC to simplify characters and then use THULAC to perform word segmentation In order to make the experimental results more accurate and credible we manually annotate two large enough Chinese and English NE dictionaries for training and test Table 2 lists the statistics of Wikipedia corpus and the annotated data Our dictionary contains many multiword NEs in LOC and ORG types as accounted in the second column for each language in Table 2 while we only include singleword PER NEs in our dictionary since the English first name and last name are separated and 
Chinese word segmentation cuts most of the PER entities together We pretrain quality multiword and singleword embeddings and aim to maximize the coverage of the NEs in the dictionary The pretrained word embeddings cover 823 8251 of LOC NEs and 702 6361 of ORG NEs in English and Chinese respectively For other multiword NEs we simply calculate the average vector of each word embedding as their representations The NE distribution is closely correlated to the dimension of the embedding space we train the word embeddings from 2 D to 300 D and search for the most suitable dimension for each NE type For each dimension we carefully tune the center and radius of the hypersphere using the method introduced in section 31 for maximize F1 score and select the dimension with maximize F1 score The most suitable dimension for ORG PER LOC are 16 D 16 D 24 D these dimensions will be used as parameters in the following experiments respectively We discover that in lowdimensional space the distributions of NEs are better In high dimensions the curse of dimension could be the main reason to limit the performance Table 3 lists the final maximum F1 score of three NE types The results of the three types of NE are almost 50 and PER type performs best The main factor may be that PER NEs are represented as singleword in our dictionary and word embeddings can better represents their meanings The result also states that better representations for multiword NEs which are not covered by the dictionary instead of the average of each word may help bring better results Besides the incompleteness of NE dictionaries and noises during preprocessing may cause a decrease on the performance Overall hypersphere model has shown been effectively used as the open modeling for NEs The following preparations were made for the mapping i A large enough NE dictionary in source resourcerich corpus ii A small amount of annotated seed pairs We use s to represent the number of seed pairs and d to represent the number of unknown variables With seed pair size s d the matrix can be solved with much loose constraints and F1 score remarkably increases with more seed pairs Once s d the linear equation group will be always determined by strong enough constraints which leads to a stable solution Based on the characteristics we only take two dozen of seed pairs on each type in following experiments We combine human translation and online translation together for double verification for this small set of seed pairs In this part we utilize English and Chinese as the corpus of known NEs in turn and predict the NE distribution of the other language Evaluation In order to quantitatively represent the mapping effect we present a new evaluation method to judge the hypersphere mapping between English and Chinese beginaligned
P = \frac{V_i}{V_m}, \quad R = \frac{V_i}{V_t}, \quad F1 = \frac{2 P R}{P + R}
endaligned Eq 29 where Vt Vm Vi represent the volumes of the target mapping and intersection hyperspheres Due to the difficulty of calculating the volume of hyperspheres in high dimensions we adopt Monte Carlo methods to simulate the volume BIBREF25 we generate a great quantity of points in the embedding spaces and take the amount of the points falling in each hypersphere as its volume Mapping between English and Chinese Table 4 shows the comparisons of crosslingual named entity extraction performance We use the unsupervised method proposed in BIBREF26 to generate crosslingual embeddings k NN and SVM are the same as monolingual cases in Table 3 except for the training set k NN 150 and SVM 150 use 20 of the NEs in source language and 150 NEs 50 LOC PER and ORG in target language for training while k NN 2500 and SVM 2500 use 20 of the NEs in source language and 2500 NEs 1000 LOC and PER 500 ORG in target language k NN and SVM depend much on the annotated training set requiring more than 1K training samples to provide a performance as our model offers Due to the instability of ORG type in length taking the average of each word embedding may disobey the syntactic and semantic regularities of ORG NEs thereby undermines the multilingual isomorphism characteristics which causes the inferior performance of our model on this type of NEs This suggests that build better representations NEs for multiword NEs may contribute to a better performance in our model Mapping to truly Lowresource Language We build named entity dataset for a truly resourcepoor language Indonesian and manually examine the nearest words to the hypersphere center for goldstandard evaluation We take English as the source language the settings of the dimension D and the number of seed pairs s are the same as the above experiments between Chinese and English From the results listed in Table 5 we can see that even the precision of the top100 NEs are 0350 F1 0440 F1 0310 F1 respectively which proves the this distribution can indeed serves as a candidate NE dictionary for Indonesian 9 The authors of BIBREF24 publish an updated results 9298 on CoNLL2003 dataset in httpsgithubcomzalandoresearchflairissues206 on their 032 version and this is the best result at our most try 10 This is the reported stateoftheart result in their github 11We use the same parameters as the authors release in httpsgithubcomzalandoresearchflairissues173 and obtain the result of 8945 on ONTONOTES 50 dataset To evaluate the influence of our hypersphere feature for offtheshelf NER systems we perform the NE recognition on two standard NER benchmark datasets CoNLL2003 and ONTONOTES 50 Our results in Table 6 and Table 7 demonstrate the power of hypersphere features which contribute to nearly all of the three types of entities as shown in Table 6 except for a slight drop in the PER type of BIBREF22 on a strong baseline HS features stably enhance all strong stateoftheart baselines BIBREF22 BIBREF21 and BIBREF23 by 033072023 F1 point and 0130301 F1 point on both benchmark datasets CoNLL2003 and ONTONOTES 50 We show that our HS feature is also comparable with previous much more complicated LS feature and our model surpasses their baseline without LS feature by 058078 F1 point with only HS features We establish a new stateoftheart F1 score of 8975 on ONTONOTES 50 while matching stateoftheart performance with a F1 score of 9295 on CoNLL2003 dataset In recent years word embeddings have also been used as a feature to enhance the NE recognition with the revealing of linguistic 
features in morphological syntactic and semantic perspective BIBREF1 clustered the word embeddings and combined multiple cluster granularities to improve the NE recognition performance Our work likewise use word embeddings to help NE recognition we make use of the characteristic that syntactically and semantically s are more likely to be neighbors in embedding spaces and construct a hypersphere model to encompass NEs Crosslingual knowledge transfer is a highly promising work for resourcepoor languages annotation projection and representation projection are widely used in NE recognition BIBREF27 BIBREF5 BIBREF4 BIBREF28 BIBREF29 BIBREF30 These works put forward inconvenient requirements for parallel or comparable corpora a large amount of annotated or translation data or bilingual lexicon Different from any existing work to the best of our knowledge this is the first work that merely uses isomorphic mappings in lowdimensional embedding spaces to recognize NEs and we introduce a mathematically simple model to describe NE embedding distribution from visualization results showing it works for both monolingual and crosslingual situations Named entities being an open set which keeps expanding are difficult to represent through a closed NE dictionary This work mitigates significant defects in previous closed NE definitions and proposes a new open definition for NEs by modeling their embedding distributions with least parameters We visualize NE distributions in monolingual case and perform an effective isomorphism spaces mapping in crosslingual case According to our work we demonstrate that common named entity types PER LOC ORG tend to be densely distributed in a hypersphere and it is possible to build a mapping between the NE distributions in embedding spaces to help crosslingual NE recognition Experimental results show that the distribution of named entities via mapping can be used as a good enough replacement for the original distribution Then the discovery is used to build an NE dictionary for Indonesian being a truly lowresource language which also gives satisfactory precision Finally our simple hypersphere features being the representation of NE likelihood can be used for enhancing offtheshelf NER systems by concatenating with word embeddings and the output of BiLSTM in the input layer and encode layer respectively and we achieve a new stateoftheart F1 score of 8975 on ONTONOTES 50 benchmark In this work we also give a better solution for unregistered NEs For any newly emerged NE together with its embedding in case we obtain the hypersphere of each named entity the corresponding named entity category can be determined by calculating the distance between its word embedding and the center of each hypersphere </s>
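A minimal sketch of the hypersphere decoder described above: fit a center as the mean of known NE embeddings, choose the radius that maximizes F1 against non-NE words, classify any word by its Euclidean distance to each center, and expose the three distances as the HS feature vector. The function names and the simple linear radius search are illustrative assumptions; the paper's actual tuning (re-centering after discarding distant NEs with a threshold q) is more involved.

```python
import numpy as np

def fit_hypersphere(ne_vecs, neg_vecs, n_radii=50):
    """Fit a hypersphere (center, radius) around known NE embeddings.
    The radius is chosen to maximize F1 against a sample of non-NE embeddings."""
    center = ne_vecs.mean(axis=0)
    pos_d = np.linalg.norm(ne_vecs - center, axis=1)
    neg_d = np.linalg.norm(neg_vecs - center, axis=1)
    best_r, best_f1 = 0.0, -1.0
    for r in np.linspace(pos_d.min(), pos_d.max(), n_radii):
        tp = float((pos_d <= r).sum())
        fp = float((neg_d <= r).sum())
        fn = float((pos_d > r).sum())
        prec = tp / (tp + fp) if tp + fp > 0 else 0.0
        rec = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
        if f1 > best_f1:
            best_r, best_f1 = r, f1
    return center, best_r

def hs_feature(word_vec, spheres):
    """HS vector: Euclidean distance of a word embedding to each NE-type center."""
    return np.array([np.linalg.norm(word_vec - c) for c, _ in spheres])

def classify(word_vec, spheres, types=("PER", "LOC", "ORG")):
    """Assign the NE type whose hypersphere contains the word, if any."""
    for (c, r), t in zip(spheres, types):
        if np.linalg.norm(word_vec - c) <= r:
            return t
    return "O"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy 16-D embeddings: three NE clusters plus background vocabulary
    per = rng.normal(0.0, 0.3, (200, 16)) + 2.0
    loc = rng.normal(0.0, 0.3, (200, 16)) - 2.0
    org = rng.normal(0.0, 0.5, (200, 16))
    bg = rng.normal(0.0, 3.0, (2000, 16))
    spheres = [fit_hypersphere(x, bg) for x in (per, loc, org)]
    print(classify(per[0], spheres), hs_feature(per[0], spheres))
```

The same routine covers unregistered NEs, since membership depends only on the distance to each center, and the HS vector can be concatenated with word embeddings or LSTM outputs exactly as described above.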
<s>[INST] Efficient Twitter Sentiment Classification using Subjective Distant Supervision [/INST] A lot of work has been done in the field of Twitter sentiment analysis till date Sentiment analysis has been handled as a Natural Language Processing task at many levels of granularity Most of these techniques use Machine Learning algorithms with features such as unigrams ngrams PartOfSpeech POS tags However the training datasets are often very large and hence with such a large number of features this process requires a lot of computation power and time The following question arises What to do if we do not have resources that provide such a great amount of computation power The existing solution to this problem is to use a smaller sample of the dataset For sentiment analysis if we train the model using a smaller randomly chosen sample then we get low accuracy 16 17 In this paper we propose a novel technique to sample tweets for building a sentiment classification model so that we get higher accuracy than the stateoftheart baseline method namely Distant Supervision using a smaller set of tweets Our model has lower computation time and higher accuracy compared to baseline model Users often express sentiment using subjective expression Although objective expressions can also have sentiment it is much rare Determining subjectivity is quite efficient compared to determining sentiment Subjectivity can be determined for individual tweets But to do sentiment classification we need to build a classification model with positive and negative sentiment tweets The time to train a sentiment classification model increases with the increase in the number of training tweets In this paper we use tweet subjectivity to select the best training tweets This not only lowers the computation time but also increases the accuracy because we have training data with less noise Even the created features will be more relevant to the classification task The computation cost will reduce due to small training data size and better set of features Thus if users do not have enough computational resources they can filter the training dataset using a high value of subjectivf ity threshold This ensures reliable prediction on a smaller training dataset and eventually requires less computational time The above approach and some of the intricacies that invariably seep in need to be considered and are described in the later sections of the paper In this paper we also integrate a lot of meticulous preprocessing steps This makes our model more robust and hence leads to higher accuracy Along with the machine learning algorithms being used we use a heuristicbased classification of tweets This is based on the EFWS of a tweet which is described in later sections This heuristic basically takes into account the polarity scores of frequently used words in tweets and is able to achieve around 85 accuracy on our dataset hence boosting the overall accuracy by a considerable amount Our training data consists of generic not topicspecific Twitter messages with emoticons which are used as noisy labels We show that the accuracy obtained on a training dataset comprising 100K tweets and a test dataset of 5000 tweets gives an accuracy of around 80 on the following classifiers Naive Bayes RBFkernel Support Vector Machine and Logistic Regression Our model takes roughly half the time to train and achieves higher accuracy than the baseline model on all the classifiers Because the amount of training time is expected to increase exponentially as the training 
data increases we expect our model to outperform in terms of higher accuracy the baseline model at a speed which is at least twofold the speed of the baseline model on larger datasets There has been a large amount of prior research in sentiment analysis of tweets Read 10 shows that using emoticons as labels for positive and sentiment is effective for reducing dependencies in machine learning techniques Alec Go 1 used Naive Bayes SVM and MaxEnt classifiers to train their model This as mentioned earlier is our baseline model Our model builds on this and achieves higher accuracy on a much smaller training dataset Ayushi Dalmia 6 proposed a model with a more involved preprocessing stage and used features like scores from Bing Lius Opinion Lexicon and number of positive negative POS tags This model achieved considerably high accuracies considering the fact that their features were the not the conventional bagofwords or any ngrams The thought of using the polarity scores of frequently used tweet words as described in our EFWS heuristic was inspired from this work 14 created prior probabilities using the datasets for the average sentiment of tweets in different spatial temporal and authorial contexts They then used a Bayesian approach to combine these priors with standard bigram language models Another significant effort in sentiment analysis on Twitter data is by Barbosa 16 They use polarity predictions from three websites as noisy labels to train a model and use 1000 manually labelled tweets for tuning and another 1000 for testing They propose the use of syntax features of tweets like punctuation retweet hashtags link and exclamation marks in addition with features like prior polarity of words and POS of words Some works leveraged the use of existing hashtags in the Twitter data for building the training data Davidov Tsur and Rappoport 2010 also use hashtags for creating training data but they limit their experiments to sentimentnonsentiment classification rather than 3way polarity classification as 15 does Our model integrates some of the preprocessing techniques this work used Hassan Saif 9 introduced a novel approach of adding semantics as additional features into the training set for sentiment analysis This approach works well for topic specific data Hence we thought of taking a different approach for a generic tweet dataset like ours Subjectivity refers to how someones judgment is shaped by personal opinions and feelings instead of outside influences An objective perspective is one that is not influenced by emotions opinions or personal feelings it is a perspective based in fact in things quantifiable and measurable A subjective perspective is one open to greater interpretation based on personal feeling emotion aesthetics etc Subjectivity classification is another topic in the domain of text classification which is garnering more and more interest in the field of sentiment analysis Since a single sentence may contain multiple opinions and subjective and factual clauses this problem is not as straightforward as it seems Below are some examples of subjective and objective sentences Objective sentence with no sentiment So the Earth revolves around the Sun Objective sentence with sentiment The drug relieved my pain Subjective sentence with no sentiment I believe he went home yesterday Subjective sentence with sentiment I am so happy you got the scholarship Classifying a sentence as subjective or objective provides certain conclusions Purely objective sentences do not usually convey any 
sentiment while most of the purely subjective sentences have a clear inclination towards either the positive or negative sentiment Sentences which are not completely subjective or objective may or may not convey a sentiment Libraries like TextBlob and tools like Opinion Finder can be used to find the extent to which a sentence can be considered subjective Since tweets are usually personspecific or subjective we use this intuition to reduce the size of the training set by filtering the sentences with a subjectivity level below a certain threshold fairly objective tweets In this section we explain the various preprocessing techniques used for feature reduction and also the additional step of filtering the training dataset using the subjectivity score of tweets We further describe our approach of using different machine learning classifiers and feature extractors We also propose an additional heuristic for sentiment classification which can be used as a tagalong with the learning heuristics Our training dataset has 16 million tweets and 5000 tweets in the test dataset Since the test dataset provided comprised only 500 tweets we have taken part of the training data exactly 5000 tweets distinct from the training dataset as the test dataset We remove emoticons from our training and test data The table below shows some sample tweets This is a new step we propose to achieve higher accuracy on a smaller training dataset We use TextBlob to classify each tweet as subjective or objective We then remove all tweets which have a subjectivity levelscore score lies between 0 and 1 below a specified threshold The remaining tweets are used for training purposes We observe that a considerable number of tweets are removed as the subjectivity threshold increases We show the effect of doing this procedure on the overall accuracy in the evaluation section of the paper The Twitter language model has many unique properties We take advantage of the following properties to reduce the feature space Most of the preprocessing steps are common to most of the previous works in the field However we have added some more steps to this stage of our model We first strip off the emoticons from the data Users often include twitter usernames in their tweets in order to direct their messages We also strip off usernames eg Chinmay and URLs present in tweets because they do not help us in sentiment classification Apart from full stops which are dealt in the next point other punctuations and special symbols are also removed Repeated whitespaces are replaced with a single space We also perform stemming to reduce the size of the feature space In the previous works full stops are just usually replaced by a space However we have observed that casual language in tweets is often seen in form of repeated punctuations For example this is so coolwow We take into consideration this format and replace two or more occurrences of and with a space Also full stops are also quite different in usage Sometimes there isnt any space in between sentences For example Its rainingFeeling awesome We replace a single occurrence of a full stop with a space to ensure correct feature incorporation In the case of hashtags most of the previous works just consider the case of hashtags followed by a single word they just remove the hashtag and add the word to the feature vector However sometimes there are multiple words after a hashtag and more often than not these words form an important conclusive part of the Tweet For example ThisSucks or BestMomentEver These 
hashtags need to be dealt with in a proper fashion We split the text after hashtags after before each capital letter and add these as tokens to the feature vector For hashtags followed by a single word we just replace the pattern word with the word as conventional models do The intuition behind this step is that quite often the sentiment of a tweet is expressed in form of a hashtag For example happy or disappointed are frequently used hashtags and we dont want to lose this information during sentiment classification Tweets contain very casual language as mentioned earlier For example if we search wow with an arbitrary number of os in the middle eg wooow woooow on Twitter there will most likely be a nonempty result set We use preprocessing so that any letter occurring more than two times in a row is replaced with two occurrences In the samples above these words would be converted into the token woow After all the above modifications tweets are converted into lowercase to avoid confusion between features having same content but are different in capitalization We gather a list of 400 stopwords These words if present in the tweets are not considered in the feature vector We store an acronym dictionary which has over 5000 frequentlyused acronyms and their abbreviations We replace such acronyms in tweets with their abbreviation since these can be of great use while sentiment classification All negative words like cannot cant wont dont are replaced by not which effectively keeps the sentiment stable It is observed that doing this makes the training faster since the model has to deal with a smaller feature vector The baseline model for our experiments is explained in the paper by Alec Go 1 The model uses the Naive Bayes SVM and the Maximum Entropy classifiers for their experiment Their feature vector is either composed of Unigrams Bigrams Unigrams Bigrams or Unigrams POS tags This work achieved the following maximum accuracies a 822 for the Unigram feature vector using the SVM classifier b 830 for the Unigram Bigram feature vector using the MaxEnt classifier and 827 using the Naive Bayes classifier c 819 for the Unigram POS feature vector using the SVM classifier These baseline accuracies were on a training dataset of 16 million tweets and a test dataset of 500 tweets We are using the same training dataset for our experiments We later present the baseline accuracies on a training set of 200K tweets and a test dataset of 5000 tweets we compare our models accuracy with these baseline accuracy values on the same test data of 5000 tweets We have described our baseline model above So the feature vectors we collate results for are Unigram Unigram Bigram and Unigram POS We have already made two major changes before the training starts on our dataset as compared to our baseline model Firstly our training dataset will be filtered according to the subjectivity threshold And secondly our preprocessing is much more robust as compared to their work Now let us look at an additional heuristic we use to obtain labels for our test data Along with dictionaries for stop words and acronyms we also maintain a dictionary of a list of frequently used words and their polarity scores This dictionary has around 2500 words and their polarity score ranging from 5 to 5 At runtime we also use all synonyms of a word from WordNet present in a tweet and also the dictionary and assign them the same score as the dictionary word There is a reasonable assumption here that the synonyms arent very extremal in nature that is a word with 
Now we calculate the Effective Word Scores of a tweet. We define the Effective Word Score for a score magnitude $x$ as $\mathrm{EFWS}(x) = N(+x) - N(-x)$, where $N(x)$ is the number of words in the tweet with polarity score $x$. For example, if a tweet has one word with score +5, three words with score +4, two with score +2, two with score +1, three with score -2, one with score -3, and two with score -4, then the effective word scores are EFWS(5) = N(5) - N(-5) = 1 - 0 = 1, EFWS(4) = N(4) - N(-4) = 3 - 2 = 1, EFWS(3) = N(3) - N(-3) = 0 - 1 = -1, EFWS(2) = N(2) - N(-2) = 2 - 3 = -1, and EFWS(1) = N(1) - N(-1) = 2 - 0 = 2.
We now define the heuristic for obtaining the label of a tweet: if (EFWS(5) >= 1 or EFWS(4) >= 1) and EFWS(2) >= 1, label the tweet positive; similarly, if (EFWS(5) <= -1 or EFWS(4) <= -1) and EFWS(2) <= -1, label it negative. The basic intuition behind this heuristic is that tweets having one more strongly positive and one more moderately positive word than strongly negative and moderately negative words, respectively, usually convey a positive sentiment, and likewise for negative sentiment. Tweets that receive a label from this heuristic are not passed on to the classifiers at test time. After a considerable amount of experimentation and analysis of our dataset, which is not domain specific, we concluded that the heuristic above is optimal for obtaining labels. We found that the heuristic accuracy was around 85% for a training dataset of 100K and a test dataset of 5K, where the total number of test tweets labelled by the heuristic was around 500; that is, around 425 of the 500 tweets received a correct sentiment prediction from the heuristic. Thus, the heuristic improves overall accuracy and also saves time by reducing the number of tweets that must be classified by the ML algorithms.
We use the following classifiers for our model. Naive Bayes is a simple model that works well for text categorization. Class $c^*$ is assigned to tweet $d$, where $c^* = \operatorname{argmax}_c P_{\mathrm{NB}}(c \mid d)$ and $P_{\mathrm{NB}}(c \mid d) = \frac{P(c)\prod_{i=1}^{m} P(f_i \mid c)^{n_i(d)}}{P(d)}$ is calculated using Bayes' rule. In this formula, $f_i$ represents a feature and $n_i(d)$ the count of feature $f_i$ in tweet $d$; there are $m$ features in total. The parameters $P(c)$ and $P(f_i \mid c)$ are obtained through maximum likelihood estimates. Support vector machines are based on the Structural Risk Minimization principle from computational learning theory; SVM classification for binary problems is based on finding a separating hyperplane between the classes of data. One remarkable property of SVMs is that their ability to learn can be independent of the dimensionality of the feature space, so they can generalize even in the presence of many features, as is the case in text classification. We use a nonlinear support vector machine with an RBF kernel. The Maximum Entropy model belongs to the family of discriminative classifiers, also known as exponential or log-linear classifiers. The Naive Bayes classifier uses Bayes' rule to estimate the best class $y$ indirectly from the likelihood $P(d \mid y)$ and the prior $P(y)$, whereas a discriminative model computes $P(y \mid d)$ directly, discriminating among the possible values of the class $y$ rather than first computing a likelihood. Logistic regression estimates $P(y \mid d)$ by combining the features linearly (multiplying each feature by a weight and summing them) and then applying a logistic function to this combination.
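A small sketch of the EFWS heuristic follows. The polarity dictionary here is an illustrative placeholder (the one described above has around 2500 entries), WordNet synonym expansion is omitted, and the grouping of the rule as (EFWS(5) or EFWS(4)) combined with EFWS(2) reflects our reading of the heuristic above.

```python
from collections import Counter

# Illustrative polarity dictionary; the real one has ~2500 entries in [-5, +5].
POLARITY = {"love": 4, "awesome": 5, "good": 2, "nice": 2,
            "bad": -2, "terrible": -4, "hate": -5}

def effective_word_scores(tokens):
    """EFWS(x) = N(+x) - N(-x) for each magnitude x in 1..5."""
    counts = Counter(POLARITY[t] for t in tokens if t in POLARITY)
    return {x: counts[x] - counts[-x] for x in range(1, 6)}

def heuristic_label(tokens):
    """Return 'positive'/'negative' if the EFWS rule fires, else None (defer to the classifier)."""
    efws = effective_word_scores(tokens)
    if (efws[5] >= 1 or efws[4] >= 1) and efws[2] >= 1:
        return "positive"
    if (efws[5] <= -1 or efws[4] <= -1) and efws[2] <= -1:
        return "negative"
    return None

print(heuristic_label("love this good movie".split()))   # positive
print(heuristic_label("terrible bad plot".split()))      # negative
print(heuristic_label("it was ok".split()))              # None
```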
In this section we present the collated results of our experiments. To show that our model achieves higher accuracy than the baseline model, and on a smaller training dataset, we first fix the test dataset. Our test dataset, as mentioned before, consists of 5000 tweets. We conducted our experiments on an Intel Core i5 machine (4 cores) with 8 GB RAM. The following are the accuracies of the baseline model on a training set of 200K tweets. We filtered the training set with a subjectivity threshold of 0.5; this reduced the number of tweets to approximately 0.6 million from the original total of 1.6 million. We then trained our model, described in the earlier sections, on 100K tweets randomly picked from this filtered training dataset and observed the following accuracies. Note that all the accuracies in the tables above are reported as the average of 3 iterations of the experiment. We achieve higher accuracy for all feature vectors on all classifiers, using a training dataset half the size of the baseline one.
We now examine the subjectivity threshold parameter. Naturally, more tweets are filtered out as the subjectivity threshold increases; this can be seen in Figure 1. We plot the number of tweets that remain after filtering with two tools: TextBlob and Opinion Finder. TextBlob has a built-in function that returns the subjectivity level of a tweet. Opinion Finder, on the other hand, only indicates which parts of the text are subjective and which are objective; from that, we define the subjectivity level of a text as the fraction of it that is marked subjective.
[Figure 1: Number of tweets with subjectivity greater than the subjectivity threshold. x-axis: subjectivity threshold (0 to 1); y-axis: tweets in millions; one curve each for TextBlob and Opinion Finder.]
[Figure 2: Variation of accuracy with the subjectivity threshold (training data of 100K, test data of 5K). TextBlob is used to filter the tweets that form the training dataset.]
We now focus on choosing the optimum threshold value. As the subjectivity threshold increases, our model trains on tweets with a higher subjectivity level and the overall accuracy increases. We observed the following accuracies at subjectivity level 0.8 with unigrams as features: Naive Bayes 80.32%, nonlinear SVM 80.15%, Logistic Regression 81.77%. We should keep in mind that many useful tweets are also lost as the parameter is raised, which could become a problem when the test data is very large, because the model will not have been trained on a generic dataset. Researchers may use a higher subjectivity threshold if they are confident that most of the important information will be retained; this is most likely to be the case for topic-specific or domain-specific data.
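As a rough illustration of the TextBlob-based filtering step above, the snippet below keeps only tweets whose subjectivity exceeds a chosen threshold. It is a minimal sketch assuming TextBlob's standard sentiment API; the example tweets are placeholders.

```python
from textblob import TextBlob

def filter_by_subjectivity(tweets, threshold=0.5):
    """Keep only tweets whose TextBlob subjectivity score exceeds the threshold."""
    return [t for t in tweets if TextBlob(t).sentiment.subjectivity > threshold]

tweets = [
    "I absolutely love this phone, best purchase ever!",   # highly subjective
    "The meeting starts at 10 am in room 4.",              # mostly objective
]
print(filter_by_subjectivity(tweets, threshold=0.5))
```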
[Figure 3: Comparison of training times for Unigrams. y-axis: training time in minutes; groups for the baseline, the subjectivity-0.5 filtered model, and the subjectivity-0.8 filtered model; bars for Logistic Regression, Naive Bayes, and SVM.]
[Figure 4: Comparison of training times for Unigrams + Bigrams, with the same layout as Figure 3.]
We use logistic regression for classification with unigrams as the feature vector and K-fold cross-validation for determining the accuracy. We choose a threshold value of 0.5 for our experiment, considering that the model should train on a reasonably generic dataset. Figure 2 shows the variation of accuracy with the subjectivity threshold; the training size is fixed at 100K and the test dataset (5K tweets) is the same for all experiments. We also measure the time taken to train our model and compare it to the baseline model. We observe that our model takes roughly half the time in some cases and still obtains higher accuracy. Figures 3 and 4 show the difference in training time between the baseline model, our model on a 0.5 subjectivity-filtered dataset, and our model on a 0.8 subjectivity-filtered dataset, for unigrams and unigrams + bigrams respectively. The times were recorded on a training dataset of 100K for our model and 200K for the baseline model, with a fixed test dataset of 5K in all recordings. The key takeaway from the plots is that our model is considerably faster, with a twofold speed-up in some cases, while also achieving higher accuracy. This can be attributed to the fact that as the subjectivity threshold increases, only tweets with highly polar words are retained in the training set, which makes the whole process faster.
We show that higher accuracy can be obtained in sentiment classification of Twitter messages while training on a smaller dataset and with a much faster computation time, so the constraint on computation power is resolved to a certain extent. This is achieved by using a subjectivity threshold to selectively filter the training data, incorporating a more robust preprocessing stage, and using an additional heuristic for sentiment classification alongside conventional machine learning techniques. As Twitter data is abundant, our subjectivity filtering process can yield a better generalised model for sentiment classification. </s>
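For the evaluation setup described above (logistic regression over unigram counts with K-fold cross-validation), a scikit-learn sketch might look as follows. The toy texts, labels, and fold count are placeholders rather than the actual filtered tweet data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data; in the experiments above these would be the
# subjectivity-filtered training tweets and their labels.
texts = ["love this phone", "worst service ever", "what a great day",
         "really bad movie", "so happy today", "awful traffic again"]
labels = [1, 0, 1, 0, 1, 0]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),                  # unigram features
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, texts, labels, cv=3)       # K-fold cross-validation (K=3 here)
print(scores.mean())
```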
<s>[INST] Dynamic Memory Networks for Visual and Textual Question Answering [/INST] Neural network based methods have made tremendous progress in image and text classification BIBREF0 BIBREF1 However only recently has progress been made on more complex tasks that require logical reasoning This success is based in part on the addition of memory and attention components to complex neural networks For instance memory networks BIBREF2 are able to reason over several facts written in natural language or subject relation object triplets Attention mechanisms have been successful components in both machine translation BIBREF3 BIBREF4 and image captioning models BIBREF5 The dynamic memory network BIBREF6 DMN is one example of a neural network model that has both a memory component and an attention mechanism The DMN yields state of the art results on question answering with supporting facts marked during training sentiment analysis and partofspeech tagging We analyze the DMN components specifically the input module and memory module to improve question answering We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences For the memory we propose a modification to gated recurrent units GRU BIBREF7 The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts Unlike before the new DMN model does not require that supporting facts ie the facts that are relevant for answering a particular question are labeled during training The model learns to select the important facts from a larger set In addition we introduce a new input module to represent images This module is compatible with the rest of the DMN architecture and its output is fed into the memory module We show that the changes in the memory module that improved textual question answering also improve visual question answering Both tasks are illustrated in Fig 1 We begin by outlining the DMN for question answering and the modules as presented in BIBREF6 The DMN is a general architecture for question answering QA It is composed of modules that allow different aspects such as input representations or memory components to be analyzed and improved independently The modules depicted in Fig 1 are as follows Input Module This module processes the input data about which a question is being asked into a set of vectors termed facts represented as Ff1hdots fN where N is the total number of facts These vectors are ordered resulting in additional information that can be used by later components For text QA in BIBREF6 the module consists of a GRU over the input words As the GRU is used in many components of the DMN it is useful to provide the full definition For each time step i with input xi and previous hidden state hi1 we compute the updated hidden state hi GRUxihi1 by ui sigma leftWuxi Uu hi1 bu right
r_i = \sigma\left(W^{(r)} x_i + U^{(r)} h_{i-1} + b^{(r)}\right)
\tilde{h}_i = \tanh\left(W x_i + r_i \circ U h_{i-1} + b^{(h)}\right)
hi uicirc tildehi 1ui circ hi1 Eq 2 where sigma is the sigmoid activation function circ is an elementwise product Wz Wr W in mathbb RnH times nI Uz Ur U in mathbb RnH times nH nH is the hidden size and nI is the input size Question Module This module computes a vector representation q of the question where q in mathbb RnH is the final hidden state of a GRU over the words in the question Episodic Memory Module Episode memory aims to retrieve the information required to answer the question q from the input facts To improve our understanding of both the question and input especially if questions require transitive reasoning the episode memory module may pass over the input multiple times updating episode memory after each pass We refer to the episode memory on the tth pass over the inputs as mt where mt in mathbb RnH the initial memory vector is set to the question vector m0 q The episodic memory module consists of two separate components the attention mechanism and the memory update mechanism The attention mechanism is responsible for producing a contextual vector ct where ct in mathbb RnH is a summary of relevant input for pass t with relevance inferred by the question q and previous episode memory mt1 The memory update mechanism is responsible for generating the episode memory mt based upon the contextual vector ct and previous episode memory mt1 By the final pass T the episodic memory mT should contain all the information required to answer the question ct in mathbb RnH0 Answer Module The answer module receives both q and mT to generate the models predicted answer For simple answers such as a single word a linear layer with softmax activation may be used For tasks requiring a sequence output an RNN may be used to decode a q mT the concatenation of vectors q and mT to an ordered set of tokens The cross entropy error on the answers is used for training and backpropagated through the entire network We propose and compare several modeling choices for two crucial components input representation attention mechanism and memory update The final DMN model obtains the highest accuracy on the bAbI10k dataset without supporting facts and the VQA dataset BIBREF8 Several design choices are motivated by intuition and accuracy improvements on that dataset In the DMN specified in BIBREF6 a single GRU is used to process all the words in the story extracting sentence representations by storing the hidden states produced at the end of sentence markers The GRU also provides a temporal component by allowing a sentence to know the content of the sentences that came before them Whilst this input module worked well for bAbI1k with supporting facts as reported in BIBREF6 it did not perform well on bAbI10k without supporting facts Sec Model Analysis We speculate that there are two main reasons for this performance disparity all exacerbated by the removal of supporting facts First the GRU only allows sentences to have context from sentences before them but not after them This prevents information propagation from future sentences Second the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU Input Fusion Layer For the DMN we propose replacing this single GRU with two different components The first component is a sentence reader responsible only for encoding the words into a sentence embedding The second component is the input fusion layer allowing for interactions between sentences This resembles the hierarchical neural 
autoencoder architecture of BIBREF9 and allows content interaction between sentences We adopt the bidirectional GRU for this input fusion layer because it allows information from both past and future sentences to be used As gradients do not need to propagate through the words between sentences the fusion layer also allows for distant supporting sentences to have a more direct interaction Fig 2 shows an illustration of an input module where a positional encoder is used for the sentence reader and a bidirectional GRU is adopted for the input fusion layer Each sentence encoding fi is the output of an encoding scheme taking the word tokens wi1 hdots wiMi where Mi is the length of the sentence The sentence reader could be based on any variety of encoding schemes We selected positional encoding described in BIBREF10 to allow for a comparison to their work GRUs and LSTMs were also considered but required more computational resources and were prone to overfitting if auxiliary tasks such as reconstructing the original sentence were not used For the positional encoding scheme the sentence representation is produced by fi sum j1M lj circ wij where circ is elementwise multiplication and lj is a column vector with structure ljd 1 j M d D 1 2j M where d is the embedding index and D is the dimension of the embedding The input fusion layer takes these input facts and enables an information exchange between them by applying a bidirectional GRU overrightarrowfi GRUfwdfi overrightarrowfi1
\overleftarrow{f_i} = \mathrm{GRU}_{\mathrm{bwd}}\left(f_i, \overleftarrow{f}_{i+1}\right)
overleftrightarrowfi overleftarrowfi overrightarrowfi Eq 5 where fi is the input fact at timestep i overrightarrowfi is the hidden state of the forward GRU at timestep i and overleftarrowfi is the hidden state of the backward GRU at timestep i This allows contextual information from both future and past facts to impact overleftrightarrowfi We explored a variety of encoding schemes for the sentence reader including GRUs LSTMs and the positional encoding scheme described in BIBREF10 For simplicity and speed we selected the positional encoding scheme GRUs and LSTMs were also considered but required more computational resources and were prone to overfitting if auxiliary tasks such as reconstructing the original sentence were not used To apply the DMN to visual question answering we introduce a new input module for images The module splits an image into small local regions and considers each region equivalent to a sentence in the input module for text The input module for VQA is composed of three parts illustrated in Fig 3 local region feature extraction visual feature embedding and the input fusion layer introduced in Sec Input Module for Text QA Local region feature extraction To extract features from the image we use a convolutional neural network BIBREF0 based upon the VGG19 model BIBREF11 We first rescale the input image to 448 times 448 and take the output from the last pooling layer which has dimensionality d 512 times 14 times 14 The pooling layer divides the image into a grid of 14 times 14 resulting in 196 local regional vectors of d 512 Visual feature embedding As the VQA task involves both image features and text features we add a linear layer with tanh activation to project the local regional vectors to the textual feature space used by the question vector q Input fusion layer The local regional vectors extracted from above do not yet have global information available to them Without global information their representational power is quite limited with simple issues like object scaling or locational variance causing accuracy problems To solve this we add an input fusion layer similar to that of the textual input module described in Sec Input Module for Text QA First to produce the input facts F we traverse the image in a snake like fashion as seen in Figure 3 We then apply a bidirectional GRU over these input facts F to produce the globally aware input facts overleftrightarrowF The bidirectional GRU allows for information propagation from neighboring image patches capturing spatial information The episodic memory module as depicted in Fig 4 retrieves information from the input facts overleftrightarrowF overleftrightarrowf1 hdots overleftrightarrowfN provided to it by focusing attention on a subset of these facts We implement this attention by associating a single scalar value the attention gate gti with each fact overleftrightarrowfi during pass t This is computed by allowing interactions between the fact and both the question representation and the episode memory state zti overleftrightarrowfi circ q overleftrightarrowfi circ mt1 vert overleftrightarrowfi q vert vert overleftrightarrowfi mt1 vert
Z^t_i = W^{(2)} \tanh\left(W^{(1)} z^t_i + b^{(1)}\right) + b^{(2)}
gti fracexp Ztisum k1Mi exp Ztk Eq 10 where overleftrightarrowfi is the ith fact mt1 is the previous episode memory q is the original question circ is the elementwise product cdot is the elementwise absolute value and represents concatenation of the vectors The DMN implemented in BIBREF6 involved a more complex set of interactions within z containing the additional terms f mt1 q fT Wb q fT Wb mt1 After an initial analysis we found these additional terms were not required Attention Mechanism Once we have the attention gate gti we use an attention mechanism to extract a contextual vector ct based upon the current focus We focus on two types of attention soft attention and a new attention based GRU The latter improves performance and is hence the final modeling choice for the DMN Soft attention Soft attention produces a contextual vector ct through a weighted summation of the sorted list of vectors overleftrightarrowF and corresponding attention gates git ct sum i1N gti overleftrightarrowfi This method has two advantages First it is easy to compute Second if the softmax activation is spiky it can approximate a hard attention function by selecting only a single fact for the contextual vector whilst still being differentiable However the main disadvantage to soft attention is that the summation process loses both positional and ordering information Whilst multiple attention passes can retrieve some of this information this is inefficient Attention based GRU For more complex queries we would like for the attention mechanism to be sensitive to both the position and ordering of the input facts overleftrightarrowF An RNN would be advantageous in this situation except they cannot make use of the attention gate from Equation We propose a modification to the GRU architecture by embedding information from the attention mechanism The update gate ui in Equation 2 decides how much of each dimension of the hidden state to retain and how much should be updated with the transformed input xi from the current timestep As ui is computed using only the current input and the hidden state from previous timesteps it lacks any knowledge from the question or previous episode memory By replacing the update gate ui in the GRU Equation 2 with the output of the attention gate gti Equation in Equation the GRU can now use the attention gate for updating its internal state This change is depicted in Fig 5 hi gti circ tildehi 1gti circ hi1 Eq 12 An important consideration is that gti is a scalar generated using a softmax activation as opposed to the vector ui in mathbb RnH generated using a sigmoid activation This allows us to easily visualize how the attention gates activate over the input later shown for visual QA in Fig 6 Though not explored replacing the softmax activation in Equation with a sigmoid activation would result in gti in mathbb RnH To produce the contextual vector ct used for updating the episodic memory state mt we use the final hidden state of the attention based GRU Episode Memory Updates After each pass through the attention mechanism we wish to update the episode memory mt1 with the newly constructed contextual vector ct producing mt In the DMN a GRU with the initial hidden state set to the question vector q is used for this purpose The episodic memory for pass t is computed by mt GRUct mt1 Eq 13 The work of BIBREF10 suggests that using different weights for each pass through the episodic memory may be advantageous When the model contains only one set of weights for all episodic passes over the input it 
is referred to as a tied model as in the Mem Weights row in Table 1 Following the memory update component used in BIBREF10 and BIBREF12 we experiment with using a ReLU layer for the memory update calculating the new episode memory state by mt ReLUleftWt mt1 ct q bright Eq 14 where is the concatenation operator Wt in mathbb RnH times nH b in mathbb RnH and nH is the hidden size The untying of weights and using this ReLU formulation for the memory update improves accuracy by another 05 as shown in Table 1 in the last column The final output of the memory network is passed to the answer module as in the original DMN The DMN is related to two major lines of recent work memory and attention mechanisms We work on both visual and textual question answering which have until now been developed in separate communities Neural Memory Models The earliest recent work with a memory component that is applied to language processing is that of memory networks BIBREF2 which adds a memory component for question answering over simple facts They are similar to DMNs in that they also have input scoring attention and response mechanisms However unlike the DMN their input module computes sentence representations independently and hence cannot easily be used for other tasks such as sequence labeling Like the original DMN this memory network requires that supporting facts are labeled during QA training Endtoend memory networks BIBREF10 do not have this limitation In contrast to previous memory models with a variety of different functions for memory attention retrieval and representations DMNs BIBREF6 have shown that neural sequence models can be used for input representation attention and response mechanisms Sequence models naturally capture position and temporality of both the inputs and transitive reasoning steps Neural Attention Mechanisms Attention mechanisms allow neural network models to use a question to selectively pay attention to specific inputs They can benefit image classification BIBREF13 generating captions for images BIBREF5 among others mentioned below and machine translation BIBREF14 BIBREF3 BIBREF4 Other recent neural architectures with memory or attention which have proposed include neural Turing machines BIBREF15 neural GPUs BIBREF16 and stackaugmented RNNs BIBREF17 Question Answering in NLP Question answering involving natural language can be solved in a variety of ways to which we cannot all do justice If the potential input is a large text corpus QA becomes a combination of information retrieval and extraction BIBREF18 Neural approaches can include reasoning over knowledge bases BIBREF19 BIBREF20 or directly via sentences for trivia competitions BIBREF21 Visual Question Answering VQA In comparison to QA in NLP VQA is still a relatively young task that is feasible only now that objects can be identified with high accuracy The first large scale database with unconstrained questions about images was introduced by BIBREF8 While VQA datasets existed before they did not include openended freeform questions about general images BIBREF22 Others are were too small to be viable for a deep learning approach BIBREF23 The only VQA model which also has an attention component is the stacked attention network BIBREF24 Their work also uses CNN based features However unlike our input fusion layer they use a single layer neural network to map the features of each patch to the dimensionality of the question vector Hence the model cannot easily incorporate adjacency of local information in its hidden state A model 
that also uses neural modules albeit logically inspired ones is that by BIBREF25 who evaluate on knowledgebase reasoning and visual question answering We compare directly to their method on the latter task and dataset Related to visual question answering is the task of describing images with sentences BIBREF26 BIBREF27 used deep learning methods to map images and sentences into the same space in order to describe images with sentences and to find images that best visualize a sentence This was the first work to map both modalities into a joint space with deep learning methods but it could only select an existing sentence to describe an image Shortly thereafter recurrent neural networks were used to generate often novel sentences based on images BIBREF28 BIBREF29 BIBREF30 BIBREF5 To analyze our proposed model changes and compare our performance with other architectures we use three datasets For evaluating the DMN on textual question answering we use bAbI10k English BIBREF31 a synthetic dataset which features 20 different tasks Each example is composed of a set of facts a question the answer and the supporting facts that lead to the answer The dataset comes in two sizes referring to the number of training examples each task has bAbI1k and bAbI10k The experiments in BIBREF10 found that their lowest error rates on the smaller bAbI1k dataset were on average three times higher than on bAbI10k The DAtaset for QUestion Answering on Realworld images DAQUAR BIBREF23 consists of 795 training images and 654 test images Based upon these images 6795 training questions and 5673 test questions were generated Following the previously defined experimental method we exclude multiple word answers BIBREF32 BIBREF33 The resulting dataset covers 90 of the original data The evaluation method uses classification accuracy over the single words We use this as a development dataset for model analysis Sec Model Analysis The Visual Question Answering VQA dataset was constructed using the Microsoft COCO dataset BIBREF34 which contained 123287 trainingvalidation images and 81434 test images Each image has several related questions with each question answered by multiple people This dataset contains 248349 training questions 121512 validation questions and 244302 for testing The testing data was split into testdevelopment teststandard and testchallenge in BIBREF8 Evaluation on both teststandard and testchallenge are implemented via a submission system teststandard may only be evaluated 5 times and testchallenge is only evaluated at the end of the competition To the best of our knowledge VQA is the largest and most complex image dataset for the visual question answering task To understand the impact of the proposed module changes we analyze the performance of a variety of DMN models on textual and visual question answering datasets The original DMN ODMN is the architecture presented in BIBREF6 without any modifications DMN2 only replaces the input module with the input fusion layer Sec Input Module for Text QA DMN3 based upon DMN2 replaces the soft attention mechanism with the attention based GRU proposed in Sec The Episodic Memory Module Finally DMN based upon DMN3 is an untied model using a unique set of weights for each pass and a linear layer with a ReLU activation to compute the memory update We report the performance of the model variations in Table 1 A large improvement to accuracy on both the bAbI10k textual and DAQUAR visual datasets results from updating the input module seen when comparing ODMN to DMN2 On both 
datasets the input fusion layer improves interaction between distant facts In the visual dataset this improvement is purely from providing contextual information from neighboring image patches allowing it to handle objects of varying scale or questions with a locality aspect For the textual dataset the improved interaction between sentences likely helps the path finding required for logical reasoning when multiple transitive steps are required The addition of the attention GRU in DMN3 helps answer questions where complex positional or ordering information may be required This change impacts the textual dataset the most as few questions in the visual dataset are likely to require this form of logical reasoning Finally the untied model in the DMN overfits on some tasks compared to DMN3 but on average the error rate decreases From these experimental results we find that the combination of all the proposed model changes results culminating in DMN achieves the highest performance across both the visual and textual datasets We trained our models using the Adam optimizer BIBREF35 with a learning rate of 0001 and batch size of 128 Training runs for up to 256 epochs with early stopping if the validation loss had not improved within the last 20 epochs The model from the epoch with the lowest validation loss was then selected Xavier initialization was used for all weights except for the word embeddings which used random uniform initialization with range sqrt3 sqrt3 Both the embedding and hidden dimensions were of size d 80 We used ell 2 regularization on all weights except bias and used dropout on the initial sentence encodings and the answer module keeping the input with probability p09 The last 10 of the training data on each task was chosen as the validation set For all tasks three passes were used for the episodic memory module allowing direct comparison to other state of the art methods Finally we limited the input to the last 70 sentences for all tasks except QA3 for which we limited input to the last 130 sentences similar to BIBREF10 On some tasks the accuracy was not stable across multiple runs This was particularly problematic on QA3 QA17 and QA18 To solve this we repeated training 10 times using random initializations and evaluated the model that achieved the lowest validation set loss Text QA Results We compare our best performing approach DMN to two state of the art question answering architectures the end to end memory network E2E BIBREF10 and the neural reasoner framework NR BIBREF12 Neither approach use supporting facts for training The endtoend memory network is a form of memory network BIBREF2 tested on both textual question answering and language modeling The model features both explicit memory and a recurrent attention mechanism We select the model from the paper that achieves the lowest mean error over the bAbI10k dataset This model utilizes positional encoding for input RNNstyle tied weights for the episode module and a ReLU nonlinearity for the memory update component The neural reasoner framework is an endtoend trainable model which features a deep architecture for logical reasoning and an interactionpooling mechanism for allowing interaction over multiple facts While the neural reasoner framework was only tested on QA17 and QA19 these were two of the most challenging question types at the time In Table 2 we compare the accuracy of these question answering architectures both as mean error and error on individual tasks The DMN model reduces mean error by 14 compared to the the 
endtoend memory network achieving a new state of the art for the bAbI10k dataset One notable deficiency in our model is that of QA16 Basic Induction In BIBREF10 an untied model using only summation for memory updates was able to achieve a near perfect error rate of 04 When the memory update was replaced with a linear layer with ReLU activation the endtoend memory networks overall mean error decreased but the error for QA16 rose sharply Our model experiences the same difficulties suggesting that the more complex memory update component may prevent convergence on certain simpler tasks The neural reasoner model outperforms both the DMN and endtoend memory network on QA17 Positional Reasoning This is likely as the positional reasoning task only involves minimal supervision two sentences for input yesno answers for supervision and only 5812 unique examples after removing duplicates from the initial 10000 training examples BIBREF12 add an auxiliary task of reconstructing both the original sentences and question from their representations This auxiliary task likely improves performance by preventing overfitting For the VQA dataset each question is answered by multiple people and the answers may not be the same the generated answers are evaluated using human consensus For each predicted answer ai for the ith question with target answer set Ti the accuracy of VQA AccVQA frac1Nsum i1Nminfracsum tin Ti1ait31 where 1cdot is the indicator function Simply put the answer ai is only 100 accurate if at least 3 people provide that exact answer Training Details We use the Adam optimizer BIBREF35 with a learning rate of 0003 and batch size of 100 Training runs for up to 256 epochs with early stopping if the validation loss has not improved in the last 10 epochs For weight initialization we sampled from a random uniform distribution with range 008 008 Both the word embedding and hidden layers were vectors of size d512 We apply dropout on the initial image output from the VGG convolutional neural network BIBREF11 as well as the input to the answer module keeping input with probability p05 Results and Analysis The VQA dataset is composed of three question domains YesNo Number and Other This enables us to analyze the performance of the models on various tasks that require different reasoning abilities The comparison models are separated into two broad classes those that utilize a full connected image feature for classification and those that perform reasoning over multiple small image patches Only the SAN and DMN approach use small image patches while the rest use the fullyconnected whole image feature approach Here we show the quantitative and qualitative results in Table 3 and Fig 6 respectively The images in Fig 6 illustrate how the attention gate gti selectively activates over relevant portions of the image according to the query In Table 3 our method outperforms baseline and other stateoftheart methods across all question domains All in both testdev and teststd and especially for Other questions achieves a wide margin compared to the other architectures which is likely as the small image patches allow for finely detailed reasoning over the image However the granularity offered by small image patches does not always offer an advantage The Number questions may be not solvable for both the SAN and DMN architectures potentially as counting objects is not a simple task when an object crosses image patch boundaries We have proposed new modules for the DMN framework to achieve strong results without supervision of 
supporting facts These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs Our resulting model obtains state of the art results on both the VQA dataset and the bAbI10k text questionanswering dataset proving the framework can be generalized across input domains </s>
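As a concrete illustration of the attention-based GRU used in the episodic memory module above, the sketch below replaces the GRU update gate with a scalar attention gate g (Eq. 12) and runs one attention pass over a list of fact vectors, returning the final hidden state as the contextual vector. This is a minimal NumPy sketch with illustrative sizes and randomly initialized weights, not the authors' implementation; the gate values are assumed to come from the softmax in Eq. 10.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AttentionGRUCell:
    """GRU cell whose update gate is replaced by a scalar attention gate g (Eq. 12)."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W_r = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.U_r = rng.normal(0.0, 0.1, (n_hid, n_hid))
        self.b_r = np.zeros(n_hid)
        self.W = rng.normal(0.0, 0.1, (n_hid, n_in))
        self.U = rng.normal(0.0, 0.1, (n_hid, n_hid))
        self.b_h = np.zeros(n_hid)

    def step(self, x, h_prev, g):
        r = sigmoid(self.W_r @ x + self.U_r @ h_prev + self.b_r)          # reset gate
        h_tilde = np.tanh(self.W @ x + r * (self.U @ h_prev) + self.b_h)  # candidate state
        return g * h_tilde + (1.0 - g) * h_prev                           # g replaces the update gate

def attention_pass(cell, facts, gates):
    """One episodic pass: the final hidden state serves as the contextual vector c^t."""
    h = np.zeros(cell.b_h.shape[0])
    for f, g in zip(facts, gates):
        h = cell.step(f, h, g)
    return h

facts = [np.ones(4), np.zeros(4), 0.5 * np.ones(4)]   # toy fact vectors
gates = [0.7, 0.1, 0.2]                               # assumed output of the softmax in Eq. 10
cell = AttentionGRUCell(n_in=4, n_hid=8)
print(attention_pass(cell, facts, gates).shape)       # (8,)
```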
<s>[INST] Low-Level Linguistic Controls for Style Transfer and Content Preservation [/INST] All text has style whether it be formal or informal polite or aggressive colloquial persuasive or even robotic Despite the success of style transfer in image processing BIBREF0 BIBREF1 there has been limited progress in the text domain where disentangling style from content is particularly difficult To date most work in style transfer relies on the availability of metadata such as sentiment authorship or formality While metadata can provide insight into the style of a text it often conflates style with content limiting the ability to perform style transfer while preserving content Generalizing style transfer requires separating style from the meaning of the text itself The study of literary style can guide us For example in the digital humanities and its subfield of stylometry content doesnt figure prominently in practical methods of discriminating authorship and genres which can be thought of as style at the level of the individual and population respectively Rather syntactic and functional constructions are the most salient features In this work we turn to literary style as a testbed for style transfer and build on work from literature scholars using computational techniques for analysis In particular we draw on stylometry the use of surface level features often counts of function words to discriminate between literary styles Stylometry first saw success in attributing authorship to the disputed Federalist Papers BIBREF2 but is recently used by scholars to study things such as the birth of genres BIBREF3 and the change of author styles over time BIBREF4 The use of function words is likely not the way writers intend to express style but they appear to be downstream realizations of higherlevel stylistic decisions We hypothesize that surfacelevel linguistic features such as counts of personal pronouns prepositions and punctuation are an excellent definition of literary style as borne out by their use in the digital humanities and our own style classification experiments We propose a controllable neural encoderdecoder model in which these features are modelled explicitly as decoder feature embeddings In training the model learns to reconstruct a text using only the content words and the linguistic feature embeddings We can then transfer arbitrary content words to a new style without parallel data by setting the lowlevel style feature embeddings to be indicative of the target style This paper makes the following contributions A formal model of style as a suite of controllable lowlevel linguistic features that are independent of content An automatic evaluation showing that our model fools a style classifier 84 of the time A human evaluation with English literature experts including recommendations for dealing with the entanglement of content with style Following in the footsteps of machine translation style transfer in text has seen success by using parallel data BIBREF5 use modern translations of Shakespeare plays to build a moderntoShakespearan model BIBREF6 compile parallel data for formal and informal sentences allowing them to successfully use various machine translation techniques While parallel data may work for very specific styles the difficulty of finding parallel texts dramatically limits this approach There has been a decent amount of work on this approach in the past few years BIBREF7 BIBREF8 mostly focusing on variations of an encoderdecoder framework in which style is modeled as a 
monolithic style embedding The main obstacle is often to disentangle style and content However it remains a challenging problem Perhaps the most successful is BIBREF9 who use a denoising auto encoder and back translation to learn style without parallel data BIBREF10 outline the benefits of automatically extracting style and suggest there is a formal weakness of using linguistic heuristics In contrast we believe that monolithic style embeddings dont capture the existing knowledge we have about style and will struggle to disentangle content Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11 BIBREF12 In each of these cases the diversity in outputs is quite small given the constraints of the meaning representation style is often constrained to interjections like yeah and there is no original style from which to transfer BIBREF13 investigate using stylistic parameters and content parameters to control text generation using a movie review dataset Their stylistic parameters are created using wordlevel heuristics and they are successful in controlling these parameters in the outputs Their success bodes well for our related approach in a style transfer setting in which the content not merely content parameters is held fixed Style in literary research is anything but a stable concept but it nonetheless has a long tradition of study in the digital humanities In a remarkably early quantitative study of literature BIBREF14 charts sentencelevel stylistic attributes specific to a number of novelists Half a century later BIBREF15 builds on earlier work in information theory by BIBREF16 and defines a literary text as consisting of two materials the vocabulary and some structural properties the style of its author Beginning with BIBREF2 statistical approaches to style or stylometry join the alreadyheated debates over the authorship of literary works A noteable example of this is the Delta measure which uses zscores of function word frequencies BIBREF17 BIBREF18 find that Shakespeare added some material to a later edition of Thomas Kyds The Spanish Tragedy and that Christopher Marlowe collaborated with Shakespeare on Henry VI The stylometric research cited above suggests that the most frequently used words eg function words are most discriminating of authorship and literary style We investigate these claims using three corpora that have distinctive styles in the literary community gothic novels philosophy books and pulp science fiction hereafter scifi We retrieve gothic novels and philosophy books from Project Gutenberg and pulp scifi from Internet Archives Pulp Magazine Archive We partition this corpus into train validation and test sets the sizes of which can be found in Table TABREF12 In order to validate the above claims we train five different classifiers to predict the literary style of sentences from our corpus Each classifier has gradually more content words replaced with partofspeech POS tag placeholder tokens The All model is trained on sentences with all proper nouns replaced by PROPN The models Ablated N Ablated NV and Ablated NVA replace nouns nouns verbs and nouns verbs adjectives with the corresponding POS tag respectively Finally Contentonly is trained on sentences with all words that are not tagged as NOUN VERB ADJ removed the remaining words are not ablated We train the classifiers on the training set balancing the class distribution to make sure there are the same number of sentences from each style Classifiers are 
trained using fastText BIBREF19 using trigram features with all other settings as default tableclassifiers shows the accuracies of the classifiers The styles are highly distinctive the All classifier has an accuracy of 86 Additionally even the Ablated NVA is quite successful with 75 accuracy even without access to any content words The Content only classifier is also quite successful at 80 accuracy This indicates that these stylistic genres are distinctive at both the content level and at the syntactic level Given that noncontent words are distinctive enough for a classifier to determine style we propose a suite of lowlevel linguistic feature counts henceforth controls as our formal contentblind definition of style The style of a sentence is represented as a vector of counts of closed word classes like personal pronouns as well as counts of syntactic features like the number of SBAR nonterminals in its constituency parse since clause structure has been shown to be indicative of style BIBREF20 Controls are extracted heuristically and almost all rely on counts of predefined word lists For constituency parses we use the Stanford Parser BIBREF21 tablecontrolexamples lists all the controls along with examples Models are trained with a reconstruction task in which a distorted version of a reference sentence is input and the goal is to output the original reference figsentenceinput illustrates the process Controls are calculated heuristically All words found in the control word lists are then removed from the reference sentence The remaining words which represent the content are used as input into the model along with their POS tags and lemmas In this way we encourage models to construct a sentence using content and style independently This will allow us to vary the stylistic controls while keeping the content constant and successfully perform style transfer When generating a new sentence the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output We implement our feature controlled language model using a neural encoderdecoder with attention BIBREF22 using 2layer unidirectional gated recurrent units GRUs for the encoder and decoder BIBREF23 The input to the encoder is a sequence of M content words along with their lemmas and fine and coarse grained partofspeech POS tags ie Xj x1jldots xMj for j in mathcal T lbrace textrm word lemma finepos coarseposrbrace We embed each token and its lemma and POS before concatenating and feeding into the encoder GRU to obtain encoder hidden states ci operatornamegruci1 leftEjXij jin mathcal T right omega enc for i in 1ldots M where initial state c0 encoder GRU parameters omega enc and embedding matrices Ej are learned parameters The decoder sequentially generates the outputs ie a sequence of N tokens y y1ldots yN where all tokens yi are drawn from a finite output vocabulary mathcal V To generate the each token we first embed the previously generated token yi1 and a vector of K control features z z1ldots zK using embedding matrices Edec and Etextrm ctrl1 ldots Etextrm ctrlK respectively before concatenating them into a vector rho i and feeding them into the decoder side GRU along with the previous decoder state hi1 where omega dec are the decoder side GRU parameters Using the decoder hidden state hi we then attend to the encoder context vectors cj computing attention scores alpha ij where before passing hi and the attention weighted context barcisum j1M alpha ij cj into a single hiddenlayer perceptron 
with softmax output to compute the next token prediction probability where WUV and uv nu are parameter matrices and vectors respectively Crucially the controls z remain fixed for all input decoder steps Each zk represents the frequency of one of the lowlevel features described in secformalstyle During training on the reconstruction task we can observe the full output sequence y and so we can obtain counts for each control feature directly Controls receive a different embedding depending on their frequency where counts of 020 each get a unique embedding and counts greater than 20 are assigned to the same embedding At test time we set the values of the controls according to procedure described in Section SECREF25 We use embedding sizes of 128 128 64 and 32 for token lemma fine and coarse grained POS embedding matrices respectively Output token embeddings Edec have size 512 and 50 for the control feature embeddings We set 512 for all GRU and perceptron output sizes We refer to this model as the StyleEQ model See figmodel for a visual depiction of the model We compare the above model to a similar model where rather than explicitly represent K features as input we have K features in the form of a genre embedding ie we learn a genre specific embedding for each of the gothic scifi and philosophy genres as studied in BIBREF8 and BIBREF7 To generate in a specific style we simply set the appropriate embedding We use genre embeddings of size 850 which is equivalent to the total size of the K feature embeddings in the StyleEQ model We train both models with minibatch stochastic gradient descent with a learning rate of 025 weight decay penalty of 00001 and batch size of 64 We also apply dropout with a drop rate of 025 to all embedding layers the GRUs and preceptron hidden layer We train for a maximum of 200 epochs using validation set BLEU score BIBREF26 to select the final model iteration for evaluation In the Baseline model style transfer is straightforward given an input sentence in one style fix the encoder content features while selecting a different genre embedding In contrast the StyleEQ model requires selecting the counts for each control Although there are a variety of ways to do this we use a method that encourages a diversity of outputs In order to ensure the controls match the reference sentence in magnitude we first find all sentences in the target style with the same number of words as the reference sentence Then we add the following constraints the same number of proper nouns the same number of nouns the same number of verbs and the same number of adjectives We randomly sample n of the remaining sentences and for each of these sibling sentences we compute the controls For each of the new controls we generate a sentence using the original input sentence content features The generated sentences are then reranked using the length normalized loglikelihood under the model We can then select the highest scoring sentence as our styletransferred output or take the topk when we need a diverse set of outputs The reason for this process is that although there are grouplevel distinctive controls for each style eg the high use of punctuation in philosophy books or of first person pronouns in gothic novels at the sentence level it can understandably be quite varied This method matches sentences between styles capturing the natural distribution of the corpora In tabblueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations as well as the 
model perplexities of the reconstruction For both models we use beam decoding with a beam size of eight Beam candidates are ranked according to their length normalized loglikelihood On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences In some sense this evaluation is mostly a sanity check as the feature controls contain more locally specific information than the genre embeddings which say very little about how many specific function words one should expect to see in the output Designing controllable language models is often difficult because of the various dependencies between tokens when changing one control value it may effect other aspects of the surface realization For example increasing the number of conjunctions may effect how the generator places prepositions to compensate for structural changes in the sentence Since our features are deterministically recoverable we can perturb an individual control value and check to see that the desired change was realized in the output Moreover we can check the amount of change in the other nonperturbed features to measure the independence of the controls We sample 50 sentences from each genre from the test set For each sample we create a perturbed control setting for each control by adding delta to the original control value This is done for delta in lbrace 3 2 1 0 1 2 3rbrace skipping any settings where the new control value would be negative tableautoevalctrl shows the results of this experiment The Exact column displays the percentage of generated texts that realize the exact number of control features specified by the perturbed control High percentages in the Exact column indicate greater onetoone correspondence between the control and surface realization For example if the input was Dracula and Frankenstein and the mummy and we change the conjunction feature by delta 1 an output of Dracula Frankenstein and the mummy would count towards the Exact category while Dracula Frankenstein the mummy would not The Direction column specifies the percentage of cases where the generated text produces a changed number of the control features that while not exactly matching the specified value of the perturbed control does change from the original in the correct direction For example if the input again was Dracula and Frankenstein and the mummy and we change the conjunction feature by delta 1 both outputs of Dracula Frankenstein and the mummy and Dracula Frankenstein the mummy would count towards Direction High percentages in Direction mean that we could roughly ensure desired surface realizations by modifying the control by a larger delta Finally the Atomic column specifies the percentage of cases where the generated text with the perturbed control only realizes changes to that specific control while other features remain constant For example if the input was Dracula and Frankenstein in the castle and we set the conjunction feature to delta 1 an output of Dracula near Frankenstein in the castle would not count as Atomic because while the number of conjunctions did decrease by one the number of simple preposition changed An output of Dracula Frankenstein in the castle would count as Atomic High percentages in the Atomic column indicate this feature is only loosely coupled to the other features and can be changed without modifying other aspects of the sentence Controls such as conjunction determiner and punctuation are highly controllable with Exact rates above 80 But with the exception of the constituency 
parse features all controls have high Direction rates many in the 90s These results indicate our model successfully controls these features The fact that the Atomic rates are relatively low is to be expected as controls are highly coupled eg to increase 1stPer it is likely another pronoun control will have to decrease For each model we look at the classifier prediction accuracy of reconstructed and transferred sentences In particular we use the Ablated NVA classifier as this is the most contentblind one We produce 16 outputs from both the Baseline and StyleEq models For the Baseline we use a beam search of size 16 For the StyleEQ model we use the method described in Section SECREF25 to select 16 sibling sentences in the target style and generated a transferred sentence for each We look at three different methods for selection all which uses all output sentences top which selects the top ranked sentence based on the score from the model and oracle which selects the sentence with the highest classifier likelihood for the intended style The reason for the third method which indeed acts as an oracle is that using the score from the model didnt always surface a transferred sentence that best reflected the desired style Partially this was because the model score was mostly a function of how well a transferred sentence reflected the distribution of the training data But additionally some control settings are more indicative of a target style than others The use of the classifier allows us to identify the most suitable control setting for a target style that was roughly compatible with the number of content words In tablefasttextresults we see the results Note that for both models the all and top classification accuracy tends to be quite similar though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs However the oracle introduces a huge jump in accuracy for the StyleEQ model especially compared to the Baseline partially because the diversity of outputs from StyleEQ is much higher often the Baseline model produces no diversity the 16 output sentences may be nearly identical save a single word or two Its important to note that neither model uses the classifier in any way except to select the sentence from 16 candidate outputs What this implies is that lurking within the StyleEQ model outputs are great sentences even if they are hard to find In many cases the StyleEQ model has a classification accuracy above the base rate from the test data which is 75 see tableclassifiers tablecherrypicking shows example outputs for the StyleEQ and Baseline models Through inspection we see that the StyleEQ model successfully changes syntactic constructions in stylistically distinctive ways such as increasing syntactic complexity when transferring to philosophy or changing relevant pronouns when transferring to scifi In contrast the Baseline model doesnt create outputs that move far from the reference sentence making only minor modifications such changing the type of a single pronoun To determine how readers would classify our transferred sentences we recruited three English Literature PhD candidates all of whom had passed qualifying exams that included determining both genre and era of various literary texts To evaluate the fluency of our outputs we had the annotators score reference sentences reconstructed sentences and transferred sentences on a 05 scale where 0 was incoherent and 5 was a wellwritten human sentence tablefluency shows the average 
fluency of various conditions from all three annotators Both models have fluency scores around 3 Upon inspection of the outputs it is clear that many have fluency errors resulting in ungrammatical sentences Notably the Baseline often has slightly higher fluency scores than the StyleEQ model This is likely because the Baseline model is far less constrained in how to construct the output sentence and upon inspection often reconstructs the reference sentence even when performing style transfer In contrast the StyleEQ is encouraged to follow the controls but can struggle to incorporate these controls into a fluent sentence The fluency of all outputs is lower than desired We expect that incorporating pretrained language models would increase the fluency of all outputs without requiring larger datasets Each annotator annotated 90 reference sentences ie from the training corpus with which style they thought the sentence was from The accuracy on this baseline task for annotators A1 A2 and A3 was 80 88 and 80 respectively giving us an upper expected bound on the human evaluation In discussing this task with the annotators they noted that content is a heavy predictor of genre and that would certainly confound their annotations To attempt to mitigate this we gave them two annotation tasks whichof3 where they simply marked which style they thought a sentence was from and whichof2 where they were given the original style and marked which style they thought the sentence was transferred into For each task each annotator marked 180 sentences 90 from each model with an even split across the three genres Annotators were presented the sentences in a random order without information about the models In total each marked 270 sentences Note there were no reconstructions in this annotation task tablehumanclassifiers shows the results In both tasks accuracy of annotators classifying the sentence as its intended style was low In whichof3 scores were around 20 below the chance rate of 33 In whichof2 scores were in the 50s slightly above the chance rate of 50 This was the case for both models There was a slight increase in accuracy for the StyleEQ model over the Baseline for whichof3 but the opposite trend for whichof2 suggesting these differences are not significant Its clear that its hard to fool the annotators Introspecting on their approach the annotators expressed having immediate responses based on key words for instance any references of space implied scifi We call this the vampires in space problem because no matter how well a gothic sentence is rewritten as a scifi one its impossible to ignore the fact that there is a vampire in space The transferred sentences in the eyes of the Ablated NVA classifier with no access to content words did quite well transferring into their intended style But people are not blind to content Working with the annotators we regularly came up against the vampires in space problem while syntactic constructions account for much of the distinction of literary styles these constructions often cooccur with distinctive content Stylometrics finds syntactic constructions are great at fingerprinting but suggests that these constructions are surface realizations of higherlevel stylistic decisions The number and type of personal pronouns is a reflection of how characters feature in a text A large number of positional prepositions may be the result of a writer focusing on physical descriptions of scenes In our attempt to decouple these we create Frankenstein sentences which piece together 
features of different styles, we are putting vampires in space. Another way to validate our approach would be to select data that is stylistically distinctive but with similar content: perhaps genres in which content is static but language use changes over time, stylistically distinct authors within a single genre, or parodies of a distinctive genre. We present a formal, extendable model of style that can add control to any neural text generation system. We model style as a suite of low-level linguistic controls and train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. In automatic evaluations we show that our model can fool a style classifier 84% of the time and outperforms a baseline genre-embedding model. In human evaluations we encounter the vampires-in-space problem, in which content and style are equally discriminative but people focus more on the content. In future work we would like to model higher-level syntactic controls. BIBREF20 show that differences in clausal constructions, for instance having a dependent clause before an independent clause or vice versa, are a marker of style appreciated by the reader. Such features would likely interact with our lower-level controls in an interesting way and provide further insight into style transfer in text. Katy Gero is supported by an NSF GRF (DGE 1644869). We would also like to thank Elsbeth Turcan for her helpful comments.
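As an illustration of the perturbation analysis described above, the sketch below scores the Exact, Direction, and Atomic rates over a batch of perturbed generations. It is a minimal reading of the procedure rather than the authors' code: the control counts are assumed to be available as plain dictionaries, the function names are invented for this example, and perturbations with a delta of zero are assumed to be excluded.

def score_perturbation(orig, target, realized, perturbed):
    # orig / target / realized map control names to integer counts;
    # `perturbed` names the single control whose target value was shifted.
    exact = realized[perturbed] == target[perturbed]
    delta = target[perturbed] - orig[perturbed]        # intended shift (assumed nonzero)
    change = realized[perturbed] - orig[perturbed]     # shift actually realized in the output
    direction = change != 0 and (change > 0) == (delta > 0)
    atomic = exact and all(realized[c] == orig[c] for c in orig if c != perturbed)
    return exact, direction, atomic

def aggregate(samples):
    # samples: list of (orig, target, realized, perturbed) tuples, one per generation
    flags = [score_perturbation(*s) for s in samples]
    n = float(len(flags))
    return tuple(100.0 * sum(f[i] for f in flags) / n for i in range(3))

Under this reading, Exact implies Direction and Atomic implies Exact, which is consistent with Direction being the highest and Atomic the lowest of the three reported rates.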
<s>[INST] Fusing Visual, Textual and Connectivity Clues for Studying Mental Health [/INST] 0pt00 0pt00 0pt00 095 1Amir Hossein Yazdavar 1Mohammad Saeid Mahdavinejad 2Goonmeet Bajaj 3William Romine 1Amirhassan Monadjemi 1Krishnaprasad Thirunarayan 1Amit Sheth 4Jyotishman Pathak 1Department of Computer Science Engineering Wright State University OH USA 2Ohio State University Columbus OH USA 3Department of Biological Science Wright State University OH USA 4 Division of Health Informatics Weill Cornell University New York NY USA 1 yazdavar2wrightedu With ubiquity of social media platforms millions of people are sharing their online persona by expressing their thoughts moods emotions feelings and even their daily struggles with mental health issues voluntarily and publicly on social media Unlike the most existing efforts which study depression by analyzing textual content we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individuallevel demographics By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual textual and user interaction data we significantly enhance the current stateoftheart approaches for identifying depressed individuals on Twitter improving the average F1Score by 5 percent as well as facilitate demographic inference from social media for broader applications Besides providing insights into the relationship between demographics and mental health our research assists in the design of a new breed of demographicaware health interventions Depression is a highly prevalent public health challenge and a major cause of disability worldwide Depression affects 67 ie about 16 million Americans each year According to the World Mental Health Survey conducted in 17 countries on average about 5 of people reported having an episode of depression in 2011 BIBREF0 Untreated or undertreated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction Global efforts to curb clinical depression involve identifying depression through surveybased methods employing phone or online questionnaires These approaches suffer from underrepresentation as well as sampling bias with very small group of respondents In contrast the widespread adoption of social media where people voluntarily and publicly express their thoughts moods emotions and feelings and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses such as depression The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of selfexpressed depression both at an individual as well as communitylevel Previous research efforts have suggested that language style sentiment users activities and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 BIBREF2 However except for a few attempts BIBREF3 BIBREF4 BIBREF5 BIBREF6 these investigations have seldom studied extraction of emotional state from visual content of images in postedprofile images Visual content can express users emotions more vividly and psychologists noted that imagery is an effective medium for communicating difficult emotions According to eMarketer photos accounted for 75 of content posted on Facebook worldwide and they are the most engaging type of content on Facebook 87 Indeed a picture is worth a thousand 
words and now photos are worth a million likes Similarly on Twitter the tweets with image links get twice as much attention as those without and videolinked tweets drive up engagement The ease and naturalness of expression through visual imagery can serve to glean depressionindicators in vulnerable individuals who often seek social support through social media BIBREF7 Further as psychologist Carl Rogers highlights we often pursue and promote our IdealSelf In this regard the choice of profile image can be a proxy for the online persona BIBREF8 providing a window into an individuals mental health status For instance choosing emaciated legs of girls covered with several cuts as profile image portrays negative selfview BIBREF9 Inferring demographic information like gender and age can be crucial for stratifying our understanding of populationlevel epidemiology of mental health disorders Relying on electronic health records data previous studies explored gender differences in depressive behavior from different angles including prevalence age at onset comorbidities as well as biological and psychosocial factors For instance women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 On the other hand suicide rates for men are three to five times higher compared to that of the women BIBREF12 Although depression can affect anyone at any age signs and triggers of depression vary for different age groups Depression triggers for children include parental depression domestic violence and loss of a pet friend or family member For teenagers ages 1218 depression may arise from hormonal imbalance sexuality concerns and rejection by peers Young adults ages 1929 may develop depression due to life transitions poverty trauma and work issues Adult ages 3060 depression triggers include caring simultaneously for children and aging parents financial burden work and relationship issues Senior adults develop depression from common latelife issues social isolation major life loses such as the death of a spouse financial stress and other chronic health problems eg cardiac disease dementia Therefore inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the populationlevel epidemiology of depression The recent advancements in deep neural networks specifically for image analysis task can lead to determining demographic features such as age and gender BIBREF13 We show that by determining and integrating heterogeneous set of features from different modalities aesthetic features from posted images colorfulness hue variance sharpness brightness blurriness naturalness choice of profile picture for gender age and facial expression the screen name the language features from both textual content and profiles description ngram emotion sentiment and finally sociability from egonetwork and user engagement we can reliably detect likely depressed individuals in a data set of 8770 humanannotated Twitter users We address and derive answers to the following research questions 1 How well do the content of posted images colors aesthetic and facial presentation reflect depressive behavior 2 Does the choice of profile picture show any psychological traits of depressed online persona Are they reliable enough to represent the demographic information such as age and gender 3 Are there any underlying common themes among depressed individuals generated 
using multimodal content that can be used to detect depression reliably Mental Health Analysis using Social Media Several efforts have attempted to automatically detect depression from social media content utilizing machinedeep learning and natural language processing approaches Conducting a retrospective study over tweets BIBREF14 characterizes depression based on factors such as language emotion style egonetwork and user engagement They built a classifier to predict the likelihood of depression in a post BIBREF14 BIBREF15 or in an individual BIBREF1 BIBREF16 BIBREF17 BIBREF18 Moreover there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop CLP 2015 A corpus of nearly 1800 Twitter users was built for evaluation and the best models employed topic modeling BIBREF20 Linguistic Inquiry and Word Count LIWC features and other metadata BIBREF21 More recently a neural network architecture introduced by BIBREF22 combined posts into a representation of users activities for detecting depressed users Another active line of research has focused on capturing suicide and selfharm signals BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF2 BIBREF27 Moreover the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts All of these studies derive discriminative features to classify depression in usergenerated content at messagelevel individuallevel or communitylevel Recent emergence of photosharing platforms such as Instagram has attracted researchers attention to study peoples behavior from their visual narratives ranging from mining their emotions BIBREF29 and happiness trend BIBREF30 to studying medical concerns BIBREF31 Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 The role of visual imagery as a mechanism of selfdisclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 More recently a deep neural network sequence modeling approach that marries audio and text data modalities to analyze questionanswer style interviews between an individual and an agent has been developed to study mental health BIBREF32 Similarly a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 They provide a sparse user representations by defining a feature set consisting of social network features user profile features visual features emotional features BIBREF34 topiclevel features and domainspecific features Particularly our choice of multimodel prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 BIBREF35 Demographic information inference on Social Media There is a growing interest in understanding online users demographic information due to its numerous applications in healthcare BIBREF36 BIBREF37 A supervised model developed by BIBREF38 for determining users gender by employing features such as screenname fullname profile description and content on external resources eg personal blog Employing features 
including emoticons acronyms slangs punctuations capitalization sentence length and included linksimages along with online behaviors such as number of friends post time and commenting activity a supervised model was built for predicting users age group BIBREF39 Utilizing users life stage information such as secondary school student college student and employee BIBREF40 builds age inference model for Dutch Twitter users Similarly relying on profile descriptions while devising a set of rules and patterns a novel model introduced for extracting age for Twitter users BIBREF41 They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 The limitations of textual content for predicting age and gender was highlighted by BIBREF43 They distinguish language use based on social gender age identity biological sex and chronological age by collecting crowdsourced signals using a game in which players crowd guess the biological sex and age of a user based only on their tweets Their findings indicate how linguistic markers can misguide eg a heart represented as 3 can be misinterpreted as feminine when the writer is male Estimating age and gender from facial images by training a convolutional neural networks CNN for face recognition is an active line of research BIBREF44 BIBREF13 BIBREF45 Selfdisclosure clues have been extensively utilized for creating groundtruth data for numerous social media analytic studies eg for predicting demographics BIBREF36 BIBREF41 and users depressive behavior BIBREF46 BIBREF47 BIBREF48 For instance vulnerable individuals may employ depressiveindicative terms in their Twitter profile descriptions Others may share their age and gender eg 16 years old suicidal girlsee Figure FIGREF15 We employ a huge dataset of 45000 selfreported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depressionindicative terms was created with the help of psychologist clinician and employed for collecting selfdeclared depressed individuals profiles A subset of 8770 users 24 million timestamped tweets containing 3981 depressed and 4789 control users that do not show any depressive behavior were verified by two human judges BIBREF46 This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions followerscount createdat and profileimageurl Age Enabled Groundtruth Dataset We extract users age by applying regular expression patterns to profile descriptions such as 17 years old selfharm anxiety depression BIBREF41 We compile age prefixes and age suffixes and use three ageextraction rules 1 I am X years old 2 Born in X 3 X years old where X is a date or age eg 1994 We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age From these 1061 users 822 belong to depressed class and 239 belong to control class From 3981 depressed users 206 disclose their age in contrast with only 4 2394789 among control group So selfdisclosure of age is more prevalent among vulnerable users Figure FIGREF18 depicts the age distribution in INLINEFORM2 The general trend consistent with the results in BIBREF42 BIBREF49 is biased toward young people Indeed according to Pew 47 of Twitter users are younger than 30 years old BIBREF50 Similar data collection procedure with comparable 
distribution have been used in many prior efforts BIBREF51 BIBREF49 BIBREF42 We discuss our approach to mitigate the impact of the bias in Section 41 The median age is 17 for depressed class versus 19 for control class suggesting either likely depresseduser population is younger or depressed youngsters are more likely to disclose their age for connecting to their peers social homophily BIBREF51 Gender Enabled Groundtruth Dataset We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description From 1464 users 64 belonged to the depressed group and the rest 36 to the control group 23 of the likely depressed users disclose their gender which is considerably higher 12 than that for the control class Once again gender disclosure varies among the two gender groups For statistical significance we performed chisquare test null hypothesis gender and depression are two independent variables Figure FIGREF19 illustrates gender association with each of the two classes Blue circles positive residuals see Figure FIGREF19 AD show positive association among corresponding row and column variables while red circles negative residuals see Figure FIGREF19 BC imply a repulsion Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression In particular the femaletomale ratio is 21 and 19 for Major Depressive Disorder and Dysthymic Disorder respectively Our findings from Twitter data indicate there is a strong association Chisquare 3275 pvalue104e08 between being female and showing depressive behavior on Twitter We now provide an indepth analysis of visual and textual content of vulnerable users Visual Content Analysis We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a users depression status Profileposted images can surface selfstigmatization BIBREF53 Additionally as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined lowlevel features what matters more for assessing users online behavior is the emotions reflected in facial expressions BIBREF54 attributes contributing to the computational aesthetics BIBREF55 and sentimental quotes they may subscribe to Figure FIGREF15 BIBREF8 Facial Presence For capturing facial presence we rely on BIBREF56 s approach that uses multilevel convolutional coarsetofine network cascade to tackle facial landmark localization We identify facial presentation emotion from facial expression and demographic features from profileposted images Table TABREF21 illustrates facial presentation differences in both profile and posted images media for depressed and control users in INLINEFORM0 With control class showing significantly higher in both profile and media 8 9 respectively compared to that for the depressed class In contrast with age and gender disclosure vulnerable users are less likely to disclose their facial identity possibly due to lack of confidence or fear of stigma Facial Expression Following BIBREF8 s approach we adopt Ekmans model of six emotions anger disgust fear joy sadness and surprise and use the Face API to automatically capture them from the shared images Positive emotions are joy and surprise and negative emotions are anger disgust fear and sadness In general for each user u in INLINEFORM0 we process profileshared images for both the depressed and the control groups with at least one face from 
the shared images Table TABREF23 For the photos that contain multiple faces we measure the average emotion Figure FIGREF27 illustrates the intercorrelation of these features Additionally we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals General Image Features The importance of interpretable computational aesthetic features for studying users online behavior has been highlighted by several efforts BIBREF55 BIBREF8 BIBREF57 Color as a pillar of the human vision system has a strong association with conceptual ideas like emotion BIBREF58 BIBREF59 We measured the normalized red green blue and the mean of original colors and brightness and contrast relative to variations of luminance We represent images in HueSaturationValue color space that seems intuitive for humans and measure mean and variance for saturation and hue Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color Although hue is not interpretable high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 Colorfulness is measured as a difference against gray background BIBREF60 Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 In color reproduction naturalness is measured from the mental recollection of the colors of familiar objects Additionally there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions We performed optical character recognition OCR with pythontesseract to extract text and their sentiment score As illustrated in Table TABREF26 vulnerable users tend to use less colorful higher grayscale profile as well as shared images to convey their negative feelings and share images that are less natural Figure FIGREF15 With respect to the aesthetic quality of images saturation brightness and hue depressed users use images that are less appealing to the human eye We employ independent ttest while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals Overall we have 223 features and choose Bonferronicorrected INLINEFORM0 level of INLINEFORM1 INLINEFORM2 INLINEFORM3 alpha 005 alpha 005223 Demographics Inference Language Cues LIWC has been used extensively for examining the latent dimensions of selfexpression for analyzing personality BIBREF61 depressive behavior demographic differences BIBREF43 BIBREF40 etc Several studies highlight that females employ more firstperson singular pronouns BIBREF62 and deictic language BIBREF63 while males tend to use more articles BIBREF64 which characterizes concrete thinking and formal informational and affirmation words BIBREF65 For age analysis the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging They also show positive emotions BIBREF66 and employ fewer selfreferences ie I me with greater first person plural BIBREF62 Depressed users employ first person pronouns more frequently BIBREF67 repeatedly use negative emotions and anger words We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics Particularly we adopt Levinsons adult development grouping that partitions users in INLINEFORM0 into 5 age groups 14191923 
23343446 and 4660 Then we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 Qualitative Language Analysis The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking clout authenticity and emotional tone It also measures other linguistic dimensions such as descriptors categories eg percent of target words gleaned by dictionary or longer than six letters Sixltr and informal language markers eg swear words netspeak and other linguistic aspects eg 1st person singular pronouns Thinking Style Measuring peoples natural ways of trying to analyze and organize complex events have strong association with analytical thinking LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives Also cognitive processing measures problem solving in mind Words such as think realize and know indicates the degree of certainty in communications Critical thinking ability relates to education BIBREF68 and is impacted by different stages of cognitive development at different ages It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 We observe a similar pattern in our data Table TABREF40 A recent study highlights how depression affects brain and thinking at molecular level using a rat model BIBREF69 Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions We observed a notable differences in the ability to think analytically in depressed and control users in different age groups see Figure FIGREF39 A F and Table TABREF40 Overall vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability Authenticity Authenticity measures the degree of honesty Authenticity is often assessed by measuring present tense verbs 1st person singular pronouns I me my and by examining the linguistic manifestations of false stories BIBREF70 Liars use fewer selfreferences and fewer complex words Psychologists often see a childs first successfull lie as a mental growth There is a decreasing trend of the Authenticity with aging see Figure FIGREF39 B Authenticity for depressed youngsters is strikingly higher than their control peers It decreases with age Figure FIGREF39 B Clout People with high clout speak more confidently and with certainty employing more social words with fewer negations eg no not and swear words In general midlife is relatively stable wrt relationships and work A recent study shows that age 60 to be best for selfesteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse We see the same pattern in our data see Figure FIGREF39 C and Table TABREF40 Unsurprisingly lack of confidence the 6th PHQ9 symptom is a distinguishable characteristic of vulnerable users leading to their lower clout scores especially among depressed users before middle age 34 years old Selfreferences First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 Consistent with prior studies frequency of first person singular for depressed people is significantly higher compared to that of control class Similarly to BIBREF66 youngsters tend to use more firstperson eg I and second person singular eg you pronouns Figure FIGREF39 
G Informal Language Markers Swear Netspeak Several studies highlighted the use of profanity by young adults has significantly increased over the last decade BIBREF72 We observed the same pattern in both the depressed and the control classes Table TABREF40 although its rate is higher for depressed users BIBREF1 Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society Depressed youngsters showing higher rate of interpersonal involvement and relationships have a higher rate of cursing Figure FIGREF39 E Also Netspeak lexicon measures the frequency of terms such as lol and thx Sexual Body Sexual lexicon contains terms like horny love and incest and body terms like ache heart and cough Both start with a higher rate for depressed users while decreasing gradually while growing up possibly due to changes in sexual desire as we age Figure FIGREF39 HI and Table TABREF40 Quantitative Language Analysis We employ oneway ANOVA to compare the impact of various factors and validate our findings above Table TABREF40 illustrates our findings with a degree of freedom df of 1055 The null hypothesis is that the sample means for each age group are similar for each of the LIWC features alpha 0001 alpha 001 alpha 005 We leverage both the visual and textual content for predicting age and gender Prediction with Textual Content We employ BIBREF73 s weighted lexicon of terms that uses the dataset of 75394 Facebook users who shared their status age and gender The predictive power of this lexica was evaluated on Twitter blog and Facebook showing promising results BIBREF73 Utilizing these two weighted lexicon of terms we are predicting the demographic information age or gender of INLINEFORM0 denoted by INLINEFORM1 using following equation INLINEFORM2 where INLINEFORM0 is the lexicon weight of the term and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 and INLINEFORM3 measures total word count in INLINEFORM4 As our data is biased toward young people we report age prediction performance for each age group separately Table TABREF42 Moreover to measure the average accuracy of this model we build a balanced dataset keeping all the users above 23 416 users and then randomly sampling the same number of users from the age ranges 1119 and 1923 The average accuracy of this model is 063 for depressed users and 064 for control class Table TABREF44 illustrates the performance of gender prediction for each class The average accuracy is 082 on INLINEFORM5 groundtruth dataset Prediction with Visual Imagery Inspired by BIBREF56 s approach for facial landmark localization we use their pretrained CNN consisting of convolutional layers including unshared and fullyconnected layers to predict gender and age from both the profile and shared images We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 Demographic Prediction Analysis We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction This is crucial as the differences between language cues between age groups above age 35 tend to become smaller see Figure FIGREF39 ABC and making the prediction harder for older people BIBREF74 In this case the other data modality eg visual content can play integral role as a complementary source for age inference For gender prediction see Table TABREF44 on average the profile imagebased predictor provides a more 
accurate prediction for both the depressed and control class 092 and 090 compared to contentbased predictor 082 For age prediction see Table TABREF42 textual contentbased predictor on average 060 outperforms both of the visualbased predictors on average profile051 Media053 However not every user provides facial identity on his account see Table TABREF21 We studied facial presentation for each agegroup to examine any association between agegroup facial presentation and depressive behavior see Table TABREF43 We can see youngsters in both depressed and control class are not likely to present their face on profile image Less than 3 of vulnerable users between 1119 years reveal their facial identity Although contentbased gender predictor was not as accurate as imagebased one it is adequate for populationlevel analysis We use the above findings for predicting depressive behavior Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling this model reduces the learning effort and shows promising results BIBREF75 To develop a generalizable model that avoids overfitting we perform feature selection using statistical tests and all relevant ensemble learning models It adds randomness to the data by creating shuffled copies of all features shadow feature and then trains Random Forest classifier on the extended data Iteratively it checks whether the actual feature has a higher Zscore than its shadow feature See Algorithm SECREF6 and Figure FIGREF45 BIBREF76 Main each Feature INLINEFORM0 INLINEFORM1 RndForrest INLINEFORM0 Calculate Imp INLINEFORM1 INLINEFORM2 Generate next hypothesis INLINEFORM3 Once all hypothesis generated Perform Statistical Test INLINEFORM4 Binomial Distribution INLINEFORM5 Feature is important Feature is important Ensemble Feature Selection Next we adopt an ensemble learning method that integrates the predictive power of multiple learners with two main advantages its interpretability with respect to the contributions of each feature and its high predictive power For prediction we have INLINEFORM0 where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction In particular we optimize the loss function INLINEFORM0 where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization In each iteration the new INLINEFORM4 is obtained by fitting weak learner to the negative gradient of loss function Particularly by estimating the loss function with Taylor expansion INLINEFORM5 where its first expression is constant the second and the third expressions are first INLINEFORM6 and second order derivatives INLINEFORM7 of the loss INLINEFORM8 For exploring the weak learners assume INLINEFORM0 has k leaf nodes INLINEFORM1 be subset of users from INLINEFORM2 belongs to the node INLINEFORM3 and INLINEFORM4 denotes the prediction for node INLINEFORM5 Then for each user INLINEFORM6 belonging to INLINEFORM7 INLINEFORM8 and INLINEFORM9 INLINEFORM10 Next for each leaf node INLINEFORM0 deriving wrt INLINEFORM1 INLINEFORM2 and by substituting weights INLINEFORM0 which represents the loss for fixed weak learners with INLINEFORM0 nodes The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor tree Although the weak learners have high bias the ensemble model produces a strong learner that 
effectively integrates the weak learners by reducing bias and variance, the ultimate goal of supervised models BIBREF77. Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.31))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the Analytic thinking score of this user is considered high (48.43; median 36.95, mean 40.18), and this decreases the chance of this person being classified into the depressed group by log-odds of 1.41; depressed users have significantly lower Analytic thinking scores compared to the control class. Moreover, the Clout score of 40.46 is a low value (median 62.22, mean 57.17), and it also decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and median of shared-colorfulness are 112.03 and 113, respectively; the value of 136.71 would be high, and thus it decreases the chance of being depressed for this specific user by log-odds of 0.54. Moreover, the profile-naturalness of 0.46 is considered high compared to 0.36, the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2. For network features, for instance, the two-hop neighborhood for depressed users (mean 84) is smaller than that of control users (mean 154), and this is reflected in pulling down the log-odds by 0.27. Baselines: To test the efficacy of our multimodal framework for detecting depressed users, we compare it against existing content-, content-network-, and image-based models built on the aforementioned general image features, facial presence, and facial expressions.
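For concreteness, the weighted-lexicon age and gender predictor described earlier can be sketched as below. This is only an illustrative reading of the stated equation: the lexicon is assumed to be a plain dict of real-valued term weights (for example, the age or gender lexica of BIBREF73), tokenization is naive whitespace splitting rather than whatever pipeline the authors used, and any intercept term or post-hoc thresholding into age groups is left out.

from collections import Counter

def lexicon_score(tokens, lexicon):
    # Weighted sum of term frequencies, normalized by the user's total word count.
    counts = Counter(tokens)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(lexicon[t] * c for t, c in counts.items() if t in lexicon) / total

# Usage sketch: a user's concatenated tweets are tokenized once, then scored
# against the age lexicon (yielding a real-valued age estimate) and against the
# gender lexicon (with the sign or a threshold mapped to a class label).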
<s>[INST] Incorporating Sememes into Chinese Definition Modeling [/INST] Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word This task can benefit the compilation of dictionaries especially dictionaries for Chinese as a foreign language CFL learners In recent years the number of CFL learners has risen sharply In 2017 770000 people took the Chinese Proficiency Test an increase of 38 from 2016 However most Chinese dictionaries are for native speakers Since these dictionaries usually require a fairly high level of Chinese it is necessary to build a dictionary specifically for CFL learners Manually writing definitions relies on the knowledge of lexicographers and linguists which is expensive and timeconsuming BIBREF0 BIBREF1 BIBREF2 Therefore the study on writing definitions automatically is of practical significance Definition modeling was first proposed by BIBREF3 as a tool to evaluate different word embeddings BIBREF4 extended the work by incorporating word sense disambiguation to generate contextaware word definition Both methods are based on recurrent neural network encoderdecoder framework without attention In contrast this paper formulates the definition modeling task as an automatic way to accelerate dictionary compilation In this work we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdmCDM CDM consists of 104517 entries where each entry contains a word the sememes of a specific word sense and the definition in Chinese of the same word sense Sememes are minimum semantic units of word meanings and the meaning of each word sense is typically composed of several sememes as is illustrated in Figure 1 For a given word sense CDM annotates the sememes according to HowNet BIBREF5 and the definition according to Chinese Concept Dictionary CCD BIBREF6 Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 we argue that sememes can benefit the task of definition modeling We propose two novel models to incorporate sememes into Chinese definition modeling the AdaptiveAttention Model AAM and the Self and AdaptiveAttention Model SAAM Both models are based on the encoderdecoder framework The encoder maps word and sememes into a sequence of continuous representations and the decoder then attends to the output of the encoder and generates the definition one word at a time Different from the vanilla attention mechanism the decoder of both models employs the adaptive attention mechanism to decide which sememes to focus on and when to pay attention to sememes at one time BIBREF9 Following BIBREF3 BIBREF4 the AAM is built using recurrent neural networks RNNs However recent works demonstrate that attentionbased architecture that entirely eliminates recurrent connections can obtain new stateoftheart in neural machine translation BIBREF10 constituency parsing BIBREF11 and semantic role labeling BIBREF12 In the SAAM we replace the LSTMbased encoder and decoder with an architecture based on selfattention This fully attentionbased model allows for more parallelization reduces the path length between word sememes and the definition and can reach a new stateoftheart on the definition modeling task To the best of our knowledge this is the first work to introduce the attention mechanism and utilize external resource for the definition modeling task In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform 
the stateoftheart approach with a large margin By efficiently incorporating sememes the SAAM achieves the best performance with improvement over the stateoftheart method by 60 BLEU The definition modeling task is to generate an explanatory sentence for the interpreted word For example given the word hotel a model should generate a sentence like this A place to provide residence and other services for tourists Since distributed representations of words have been shown to capture lexical syntax and semantics it is intuitive to employ word embeddings to generate natural language definitions Previously BIBREF3 proposed several model architectures to generate a definition according to the distributed representation of a word We briefly summarize their model with the best performance in Section Experiments and adopt it as our baseline model Inspired by the works that use sememes to improve word representation learning BIBREF7 and word similarity computation BIBREF8 we propose the idea of incorporating sememes into definition modeling Sememes can provide additional semantic information for the task As shown in Figure 1 sememes are highly correlated to the definition For example the sememe place is related with the word place of the definition and the sememe tour is correlated to the word tourists of the definition Therefore to make full use of the sememes in CDM dataset we propose AAM and SAAM for the task in Section AdaptiveAttention Model and Section Self and AdaptiveAttention Model respectively The baseline model BIBREF3 is implemented with a recurrent neural network based encoderdecoder framework Without utilizing the information of sememes it learns a probabilistic mapping Py x from the word x to be defined to a definition y y1 dots yT in which yt is the t th word of definition y More concretely given a word x to be defined the encoder reads the word and generates its word embedding mathbf x as the encoded information Afterward the decoder computes the conditional probability of each definition word yt depending on the previous definition words yt as well as the word being defined x ie Pytytx Pytytx is given as Pytytx propto exp ytmathbf ztmathbf x
$\mathbf{z}_t = f(\mathbf{z}_{t-1}, y_{t-1}, \mathbf{x})$, (Eq. 4) where $\mathbf{z}_t$ is the decoder's hidden state at time $t$, $f$ is a recurrent nonlinear function such as an LSTM or GRU, and $\mathbf{x}$ is the embedding of the word being defined. Then the probability $P(y\,|\,x)$ can be computed according to the probability chain rule: $P(y\,|\,x) = \prod_{t=1}^{T} P(y_t\,|\,y_{<t}, x)$. (Eq. 5) We denote all the parameters in the model as $\theta$ and the definition corpus as $D_{x,y}$, which is a set of word-definition pairs. Then the model parameters can be learned by maximizing the log-likelihood: $\hat{\theta} = \operatorname{argmax}_{\theta} \sum_{\langle x, y \rangle \in D_{x,y}} \log P(y\,|\,x; \theta)$. (Eq. 6) Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s = s_1, \dots, s_N$, we define the probability of generating the definition $y = y_1, \dots, y_T$ as $P(y\,|\,x, s) = \prod_{t=1}^{T} P(y_t\,|\,y_{<t}, x, s)$. (Eq. 8) Similar to Eq. 6, we can maximize the log-likelihood over the definition corpus $D_{x,s,y}$ to learn model parameters: $\hat{\theta} = \operatorname{argmax}_{\theta} \sum_{\langle x, s, y \rangle \in D_{x,s,y}} \log P(y\,|\,x, s; \theta)$. (Eq. 9) The probability $P(y\,|\,x, s)$ can be implemented with an adaptive-attention-based encoder-decoder framework, which we call the Adaptive-Attention Model (AAM). The new architecture consists of a bidirectional RNN as the encoder and an RNN decoder that adaptively attends to the sememes while decoding a definition. Similar to BIBREF13, the encoder is a bidirectional RNN consisting of forward and backward RNNs. Given the word to be defined $x$ and its corresponding sememes $s = s_1, \dots, s_N$, we define the input sequence of vectors for the encoder as $\mathbf{v} = \mathbf{v}_1, \dots, \mathbf{v}_N$. The vector $\mathbf{v}_n$ is given as $\mathbf{v}_n = [\mathbf{x}; \mathbf{s}_n]$, (Eq. 11) where $\mathbf{x}$ is the vector representation of the word $x$, $\mathbf{s}_n$ is the vector representation of the $n$-th sememe $s_n$, and $[\mathbf{a}; \mathbf{b}]$ denotes the concatenation of vectors $\mathbf{a}$ and $\mathbf{b}$. The forward RNN $\overrightarrow{f}$ reads the input sequence of vectors from $\mathbf{v}_1$ to $\mathbf{v}_N$ and calculates a forward hidden state for position $n$ as $\overrightarrow{\mathbf{h}}_n = f(\mathbf{v}_n, \overrightarrow{\mathbf{h}}_{n-1})$, (Eq. 12) where $f$ is an LSTM or GRU. Similarly, the backward RNN $\overleftarrow{f}$ reads the input sequence of vectors from $\mathbf{v}_N$ to $\mathbf{v}_1$ and obtains a backward hidden state for position $n$ as $\overleftarrow{\mathbf{h}}_n = f(\mathbf{v}_n, \overleftarrow{\mathbf{h}}_{n+1})$. (Eq. 13) In this way, we obtain a sequence of encoder hidden states $\mathbf{h} = [\mathbf{h}_1, \dots, \mathbf{h}_N]$ by concatenating the forward hidden state $\overrightarrow{\mathbf{h}}_n$ and the backward one $\overleftarrow{\mathbf{h}}_n$ at each position $n$: $\mathbf{h}_n = [\overrightarrow{\mathbf{h}}_n; \overleftarrow{\mathbf{h}}_n]$. (Eq. 14) The hidden state $\mathbf{h}_n$ captures the sememe- and word-aware information of the $n$-th sememe. As attention-based neural encoder-decoder frameworks have shown great success in image captioning BIBREF14, document summarization BIBREF15, and neural machine translation BIBREF13, it is natural to adopt the attention-based recurrent decoder of BIBREF13 as our decoder. The vanilla attention attends to the sememes at every time step. However, not all words in the definition have corresponding sememes; for example, the sememe "reside" could be useful when generating "residence", but none of the sememes is useful when generating "provide". Besides, language correlations make the sememes unnecessary when generating words like "and" and "for". Inspired by BIBREF9, we introduce an adaptive attention mechanism for the decoder. At each time step $t$, we summarize the time-varying sememe information as the sememe context and the language model's information as the LM context. Then
we use another attention to obtain the context vector relying on either the sememe context or LM context More concretely we define each conditional probability in Eq 8 as Pytytxs propto exp ytmathbf ztmathbf ct
$\mathbf{z}_t = f(\mathbf{z}_{t-1}, y_{t-1}, \mathbf{c}_t)$, (Eq. 17) where $\mathbf{c}_t$ is the context vector output by the adaptive attention module at time $t$ and $\mathbf{z}_t$ is the decoder's hidden state at time $t$. To obtain the context vector $\mathbf{c}_t$, we first compute the sememe context vector $\hat{\mathbf{c}}_t$ and the LM context $\mathbf{o}_t$. Similar to the vanilla attention, the sememe context $\hat{\mathbf{c}}_t$ is obtained with a soft attention mechanism as $\hat{\mathbf{c}}_t = \sum_{n=1}^{N} \alpha_{tn} \mathbf{h}_n$, (Eq. 18) where $\alpha_{tn} = \frac{\exp(e_{tn})}{\sum_{i=1}^{N} \exp(e_{ti})}$ and $e_{tn} = \mathbf{w}_{\hat{c}}^{T} [\mathbf{h}_n; \mathbf{z}_{t-1}]$. (Eq. 19) Since the decoder's hidden states store syntactic and semantic information for language modeling, we compute the LM context $\mathbf{o}_t$ with a gated unit whose input is the definition word $y_{t-1}$ and the previous hidden state $\mathbf{z}_{t-1}$: $\mathbf{g}_t = \sigma(\mathbf{W}_g [y_{t-1}; \mathbf{z}_{t-1}] + \mathbf{b}_g)$ and $\mathbf{o}_t = \mathbf{g}_t \odot \tanh(\mathbf{z}_{t-1})$. (Eq. 20) Once the sememe context vector $\hat{\mathbf{c}}_t$ and the LM context $\mathbf{o}_t$ are ready, we generate the context vector with an adaptive attention layer as $\mathbf{c}_t = \beta_t \mathbf{o}_t + (1 - \beta_t) \hat{\mathbf{c}}_t$, (Eq. 21) where $\beta_t = \frac{\exp(e_t^{o})}{\exp(e_t^{o}) + \exp(e_t^{\hat{c}})}$, $e_t^{o} = \mathbf{w}_c^{T} [\mathbf{o}_t; \mathbf{z}_t]$, and $e_t^{\hat{c}} = \mathbf{w}_c^{T} [\hat{\mathbf{c}}_t; \mathbf{z}_t]$. (Eq. 22) Here $\beta_t$ is a scalar in the range $[0, 1]$ that controls the relative importance of the LM context and the sememe context. Once we obtain the context vector $\mathbf{c}_t$, we can update the decoder's hidden state and generate the next word according to Eq. 17 and the conditional probability above, respectively. Recent works demonstrate that an architecture entirely based on attention can obtain new state-of-the-art results in neural machine translation BIBREF10, constituency parsing BIBREF11, and semantic role labeling BIBREF12. SAAM adopts a similar architecture and replaces the recurrent connections in AAM with self-attention. Such an architecture not only reduces training time by allowing for more parallelization, but also better learns the dependencies between the word, the sememes, and the tokens of the definition by reducing the path length between them. Given the word to be defined $x$ and its corresponding ordered sememes $s = s_1, \dots, s_N$, we combine them as the input sequence of embeddings for the encoder, i.e., $\mathbf{v} = \mathbf{v}_0, \mathbf{v}_1, \dots, \mathbf{v}_N$. The $n$-th vector is defined as $\mathbf{v}_n = \mathbf{x}$ for $n = 0$ and $\mathbf{v}_n = \mathbf{s}_n$ for $n \ne 0$,
endarrayright Eq 25 where mathbf x is the vector representation of the given word x and mathbf sn is the vector representation of the n th sememe sn Although the input sequence is not time ordered position n in the sequence carries some useful information First position 0 corresponds to the word to be defined while other positions correspond to the sememes Secondly sememes are sorted into a logical order in HowNet For example as the first sememe of the word hotel the sememe place describes its most important aspect namely the definition of hotel should be a place for Therefore we add learned position embedding to the input embeddings for the encoder mathbf vn mathbf vn mathbf pn Eq 26 where mathbf pn is the position embedding that can be learned during training Then the vectors mathbf vmathbf v0 mathbf v1 dots mathbf vN are transformed by a stack of identical layers where each layers consists of two sublayers multihead selfattention layer and positionwise fully connected feedforward layer Each of the layers are connected by residual connections followed by layer normalization BIBREF16 We refer the readers to BIBREF10 for the detail of the layers The output of the encoder stack is a sequence of hidden states denoted as mathbf hmathbf h0 mathbf h1 dots mathbf hN The decoder is also composed of a stack of identical layers In BIBREF10 each layer includes three sublayers masked multihead selfattention layer multihead attention layer that attends over the output of the encoder stack and positionwise fully connected feedforward layer In our model we replace the two multihead attention layers with an adaptive multihead attention layer Similarly to the adaptive attention layer in AAM the adaptive multihead attention layer can adaptivelly decide which sememes to focus on and when to attend to sememes at each time and each layer Figure 2 shows the architecture of the decoder Different from the adaptive attention layer in AAM that uses single head attention to obtain the sememe context and gate unit to obtain the LM context the adaptive multihead attention layer utilizes multihead attention to obtain both contexts Multihead attention performs multiple single head attentions in parallel with linearly projected keys values and queries and then combines the outputs of all heads to obtain the final attention result We omit the detail here and refer the readers to BIBREF10 Formally given the hidden state mathbf ztl1 at time t layer l1 of the decoder we obtain the LM context with multihead selfattention mathbf otl textit MultiHeadmathbf ztl1mathbf zle tl1mathbf zle tl1 Eq 28 where the decoders hidden state mathbf ztl1 at time t layer l1 is the query and mathbf zle tl1mathbf z1l1mathbf ztl1 the decoders hidden states from time 1 to time t at layer l1 are the keys and values To obtain better LM context we employ residual connection and layer normalization after the multihead selfattention Similarly the sememe context can be computed by attending to the encoders outputs with multihead attention hatmathbf ctl textit MultiHeadmathbf otlmathbf hmathbf h Eq 29 where mathbf otl is the query and the output from the encoder stack mathbf hmathbf h0 mathbf h1 dots mathbf hN are the values and keys Once obtaining the sememe context vector hatmathbf ctl and the LM context mathbf otl we compute the output from the adaptive attention layer with mathbf ctl beta tl mathbf otl 1beta tlhatmathbf ctl Eq 30 where beta tl fracmathrm expetomathrm expetomathrm expethatc nonumber
$\mathbf{c}_t^l = \beta_t^l \mathbf{o}_t^l + (1 - \beta_t^l)\,\hat{\mathbf{c}}_t^l$ (Eq 30), where $\beta_t^l = \frac{\exp(e_t^{o,l})}{\exp(e_t^{o,l}) + \exp(e_t^{\hat{c},l})}$, $e_t^{o,l} = \mathbf{w}_c^{l\,T} [\mathbf{o}_t^l; \mathbf{z}_t^{l-1}]$ and $e_t^{\hat{c},l} = \mathbf{w}_c^{l\,T} [\hat{\mathbf{c}}_t^l; \mathbf{z}_t^{l-1}]$ (Eq 31).
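To make the gating in Eq 30 and Eq 31 concrete, the sketch below implements one adaptive combination of the LM context and the sememe context in PyTorch. It is a minimal illustration of the equations rather than the released code of the model; the module and variable names (AdaptiveGate, o_t, c_hat_t, z_prev) are ours, and the multi-head attention that produces the two contexts is assumed to be computed elsewhere (e.g., with torch.nn.MultiheadAttention).

```python
import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    """Combine the LM context o_t^l and the sememe context c_hat_t^l (Eq 30-31).

    A minimal sketch: a single weight vector w_c scores each context against
    the previous decoder state, and a softmax over the two scores yields the
    mixing coefficient beta.
    """
    def __init__(self, d_model: int):
        super().__init__()
        # w_c^l maps the concatenation [context; z_t^{l-1}] to a scalar score
        self.w_c = nn.Linear(2 * d_model, 1, bias=False)

    def forward(self, o_t, c_hat_t, z_prev):
        # o_t, c_hat_t, z_prev: tensors of shape (batch, d_model)
        e_o = self.w_c(torch.cat([o_t, z_prev], dim=-1))      # e_t^{o,l}
        e_c = self.w_c(torch.cat([c_hat_t, z_prev], dim=-1))  # e_t^{c-hat,l}
        beta = torch.softmax(torch.cat([e_o, e_c], dim=-1), dim=-1)[:, :1]
        # c_t^l = beta * o_t^l + (1 - beta) * c_hat_t^l       (Eq 30)
        return beta * o_t + (1.0 - beta) * c_hat_t
```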
In this section, we will first introduce the construction process of the CDM dataset, and then the experimental results and analysis. To verify our proposed models, we construct the CDM dataset for the Chinese definition modeling task. Each entry in the dataset is a triple that consists of the interpreted word, sememes and a definition for a specific word sense, where the sememes are annotated with HowNet BIBREF5 and the definitions are annotated with the Chinese Concept Dictionary (CCD) BIBREF6. Concretely, for a common word in HowNet and CCD, we first align its definitions from CCD with its sememe groups from HowNet, where each group represents one word sense. We define the sememes of a definition as the combined sememes associated with any token of the definition. Then, for each definition of a word, we align it with the sememe group that has the largest number of overlapping sememes with the definition's sememes. With such an aligned definition and sememe group, we add an entry that consists of the word, the sememes of the aligned sememe group, and the aligned definition. Each word can have multiple entries in the dataset, especially polysemous words. To improve the quality of the created dataset, we filter out entries in which the definition contains the interpreted word, or in which the interpreted word is among function words, numeral words and proper nouns. After processing, we obtain a dataset that contains 104,517 entries with 30,052 unique interpreted words. We divide the dataset according to the unique interpreted words into a training set, a validation set and a test set with a ratio of 18:1:1. Table 1 shows the detailed data statistics.
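The alignment step above amounts to a maximum-overlap matching between each CCD definition of a word and that word's HowNet sememe groups. The following sketch shows one way this could be implemented; it is our own illustration of the description given here, not the original dataset-construction code, and the helpers tokenize and token_sememes are hypothetical stand-ins for a Chinese segmenter and a HowNet lookup. The subsequent filtering of low-quality entries (e.g., definitions that contain the interpreted word) would follow this step.

```python
def build_cdm_entries(word, ccd_definitions, hownet_sememe_groups,
                      tokenize, token_sememes):
    """Align each CCD definition of `word` with the HowNet sememe group that
    shares the most sememes with it (a sketch of the CDM construction).

    tokenize(definition) -> list of tokens      (hypothetical helper)
    token_sememes(token) -> set of sememes      (hypothetical helper)
    """
    entries = []
    for definition in ccd_definitions:
        # Sememes of a definition: union of the sememes of its tokens.
        def_sememes = set()
        for token in tokenize(definition):
            def_sememes |= token_sememes(token)
        # Pick the sememe group with the largest overlap.
        best_group = max(hownet_sememe_groups,
                         key=lambda group: len(def_sememes & set(group)))
        entries.append({"word": word,
                        "sememes": list(best_group),
                        "definition": definition})
    return entries
```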
We show the effectiveness of all models on the CDM dataset. All the embeddings, including word and sememe embeddings, are fixed 300-dimensional word embeddings pretrained on the Chinese Gigaword corpus (LDC2011T13). All definitions are segmented with the Jieba Chinese text segmentation tool, and we use the resulting unique segments as the decoder vocabulary. To evaluate the difference between the generated results and the gold-standard definitions, we compute the BLEU score using a script provided by Moses, following BIBREF3. We implement the Baseline and AAM by modifying the code of BIBREF9, and SAAM with fairseq-py. We use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$. Since the morphemes of the word to be defined can benefit definition modeling, BIBREF3 obtain the model with the best performance by adding a trainable embedding from a character-level CNN to the fixed word embedding. To obtain the state-of-the-art result as the baseline, we follow BIBREF3 and experiment with the character-level CNN with the same hyperparameters. To be comparable with the baseline, we also use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for both the encoder and the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$. We use the same hyperparameters as BIBREF10 and set them as $d_{\text{model}}=300$, $d_{\text{hidden}}=2048$, $n_{\text{head}}=5$, $n_{\text{layer}}=6$. To be comparable with AAM, we use the same batch size of 128. We also employ the label smoothing technique BIBREF17 with a smoothing value of 0.1 during training. We report the experimental results on the CDM test set in Figure 3. It shows that both of our proposed models, namely AAM and SAAM, achieve good results and outperform the baseline by a large margin. With sememes, AAM and SAAM improve over the baseline by 3.1 BLEU and 6.65 BLEU, respectively. We also find that sememes are very useful for generating the definition: the incorporation of sememes improves the AAM by 3.32 BLEU and the SAAM by 3.53 BLEU. This can be explained by the fact that sememes help to disambiguate the word sense associated with the target definition. Among all models, SAAM with sememes achieves the new state-of-the-art with a BLEU score of 36.36 on the test set, demonstrating the effectiveness of sememes and of the SAAM architecture. Table 2 lists some example definitions generated with the different models. For each word-sememes pair, the three generated definitions are ordered as Baseline, AAM and SAAM. For AAM and SAAM we use the model that incorporates sememes. These examples show that, with sememes, the model can generate more accurate and concrete definitions. For example, for the word "hotel", the baseline model fails to generate a definition containing the token "tourists". However, by incorporating sememe information, especially the sememe "tour", AAM and SAAM successfully generate "tourists". Manual inspection of other examples also supports our claim. We also conduct an ablation study to evaluate the various choices we made for SAAM. We consider three key components: position embedding, the adaptive attention layer, and the incorporated sememes. As illustrated in Table 3, we remove one of these components at a time and report the performance of the resulting model on the validation set and the test set. We also list the performance of the baseline and AAM for reference. It demonstrates that all components benefit the SAAM. Removing position embedding is 0.31 BLEU below the full SAAM on the test set. Removing the adaptive attention layer is 0.43 BLEU below the full SAAM on the test set. Sememes affect the most: without incorporating sememes, the performance drops by 3.53 BLEU on the test set. Distributed representations of words, or word embeddings BIBREF18, have been widely used in the field of NLP in recent years. Since word embeddings have been shown to capture lexical semantics, BIBREF3 proposed the definition modeling task as a more transparent and direct representation of word embeddings. This work is followed by BIBREF4, who studied the problem of word ambiguity in definition modeling by employing latent variable modeling and soft attention mechanisms. Both works focus on evaluating and interpreting word embeddings. In contrast, we incorporate sememes to generate word sense aware definitions for dictionary compilation. Recently, many knowledge bases (KBs) have been established in order to organize human knowledge in structural forms. By providing human experiential knowledge, KBs are playing an increasingly important role as infrastructural facilities of natural language processing. HowNet BIBREF19 is a knowledge base that annotates each concept in Chinese with one or more sememes. HowNet plays an important role in understanding the semantic meanings of concepts in human languages, and has been widely used in word representation learning BIBREF7, word similarity computation BIBREF20 and sentiment analysis BIBREF21. For example, BIBREF7 improved word representation learning by utilizing sememes to represent the various senses of each word and selecting suitable senses in context with an attention mechanism. The Chinese Concept Dictionary (CCD) is a WordNet-like semantic lexicon BIBREF22 BIBREF23, where each concept is defined by a set of synonyms (SynSet). CCD has been widely used in many NLP tasks, such as word sense
disambiguation BIBREF23 In this work we annotate the word with aligned sememes from HowNet and definition from CCD Selfattention is a special case of attention mechanism that relates different positions of a single sequence in order to compute a representation for the sequence Selfattention has been successfully applied to many tasks recently BIBREF24 BIBREF25 BIBREF26 BIBREF10 BIBREF12 BIBREF11 BIBREF10 introduced the first transduction model based on selfattention by replacing the recurrent layers commonly used in encoderdecoder architectures with multihead selfattention The proposed model called Transformer achieved the stateoftheart performance on neural machine translation with reduced training time After that BIBREF12 demonstrated that selfattention can improve semantic role labeling by handling structural information and long range dependencies BIBREF11 further extended selfattention to constituency parsing and showed that the use of selfattention helped to analyze the model by making explicit the manner in which information is propagated between different locations in the sentence Selfattention has many good properties It reduces the computation complexity per layer allows for more parallelization and reduces the path length between longrange dependencies in the network In this paper we use selfattention based architecture in SAAM to learn the relations of word sememes and definition automatically We introduce the Chinese definition modeling task that generates a definition in Chinese for a given word and sememes of a specific word sense This task is useful for dictionary compilation To achieve this we constructed the CDM dataset with wordsememesdefinition triples We propose two novel methods AAM and SAAM to generate word sense aware definition by utilizing sememes In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the stateoftheart approach with a large margin By efficiently incorporating sememes the SAAM achieves the best performance with improvement over the stateoftheart method </s>
<s>[INST] RobBERT: a Dutch RoBERTa-based Language Model [/INST] The advent of neural networks in natural language processing NLP has significantly improved stateoftheart results within the field While recurrent neural networks RNNs and long shortterm memory networks LSTMs initially dominated the field recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in socalled transformer models BIBREF0 This latter type of model caused a new revolution in NLP and led to popular language models like GPT2 BIBREF1 BIBREF2 and ELMo BIBREF3 BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way rather than only from lefttoright or the other way around This model was later reimplemented critically evaluated and improved in the RoBERTa model BIBREF5 These largescale transformer models provide the advantage of being able to solve NLP tasks by having a common expensive pretraining phase followed by a smaller finetuning phase The pretraining happens in an unsupervised way by providing large corpora of text in the desired language The second phase only needs a relatively small annotated data set for finetuning to outperform previous popular approaches in one of a large number of possible language tasks While language models are usually trained on English data some multilingual models also exist These are usually trained on a large quantity of text in different languages For example MultilingualBERT is trained on a collection of corpora in 104 different languages BIBREF4 and generalizes language components well across languages BIBREF6 However models trained on data from one specific language usually improve the performance of multilingual models for this particular language BIBREF7 BIBREF8 Training a RoBERTa model BIBREF5 on a Dutch dataset thus has a lot of potential for increasing performance for many downstream Dutch NLP tasks In this paper we introduce RobBERT a Dutch RoBERTabased pretrained language model and critically test its performance using natural language tasks against other Dutch languages models Transformer models have been successfully used for a wide range of language tasks Initially transformers were introduced for use in machine translation where they vastly improved stateoftheart results for English to German in an efficient manner BIBREF0 This transformer model architecture resulted in a new paradigm in NLP with the migration from sequencetosequence recurrent neural networks to transformerbased models by removing the recurrent component and only keeping attention This cornerstone was used for BERT a transformer model that obtained stateoftheart results for eleven natural language processing tasks such as question answering and natural language inference BIBREF4 BERT is pretrained with large corpora of text using two unsupervised tasks The first task is word masking also called the Cloze task BIBREF9 or masked language model MLM where the model has to guess which word is masked in certain position in the text The second task is next sentence prediction This is done by predicting if two sentences are subsequent in the corpus or if they are randomly sampled from the corpus These tasks allowed the model to create internal representations about a language which could thereafter be reused for different language tasks This architecture has been shown to be a general language model that could be finetuned with little data 
in a relatively efficient way for a very distinct range of tasks and still outperform previous architectures BIBREF4 Transformer models are also capable of generating contextualized word embeddings These contextualized embeddings were presented by BIBREF3 and addressed the well known issue with a words meaning being defined by its context eg a stick versus lets stick to This lack of context is something that traditional word embeddings like word2vec BIBREF10 or GloVe BIBREF11 lack whereas BERT automatically incorporates the context a word occurs in Another advantage of transformer models is that attention allows them to better resolve coreferences between words BIBREF12 A typical example for the importance of coreference resolution is The trophy doesnt fit in the brown suitcase because its too big where the word it would refer to the the suitcase instead of the trophy if the last word was changed to small BIBREF13 Being able to resolve these coreferences is for example important for translating to languages with gender as suitcase and trophy have different genders in French Although BERT has been shown to be a useful language model it has also received some scrutiny on the training and preprocessing of the language model As mentioned before BERT uses next sentence prediction NSP as one of its two training tasks In NSP the model has to predict whether two sentences follow each other in the training text or are just randomly selected from the corpora The authors of RoBERTa BIBREF5 showed that while this task made the model achieve a better performance it was not due to its intended reason as it might merely predict relatedness rather than subsequent sentences That BIBREF4 trained a better model when using NSP than without NSP is likely due to the model learning longrange dependencies in text from its inputs which are longer than just the single sentence on itself As such the RoBERTa model uses only the MLM task and uses multiple full sentences in every input Other research improved the NSP task by instead making the model predict the correct order of two sentences where the model thus has to predict whether the sentences occur in the given order in the corpus or occur in flipped order BIBREF14 BIBREF4 also presented a multilingual model mBERT with the same architecture as BERT but trained on Wikipedia corpora in 104 languages Unfortunately the quality of these multilingual embeddings is often considered worse than their monolingual counterparts BIBREF15 illustrated this difference in quality for German and English models in a generative setting The monolingual French CamemBERT model BIBREF7 also compared their model to mBERT which performed poorer on all tasks More recently BIBREF8 also showed similar results for Dutch using their BERTje model outperforming multilingual BERT in a wide range of tasks such as sentiment analysis and partofspeech tagging Since this work is concurrent with ours we compare our results with BERTje in this paper This section describes the data and training regime we used to train our Dutch RoBERTabased language model called RobBERT We pretrained our model on the Dutch section of the OSCAR corpus a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16 This Dutch corpus has 66 billion words totalling 39 GB of text It contains 126064722 lines of text where each line can contain multiple sentences Subsequent lines are however not related to each other due to the shuffled nature of the OSCAR data set For comparison 
the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text. Our data differs in several ways from the data used to train BERTje, a BERT-based Dutch language model BIBREF8. Firstly, they trained the model on an assembly of multiple Dutch corpora totalling only 12 GB. Secondly, they used WordPiece as subword embeddings, since this is what the original BERT architecture uses. RobBERT, on the other hand, uses Byte Pair Encoding (BPE), which is also used by GPT-2 BIBREF2 and RoBERTa BIBREF5. RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT BIBREF5. The architecture of our language model is thus equal to the original BERT model, with 12 self-attention layers with 12 heads BIBREF4. One difference with the original BERT is due to the different pretraining task specified by RoBERTa, using only the MLM task and not the NSP task. The training thus only uses word masking, where the model has to predict which words were masked in certain positions of a given line of text. The training process uses the Adam optimizer BIBREF17 with polynomial decay of the learning rate ($lr = 10^{-6}$) and a ramp-up period of 1,000 iterations, with parameters $\beta_1 = 0.9$ (a common default) and RoBERTa's default $\beta_2 = 0.98$. Additionally, we also used a weight decay of 0.1 as well as a small dropout of 0.1 to help prevent the model from overfitting BIBREF18. We used a computing cluster in order to efficiently pretrain our model. More specifically, the pretraining was executed on a computing cluster with 20 nodes with 4 Nvidia Tesla P100 GPUs (16 GB VRAM each) and 2 nodes with 8 Nvidia V100 GPUs (32 GB VRAM each). This pretraining happened in fixed batches of 8,192 sentences by rescaling each GPU's batch size depending on the number of GPUs available, in order to maximally utilize the cluster without blocking it entirely for other users. The model trained for two epochs, which is over 16k batches in total. With the large batch size of 8,192, this equates to 0.5M updates for a traditional BERT model. At this point the perplexity did not decrease any further. We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT models and state-of-the-art systems on sentiment analysis, to show its performance for classification tasks. Second, we compare its performance on a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pretrained model without any fine-tuning. We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called the Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification) BIBREF19. We finetuned RobBERT on the first 10,000 training examples as well as on the full data set.
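As a concrete illustration of this fine-tuning setup, the sketch below fine-tunes a pretrained RoBERTa-style checkpoint for binary sentiment classification with the HuggingFace transformers Trainer. It is our own minimal example rather than the training script used here: the checkpoint path, the DBRD loading step and the hyperparameters are placeholders, and overly long reviews are simply truncated, whereas the paper compares keeping the first or the last part of each review.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

# Placeholder checkpoint and data: substitute the actual Dutch model and
# the DBRD train/test splits here.
checkpoint = "path/to/dutch-roberta-checkpoint"   # hypothetical path
train_texts, train_labels = ["..."], [1]          # DBRD reviews and 0/1 labels
test_texts, test_labels = ["..."], [0]

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def encode(batch):
    # Reviews longer than the model's maximum input length are truncated here.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

train_ds = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(encode, batched=True)
test_ds = Dataset.from_dict({"text": test_texts, "label": test_labels}).map(encode, batched=True)

args = TrainingArguments(output_dir="dbrd-sentiment",
                         per_device_train_batch_size=16,   # placeholder values
                         learning_rate=1e-5,
                         num_train_epochs=2)
Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=test_ds).train()
```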
While the ULMFiT model is first finetuned using the unlabeled reviews before training the classifier BIBREF19, it is unclear whether BERTje also first finetuned on the unlabeled reviews or only used the labeled data for finetuning the pretrained model. It is also unclear how it dealt with reviews being longer than the maximum number of tokens allowed as input in BERT models, as the average book review length is 547 tokens, with 40% of the documents being longer than our RobBERT model can handle. For a safe comparison, we thus decided to discard the unlabeled data and only use the labeled data for training and test purposes (20,028 and 2,224 examples respectively), and to compare approaches for dealing with too long input sequences. We trained our model for 2,000 iterations with a batch size of 128 and a warmup of 500 iterations, reaching a learning rate of $10^{-5}$. We found that our model performed better when trained on the last part of the book reviews than on the first part. This is likely due to this part containing concluding remarks summarizing the overall sentiment. While BERTje was slightly outperformed by ULMFiT BIBREF8 BIBREF19, we can see that RobBERT achieves better performance than both on the test set, although the performance difference over the ULMFiT model is not statistically significant, as can be seen in Table TABREF4. Aside from the classic natural language processing tasks in the previous subsections, we also evaluated its performance on a task that is specific to Dutch, namely disambiguating "die" and "dat" ("that" in English). In Dutch, depending on the sentence, both terms can be either demonstrative or relative pronouns; in addition, they can also be used in a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word it refers to. Distinguishing these words is a task introduced by BIBREF20, who presented multiple models trained on the Europarl BIBREF21 and SoNaR corpora BIBREF22. The results ranged from an accuracy of 75.03% on Europarl to 84.56% on SoNaR. For this task, we use the Dutch version of the Europarl corpus BIBREF21, which we split in 1.3M utterances for training, 319k for validation and 399k for testing. We then process every sentence by checking if it contains "die" or "dat", and if so, add a training example for every occurrence of this word in the sentence, where a single occurrence is masked. For the test set, for example, this resulted in about 289k masked sentences. We then test two different approaches for solving this task on this dataset. The first approach is making the BERT models use their MLM task to guess which word should be filled in this spot, and checking whether the model has more confidence in either "die" or "dat" (by checking at most the first 2,048 guesses, as this seemed sufficiently large). This allows us to compare the zero-shot BERT models, i.e. without any finetuning after pretraining, for which the results can be seen in Table TABREF7. The second approach uses the same data, but creates two sentences by filling in the mask with both "die" and "dat", appending both with the SEP token and making the model predict which of the two sentences is correct. The finetuning was performed using 4 Nvidia GTX 1080 Ti GPUs and evaluated against the same test set of 399k utterances. As before, we finetuned the model twice: once with the full training set and once with a subset of 10k utterances from the training set, to illustrate the benefits of pretraining on low-resource tasks. RobBERT outperforms previous models as well as other BERT models, both with and without finetuning (see Table TABREF4 and Table TABREF7). It is also able to reach similar performance using less data. The fact that
zeroshot RobBERT outperforms other zeroshot BERT models is also an indication that the base model has internalised more knowledge about Dutch than the other two have The reason RobBERT and other BERT models outperform the previous RNNbased approach is likely the transformers ability to deal better with coreference resolution BIBREF12 and by extension better in deciding which word the die or dat belongs to The training and evaluation code of this paper as well as the RobBERT model and the finetuned models are publicly available for download on httpsgithubcomiPieterRobBERT There are several possible improvements as well as interesting future directions for this research for example in training similar models First as BERTbased models are a very active field of research it is interesting to experiment with change the pretraining tasks with new unsupervised tasks when they are discovered such as the sentence order prediction BIBREF14 Second while RobBERT is trained on lines that contain multiple sentences it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus BIBREF16 This is unlike RoBERTa which does put full sentences next to each other if they fit in order to learn the longrange dependencies between words that the original BERT learned using its controversial NSP task It could be interesting to use the processor used to create OSCAR in order to create an unshuffled version to train on such that this technique can be used on the data set Third RobBERT uses the same tokenizer as RoBERTa meaning it uses a tokenizer built for the English language Training a new model using a custom Dutch tokenizer eg using the newly released HuggingFace tokenizers library BIBREF23 could increase the performance even further On the same note incorporating more Unicode glyphs as separate tokens can also be beneficial for example for tasks related to conversational agents BIBREF24 RobBERT itself could also be used in new settings to help future research First RobBERT could be used in different settings thanks to the renewed interest of sequencetosequence models due to their results on a vast range of language tasks BIBREF25 BIBREF26 These models use a BERTlike transformer stack for the encoder and depending on the task a generative model as a decoder These advances once again highlight the flexibility of the selfattention mechanism and it might be interesting to research the reusability of RobBERT in these type of architectures Second there are many Dutch language tasks that we did not examine in this paper for which it may also be possible to achieve stateoftheart results when finetuned on this pretrained model We introduced a new language model for Dutch based on RoBERTa called RobBERT and showed that it outperforms earlier approaches for Dutch language tasks as well as other BERTbased language models We thus hope this model can serve as a base for finetuning on other tasks and thus help foster new models that might advance results for Dutch language tasks Pieter Delobelle was supported by the Research Foundation Flanders under EOS No 30992574 and received funding from the Flemish Government under the Onderzoeksprogramma Artificile Intelligentie AI Vlaanderen programme Thomas Winters is a fellow of the Research FoundationFlanders FWOVlaanderen Most computational resources and services used in this work were provided by the VSC Flemish Supercomputer Center funded by the Research Foundation Flanders FWO and the Flemish Government department EWI We are 
especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the diedat task We are also grateful to BIBREF27 BIBREF28 BIBREF29 BIBREF23 for their software packages </s>
<s>[INST] Natural Language State Representation for Reinforcement Learning [/INST] The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations Edward Sapir Language An Introduction to the Study of Speech 1921 Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning ML tasks Reinforcement Learning RL is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0 BIBREF1 These deep neural networks are used in RL to estimate value functions stateaction value functions policy mappings nextstate predictions rewards and more BIBREF2 BIBREF3 BIBREF4 thus combating the curse of dimensionality The term representation is used differently in different contexts For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it Previous success in RL has mainly focused on representing the state in its raw form eg visual input in Ataribased games BIBREF2 This approach stems from the belief that neural networks specifically convolutional networks can extract meaningful features from complex inputs In this work we challenge current representation techniques and suggest to represent the state using natural language similar to the way we as humans summarize and transfer information efficiently from one to the other BIBREF5 The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning Humans use rich natural language to describe and communicate their visual perceptions feelings beliefs strategies and more The semantics inherent to natural language carry knowledge and cues of complex types of content including events spatial relations temporal relations semantic roles logical structures support for inference and entailment as well as predicates and arguments BIBREF6 The expressive nature of language can thus act as an alternative semantic state representation Over the past few years Natural Language Processing NLP has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7 BIBREF8 to Natural Language Inference BIBREF9 BIBREF10 BIBREF11 through Syntactic Parsing BIBREF12 BIBREF13 BIBREF14 Recent work has shown the ability to learn flexible hierarchical contextualized representations obtaining stateoftheart results on various natural language processing tasks BIBREF15 A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input Moreover our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state In this work we assume a state can be described using natural language sentences We use distributional embedding methods in order to represent sentences processed with a standard Convolutional Neural Network for feature extraction In Section SECREF2 we describe the basic frameworks we rely on We discuss possible semantic representations in Section SECREF3 namely raw visual inputs 
semantic segmentation feature vectors and natural language representations Then in Section SECREF4 we compare NLP representations with their alternatives Our results suggest that representation of the state using natural language can achieve better performance even on difficult tasks or tasks in which the description of the state is saturated with tasknuisances BIBREF17 Moreover we observe that NLP representations are more robust to transfer and changes in the environment We conclude the paper with a short discussion and related work In Reinforcement Learning the goal is to learn a policy pi s which is a mapping from state s to a probability distribution over actions mathcal A with the objective to maximize a reward rs that is provided by the environment This is often solved by formulating the problem as a Markov Decision Process MDP BIBREF19 Two common quantities used to estimate the performance in MDPs are the value v s and actionvalue Q s a functions which are defined as follows vs mathbb Epi sum t gamma t rt s0 s and Qs a mathbb Epi sum t gamma t rt s0 s a0 a Two prominent algorithms for solving RL tasks which we use in this paper are the valuebased DQN BIBREF2 and the policybased PPO BIBREF3 Deep Q Networks DQN The DQN algorithm is an extension of the classical Qlearning approach to a deep learning regime Qlearning learns the optimal policy by directly learning the value function ie the actionvalue function A neural network is used to estimate the Qvalues and is trained to minimize the Bellman error namely Proximal Policy Optimization PPO While the DQN learns the optimal behavioral policy using a dynamic programming approach PPO takes a different route PPO builds upon the policy gradient theorem which optimizes the policy directly with an addition of a trustregion update rule The policy gradient theorem updates the policy by A word embedding is a mapping from a word w to a vector mathbf w in mathbb Rd A simple form of word embedding is the Bag of Words BoW a vector mathbf w in mathbb ND D is the dictionary size in which each word receives a unique 1hot vector representation Recently more efficient methods have been proposed in which the embedding vector is smaller than the dictionary size d ll D These methods are also known as distributional embeddings The distributional hypothesis in linguistics is derived from the semantic theory of language usage ie words that are used and occur in the same contexts tend to have similar meanings Distributional word representations are a fundamental building block for representing natural language sentences Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis improving efficiency of stateoftheart language models Convolutional Neural Networks CNNs originally invented for computer vision have been shown to achieve strong performance on text classification tasks BIBREF22 BIBREF23 as well as other traditional NLP tasks BIBREF24 In this paper we consider a common architecture BIBREF25 in which each word in a sentence is represented as an embedding vector a single convolutional layer with m filters is applied producing an mdimensional vector for each ngram The vectors are combined using maxpooling followed by a ReLU activation The result is then passed through multiple hidden linear layers with ReLU activation eventually generating the final output Contemporary methods for semantic representation of states currently follow one of three approaches 1 raw visual inputs BIBREF2 BIBREF26 in which raw sensory 
values of pixels are used from one or multiple sources 2 feature vectors BIBREF27 BIBREF28 in which general features of the problem are chosen with no specific structure and 3 semantic segmentation maps BIBREF29 BIBREF30 in which discrete or logical values are used in one or many channels to represent the general features of the state The common approach is to derive decisions eg classification action etc based on information in its raw form In RL the raw form is often the pixels representing an image however the image is only one form of a semantic representation In Semantic Segmentation the image is converted from a 3channel RGB matrix into an Nchannel matrix where N is the number of classes In this case each channel represents a class and a binary value at each coordinate denotes whether or not this class is present in the image at this location For instance fig semantic segmentation example considers an autonomous vehicle task The raw image and segmentation maps are both sufficient for the task ie both contain a sufficient semantic representation Nevertheless the semantic segmentation maps contain less tasknuisances BIBREF17 which are random variables that affect the observed data but are not informative to the task we are trying to solve In this paper we propose a forth method for representing a state namely using natural language descriptions One method to achieve such a representation is through Image Captioning BIBREF31 BIBREF32 Natural language is both rich as well as flexible This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible As an example the top image in fig semantic segmentation example can be represented using natural language as There is a car in your lane two meters in front of you a bicycle rider on your far left in the negative lane a car in your direction in the opposite lane which is twenty meters away and trees and pedestrians on the side walk or compactly by There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away Language also allows us to efficiently compress information As an example the segmentation map in the bottom image of fig semantic segmentation example can be compactly described by There are 13 pedestrians crossing the road in front of you In the next section we will demonstrate the benefits of using naturallanguagebased semantic state representation in a first person shooter enviornment In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26 as described in the previous section More specifically we use a semantic natural language parser in order to describe a state over numerous instances of levels varying in difficulty tasknuisances and objectives Our results show that though semantic segmentation and feature vector representation techniques express a similar statistic of the state natural language representation offers better performance faster convergence more robust solutions as well as better transfer The ViZDoom environment involves a 3D world that is significantly more realworldlike than Atari 2600 games with a relatively realistic physics model An agent in the ViZDoom environment must effectively perceive interpret and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act There are three types of state representations that are provided by the environment 
The first which is also most commonly used is raw visual inputs in which the state is represented by an image from a first person view of the agent A feature vector representation is an additional state representation provided by the environment The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent Lastly the environment provides a semantic segmentation map based on the aforementioned feature vector An example of the visual representations in VizDoom is shown in fig representations in vizdoom In order to incorporate natural language representation to the VizDoom environment weve constructed a semantic parser of the semantic segmentation maps provided by the environment Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame To implement this the screen was divided into several vertical and horizontal patches as depicted in fig patches These patches describe relational aspects of the state such as distance of objects and their direction with respect to the agents point of view In each patch objects were counted and a natural language description of the patch was constructed This technique was repeated for all patches to form the final state representation fig nlp state rep depicts examples of natural language sentences of different states in the enviornment We tested the natural language representation against the visualbased and feature representations on several tasks with varying difficulty In these tasks the agent could navigate shoot and collect items such as weapons and medipacks Often enemies of different types attacked the agent and a positive reward was given when an enemy was killed Occasionally the agent also suffered from health degeneration The tasks included a basic scenario a health gathering scenario a scenario in which the agent must take cover from fireballs a scenario in which the agent must defend itself from charging enemies and a super scenario where a mixture of the above scenarios was designed to challenge the agent More specifically in the basic scenario a single monster is spawned in front of the agent The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it In the health gathering scenario the floor of the room is covered in toxin causing the agent to gradually lose health Medipacks are spawned randomly in the room and the agents objective is to keep itself alive by collecting them In the take cover scenario multiple fireball shooting monsters are spawned in front of the agent The goal of the agent is to stay alive as long as possible dodging inbound fireballs The difficulty of the task increases over time as additional monsters are spawned In the defend the center scenario melee attacking monsters are randomly spawned in the room and charge towards the agent As opposed to other scenarios the agent is incapable of moving aside from turning left and right and shooting In the defend the line scenario both melee and fireball shooting monsters are spawned near the opposing wall The agent can only step right left or shoot Finally in the super scenario both melee and fireball shooting monsters are repeatably spawned all over the room the room contains various items the agent can pick up and use such as medipacks shotguns ammunition and armor Furthermore the room is filled with unusable objects various types of trees pillars and other decorations The agent can freely move and turn in any 
direction as well as shoot This scenario combines elements from all of the previous scenarios Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4 We converted the parsed state into embedded representations of fixed length We tested both a DQN and a PPO based agent and compared the natural language representation to the other representation techniques namely the raw image feature vector and semantic segmentation representations In order to effectively compare the performance of the different representation methods we conducted our experiments under similar conditions for all agents The same hyperparameters were used under all tested representations Moreover to rule out effects of architectural expressiveness the number of weights in all neural networks was approximately matched regardless of the input type Finally we ensured the super scenario was positively biased toward imagebased representations This was done by adding a large amount items to the game level thereby filling the state with nuisances these tests are denoted by nuisance in the scenario name This was especially evident in the NLP representations as sentences became extensively longer average of over 250 words This is contrary to imagebased representations which did not change in dimension Results of the DQNbased agent are presented in fig scenario comparison Each plot depicts the average reward across 5 seeds of all representations methods It can be seen that the NLP representation outperforms the other methods This is contrary to the fact that it contains the same information as the semantic segmentation maps More interestingly comparing the visionbased and featurebased representations render inconsistent conclusions with respect to their relative performance NLP representations remain robust to changes in the environment as well as tasknuisances in the state As depicted in fig nuisance scenarios inflating the state space with tasknuisances impairs the performance of all representations There a large amount of unnecessary objects were spawned in the level increasing the states description length to over 250 words whilst retaining the same amount of useful information Nevertheless the NLP representation outperformed the vision and feature based representations with high robustness to the applied noise In order to verify the performance of the natural language representation was not due to extensive discretization of patches weve conducted experiments increasing the number of horizontal patches ranging from 3 to 31 patches in the extreme case Our results as depicted in fig patch count indicate that the amount of discretization of patches did not affect the performance of the NLP agent remaining a superior representation compared to the rest To conclude our experiments suggest that NLP representations though they describe the same raw information of the semantic segmentation maps are more robust to tasknuisances allow for better transfer and achieve higher performance in complex tasks even when their description is long and convoluted While weve only presented results for DQN agents we include plots for a PPO agent in the Appendix showing similar trends and conclusions We thus deduce that NLPbased semantic state representations are a preferable choice for training VizDoom agents Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33 In particular deep learning exploits this concept by its very 
nature BIBREF2 Work on representation learning include Predictive State Representations PSR BIBREF34 BIBREF35 which capture the state as a vector of predictions of future outcomes and a Heuristic Embedding of Markov Processes HEMP BIBREF36 which learns to embed transition probabilities using an energybased optimization problem There has been extensive work attempting to use natural language in RL Efforts that integrate language in RL develop tools approaches and insights that are valuable for improving the generalization and sample efficiency of learning agents Previous work on languageconditioned RL has considered the use of natural language in the observation and action space Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing textbased games Nevertheless these environments do not search for semantic state representations in which an RL algorithm can be better evaluated and controlled BIBREF38 use highlevel semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model BIBREF39 use highlevel guidance expressed in text to enrich a stochastic agent playing against the builtin AI of Civilization II They train an agent with the MonteCarlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback BIBREF40 utilize natural language in a modelbased approach to describe the dynamics and rewards of an environment showing these can facilitate transfer between different domains More recently the structure and compositionality of natural language has been used for representing policies in hierarchical RL In a paper by BIBREF41 instructions given in natural language were used in order to break down complex problems into highlevel plans and lowerlevel actions Their suggested framework leverages the structure inherent to natural language allowing for transfer to unfamiliar tasks and situations This use of semantic structure has also been leveraged by BIBREF42 where abstract actions not necessarily words were recognized as symbols of a natural and expressive language improving performance and transfer of RL agents Outside the context of RL previous work has also shown that highquality linguistic representations can assist in crossmodal transfer such as using semantic relationships between labels for zeroshot transfer in image classification BIBREF43 BIBREF44 Our results indicate that natural language can outperform and sometime even replace visionbased representations Nevertheless natural language representations can also have disadvantages in various scenarios For one they require the designer to be able to describe the state exactly whether by a rulebased or learned parser Second they abstract notions of the state space that the designer may not realize are necessary for solving the problem As such semantic representations should be carefully chosen similar to the process of reward shaping or choosing a training algorithm Here we enumerate three instances in which we believe natural language representations are beneficial Natural usecase Information contained in both generic and taskspecific textual corpora may be highly valuable for decision making This case assumes the state can either be easily described using natural language or is already in a natural language state This includes examples such as userbased domains in which user profiles and comments are part of 
the state or the stock market in which stocks are described by analysts and other readily available text 3D physical environments such as VizDoom also fall into this category as semantic segmentation maps can be easily described using natural language Subjective information Subjectivity refers to aspects used to express opinions evaluations and speculations These may include strategies for a game the way a doctor feels about her patient the mood of a driver and more Unstructured information In these cases features might be measured by different units with an arbitrary position in the states feature vector rendering them sensitive to permutations Such state representations are thus hard to process using neural networks As an example the medical domain may contain numerous features describing the vitals of a patient These raw features when observed by an expert can be efficiently described using natural language Moreover they allow an expert to efficiently add subjective information An orthogonal line of research considers automating the process of image annotation The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation We suspect the noise accumulated by this procedure would require additional information to be added to the state eg past information Nevertheless as we have shown in this paper such information can be compressed using natural language In addition while we have only considered spatial features of the state information such as movement directions and transient features can be efficiently encoded as well Natural language representations help abstract information and interpret the state of an agent improving its overall performance Nevertheless it is imperative to choose a representation that best fits the domain at hand Designers of RL algorithms should consider searching for a semantic representation that fits their needs While this work only takes a first step toward finding better semantic state representations we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal VizDoom is a Doom based research environment that was developed at the Pozna University of Technology It is based on ZDoom game executable and includes a Python based API The API offers the user the ability to run game instances query the game state and execute actions The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning Thus a natural language representation for the game was needed to be implemented ViZDoom emulates the Doom game and enables us to access data within a certain frame using Python dictionaries This makes it possible to extract valuable data including player health ammo enemy locations etc Each game frame contains labels which contain data on visible objects in the game the player enemies medkits etc We used Doom Builder in order to edit some of the scenarios and design a new one Enviroment rewards are presented in doomscenariostable A semantic representation using natural language should contain information which can be deduced by a human playing the game For example even though a human does not know the exact distance between objects it can classify them as close or far However objects that are outside the players field of vision can not be a part of the state Furthermore a human would most likely refer to an objects location relative to itself using directions such as right or left To convert each frame to a natural 
language representation state, the list of available labels is iterated and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the amount of different objects inside each patch by their types, and parse it as a sentence. The decision as to whether an object is close or far can be determined by calculating the distance from it to the player and using two threshold levels. Object descriptions can be concise or detailed as needed. We experimented with the following mechanics: the screen can be divided between patches equally or by determined ratios. Here our main guideline was to keep the front patch narrow enough so it can be used as sights. Our initial experiment was with 3 patches, and later we added 2 more patches classified as outer left and outer right. In our experiments we have tested up to 51 patches, referred to as left or right patches with corresponding numbers. We used 2 thresholds, which allowed us to classify the distance of an object from the player as close, mid and far. Depending on the task, the values of the thresholds can be changed, as well as adding more thresholds. Different states might generate sentences of different sizes. A maximum sentence length is another parameter that was tested. sentenceslengthtable presents some data regarding the average word count in some of the game scenarios. After the sentence describing the state is generated, it is transformed to an embedding vector. Words that were not found in the vocabulary were replaced with an OOV vector. All words were then concatenated to an NxDx1 matrix representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. Eventually we used the latter, as it consumes less memory and speeds up the training process.
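The following sketch shows one way such a patch-based parser could look. It is an illustrative reconstruction of the procedure described above rather than the original code: the object format (a label plus screen x-coordinate and distance), the two distance thresholds, and the exact phrasing of the generated sentence are assumptions made for the example.

```python
def frame_to_sentence(objects, screen_width=640, near=100.0, far=300.0):
    """Convert a list of labeled VizDoom objects into a natural language state.

    `objects` is assumed to be a list of (label, x, distance) tuples taken from
    the frame's label buffer; `near`/`far` split distances into close/mid/far.
    Three vertical patches are used here (left, front, right).
    """
    patch_names = ["to your left", "in front of you", "to your right"]
    patch_width = screen_width / len(patch_names)
    counts = {}  # (patch index, distance range, label) -> count
    for label, x, distance in objects:
        patch = min(int(x // patch_width), len(patch_names) - 1)
        rng = "close" if distance < near else "mid" if distance < far else "far"
        counts[(patch, rng, label)] = counts.get((patch, rng, label), 0) + 1

    parts = [f"{n} {label}(s) {rng} {patch_names[p]}"
             for (p, rng, label), n in sorted(counts.items())]
    return "there is nothing around you" if not parts else ", ".join(parts)

# Example: two zombies close on the left, a medkit far ahead.
print(frame_to_sentence([("zombie", 50, 80), ("zombie", 90, 60),
                         ("medkit", 320, 400)]))
```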
The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero padded, while longer ones are trimmed. All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-values of the available actions. The PPO agents used an Actor-Critic model with two networks: the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures. Used for the raw image and semantic segmentation based agents: VizDoom's raw output image resolution is a 640x480x3 RGB image. We experimented with both the original image and its downsampled version. The semantic segmentation image was of resolution 640x480x1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consisted of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6x6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3x3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both of them followed by ReLU activation. The output layer's size is the amount of actions the agent has available in the trained scenario. Used in the feature vector based agent: naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was made using features we extracted from the VizDoom API, and its dimension was 90x1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units and the second one has 16 units. The output layer's size was the amount of actions available to the agent. Used in the natural language based agent: as previously mentioned, each word in the natural language state is transformed into a 200x50x1 matrix. The first layers of the TextCNN are convolutional layers with 8 filters, which are designed to scan the input sentence and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of sequences of words. Longer filters have a higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3x50x1, 4x50x1, 5x50x1, 8x50x1, 11x50x1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; the first layer has 32 units and the second one has 16 units, both of them followed by ReLU activation. All architectures have the same output regardless of the input type: the DQN network is a regression network with its output size the number of available actions, while the PPO agent has 2 networks, actor and critic, where the actor network has a Softmax activation with size equal to the available amount of actions and the critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. </s>
<s>[INST] Query-oriented text summarization based on hypergraph transversals [/INST] The development of automatic tools for the summarization of large corpora of documents has attracted a widespread interest in recent years With fields of application ranging from medical sciences to finance and legal science these summarization systems considerably reduce the time required for knowledge acquisition and decision making by identifying and formatting the relevant information from a collection of documents Since most applications involve large corpora rather than single documents summarization systems developed recently are meant to produce summaries of multiple documents Similarly the interest has shifted from generic towards queryoriented summarization in which a query expresses the users needs Moreover existing summarizers are generally extractive namely they produce summaries by extracting relevant sentences from the original corpus Among the existing extractive approaches for text summarization graphbased methods are considered very effective due to their ability to capture the global patterns of connection between the sentences of the corpus These systems generally define a graph in which the nodes are the sentences and the edges denote relationships of lexical similarities between the sentences The sentences are then scored using graph ranking algorithms such as the PageRank BIBREF0 or HITS BIBREF1 algorithms which can also be adapted for the purpose of queryoriented summarization BIBREF2 A key step of graphbased summarizers is the way the graph is constructed since it has a strong impact on the sentence scores As pointed out in BIBREF3 a critical issue of traditional graphbased summarizers is their inability to capture group relationships among sentences since each edge of a graph only connects a pair of nodes Following the idea that each topic of a corpus connects a group of multiple sentences covering that topic hypergraph models were proposed in BIBREF3 and BIBREF4 in which the hyperedges represent similarity relationships among groups of sentences These group relationships are formed by detecting clusters of lexically similar sentences we refer to as themes or themebased hyperedges Each theme is believed to cover a specific topic of the corpus However since the models of BIBREF3 and BIBREF4 define the themes as groups of lexically similar sentences the underlying topics are not explicitly discovered Moreover their themes do not overlap which contradicts the fact that each sentence carries multiple information and may thus belong to multiple themes as can be seen from the following example of sentence Two topics are covered by the sentence above the topics of studies and leisure Hence the sentence should belong to multiple themes simultaneously which is not allowed in existing hypergraph models of BIBREF3 and BIBREF4 The hypergraph model proposed in this paper alleviates these issues by first extracting topics ie groups of semantically related terms using a new topic model referred to as SEMCOT Then a theme is associated to each topic such that each theme is defined a the group of sentences covering the associated topic Finally a hypergraph is formed with sentences as nodes themes as hyperedges and hyperedge weights reflecting the prominence of each theme and its relevance to the query In such a way our model alleviates the weaknesses of existing hypergraph models since each themebased hyperedge is associated to a specific topic and each sentence may belong to multiple themes 
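To make the construction just outlined concrete, the following Python fragment sketches how a theme-based hypergraph could be assembled once every sentence has been tagged with topics. The function and argument names (build_hypergraph, sim_to_corpus, sim_to_query) are illustrative stand-ins, and the weighting anticipates the scheme defined later in the paper, a convex combination (with parameter lambda) of a theme's centrality in the corpus and its similarity to the query; this is a reading of the model description, not the authors' code.

```python
from collections import defaultdict

def build_hypergraph(sentence_topics, sentence_lengths, sim_to_corpus,
                     sim_to_query, lam=0.4):
    """Illustrative theme-based hypergraph: nodes are sentence indices, each
    hyperedge (theme) is the set of sentences tagged with one topic. Node
    weights are sentence lengths; hyperedge weights combine the theme's
    centrality in the corpus with its similarity to the query."""
    themes = defaultdict(set)
    for i, topics in enumerate(sentence_topics):   # sentence i -> set of topic ids
        for t in topics:
            themes[t].add(i)
    node_weights = dict(enumerate(sentence_lengths))
    edge_weights = {
        t: (1 - lam) * sim_to_corpus(members) + lam * sim_to_query(members)
        for t, members in themes.items()
    }
    return themes, node_weights, edge_weights
```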
Furthermore a common drawback of existing graph and hypergraphbased summarizers is that they select sentences based on the computation of an individual relevance score for each sentence This approach fails to capture the information jointly carried by the sentences which results in redundant summaries missing important topics of the corpus To alleviate this issue we propose a new approach of sentence selection using our themebased hypergraph A minimal hypergraph transversal is the smallest subset of nodes covering all hyperedges of a hypergraph BIBREF5 The concept of hypergraph transversal is used in computational biology BIBREF6 and data mining BIBREF5 for identifying a subset of relevant agents in a hypergraph In the context of our themebased hypergraph a hypergraph transversal can be viewed as the smallest subset of sentences covering all themes of the corpus We extend the notion of transversal to take the theme weights into account and we propose two extensions called minimal soft hypergraph transversal and maximal budgeted hypergraph transversal The former corresponds to finding a subset of sentences of minimal aggregated length and achieving a target coverage of the topics of the corpus in a sense that will be clarified The latter seeks a subset of sentences maximizing the total weight of covered hyperedges while not exceeding a target summary length As the associated discrete optimization problems are NPhard we propose two approximation algorithms building on the theory of submodular functions Our transversalbased approach for sentence selection alleviates the drawback of methods of individual sentence scoring since it selects a set of sentences that are jointly covering a maximal number of relevant themes and produces informative and nonredundant summaries As demonstrated in the paper the time complexity of the method is equivalent to that of early graphbased summarization systems such as LexRank BIBREF0 which makes it more efficient than existing hypergraphbased summarizers BIBREF3 BIBREF4 The scalability of summarization algorithms is essential especially in applications involving large corpora such as the summarization of news reports BIBREF7 or the summarization of legal texts BIBREF8 The method of BIBREF9 proposes to select sentences by using a maximum coverage approach which shares some similarities with our model However they attempt to select a subset of sentences maximizing the number of relevant terms covered by the sentences Hence they fail to capture the topical relationships among sentences which are in contrast included in our themebased hypergraph A thorough comparative analysis with stateoftheart summarization systems is included in the paper Our model is shown to outperform other models on a benchmark dataset produced by the Document Understanding Conference The main contributions of this paper are 1 a new topic model extracting groups of semantically related terms based on patterns of term cooccurrences 2 a natural hypergraph model representing nodes as sentences and each hyperedge as a theme namely a group of sentences sharing a topic and 3 a new sentence selection approach based on hypergraph transversals for the extraction of a subset of jointly relevant sentences The structure of the paper is as follows In section Background and related work we present work related to our method In section Problem statement and system overview we present an overview of our system which is described in further details in section Summarization based on hypergraph transversals 
Then in section Experiments and evaluation we present experimental results Finally section Conclusion presents a discussion and concluding remarks While early models focused on the task of single document summarization recent systems generally produce summaries of corpora of documents BIBREF10 Similarly the focus has shifted from generic summarization to the more realistic task of queryoriented summarization in which a summary is produced with the essential information contained in a corpus that is also relevant to a userdefined query BIBREF11 Summarization systems are further divided into two classes namely abstractive and extractive models Extractive summarizers identify relevant sentences in the original corpus and produce summaries by aggregating these sentences BIBREF10 In contrast an abstractive summarizer identifies conceptual information in the corpus and reformulates a summary from scratch BIBREF11 Since abstractive approaches require advanced natural language processing the majority of existing summarization systems consist of extractive models Extractive summarizers differ in the method used to identify relevant sentences which leads to a classification of models as either featurebased or graphbased approaches Featurebased methods represent the sentences with a set of predefined features such as the sentence position the sentence length or the presence of cue phrases BIBREF12 Then they train a model to compute relevance scores for the sentences based on their features Since featurebased approaches generally require datasets with labelled sentences which are hard to produce BIBREF11 unsupervised graphbased methods have attracted growing interest in recent years Graphbased summarizers represent the sentences of a corpus as the nodes of a graph with the edges modelling relationships of similarity between the sentences BIBREF0 Then graphbased algorithms are applied to identify relevant sentences The models generally differ in the type of relationship captured by the graph or in the sentence selection approach Most graphbased models define the edges connecting sentences based on the cooccurrence of terms in pairs of sentences BIBREF0 BIBREF2 BIBREF3 Then important sentences are identified either based on node ranking algorithms or using a global optimization approach Methods based on node ranking compute individual relevance scores for the sentences and build summaries with highly scored sentences The earliest such summarizer LexRank BIBREF0 applies the PageRank algorithm to compute sentence scores Introducing a query bias in the node ranking algorithm this method can be adapted for queryoriented summarization as in BIBREF2 A different graph model was proposed in BIBREF13 where sentences and key phrases form the two classes of nodes of a bipartite graph The sentences and the key phrases are then scored simultaneously by applying a mutual reinforcement algorithm An extended bipartite graph ranking algorithm is also proposed in BIBREF1 in which the sentences represent one class of nodes and clusters of similar sentences represent the other class The hubs and authorities algorithm is then applied to compute sentence scores Adding terms as a third class of nodes BIBREF14 propose to score terms sentences and sentence clusters simultaneously based on a mutual reinforcement algorithm which propagates the scores across the three node classes A common drawback of the approaches based on node ranking is that they compute individual relevance scores for the sentences and they fail to model the 
information jointly carried by the sentences which may result in redundant summaries Hence global optimization approaches were proposed to select a set of jointly relevant and nonredundant sentences as in BIBREF15 and BIBREF16 For instance BIBREF17 propose a greedy algorithm to find a dominating set of nodes in the sentence graph A summary is then formed with the corresponding set of sentences Similarly BIBREF15 extract a set of sentences with a maximal similarity with the entire corpus and a minimal pairwise lexical similarity which is modelled as a multiobjective optimization problem In contrast BIBREF9 propose a coverage approach in which a set of sentences maximizing the number of distinct relevant terms is selected Finally BIBREF16 propose a two step approach in which individual sentence relevance scores are computed first Then a set of sentences with a maximal total relevance and a minimal joint redundancy is selected All three methods attempt to solve NPhard problems Hence they propose approximation algorithms based on the theory of submodular functions Going beyond pairwise lexical similarities between sentences and relations based on the cooccurrence of terms hypergraph models were proposed in which nodes are sentences and hyperedges model group relationships between sentences BIBREF3 The hyperedges of the hypergraph capture topical relationships among groups of sentences Existing hypergraphbased systems BIBREF3 BIBREF4 combine pairwise lexical similarities and clusters of lexically similar sentences to form the hyperedges of the hypergraph Hypergraph ranking algorithms are then applied to identify important and queryrelevant sentences However they do not provide any interpretation for the clusters of sentences discovered by their method Moreover these clusters do not overlap which is incoherent with the fact that each sentence carries multiple information and hence belongs to multiple semantic groups of sentences In contrast each hyperedge in our proposed hypergraph connects sentences covering the same topic and these hyperedges do overlap A minimal hypergraph transversal is a subset of the nodes of hypergraph of minimum cardinality and such that each hyperedge of the hypergraph is incident to at least one node in the subset BIBREF5 Theoretically equivalent to the minimum hitting set problem the problem of finding a minimum hypergraph transversal can be viewed as finding a subset of representative nodes covering the essential information carried by each hyperedge Hence hypergraph transversals find applications in various areas such as computational biology boolean algebra and data mining BIBREF18 Extensions of hypergraph transversals to include hyperedge and node weights were also proposed in BIBREF19 Since the associated optimization problems are generally NPhard various approximation algorithms were proposed including greedy algorithms BIBREF20 and LP relaxations BIBREF21 The problem of finding a hypergraph transversal is conceptually similar to that of finding a summarizing subset of a set of objects modelled as a hypergraph However to the best of our knowledge there was no attempt to use hypergraph transversals for text summarization in the past Since it seeks a set of jointly relevant sentences our method shares some similarities with existing graphbased models that apply global optimization strategies for sentence selection BIBREF9 BIBREF15 BIBREF16 However our hypergraph better captures topical relationships among sentences than the simple graphs based on lexical 
similarities between sentences Given a corpus of Nd documents and a userdefined query q we intend to produce a summary of the documents with the information that is considered both central in the corpus and relevant to the query Since we limit ourselves to the production of extracts our task is to extract a set S of relevant sentences from the corpus and to aggregate them to build a summary Let Ns be the total number of sentences in the corpus We further split the task into two subtasks The sentences in the set S are then aggregated to form the final summary Figure 1 summarizes the steps of our proposed method After some preprocessing steps the themes are detected based on a topic detection algorithm which tags each sentence with multiple topics A themebased hypergraph is then built with the weight of each theme reflecting both its importance in the corpus and its similarity with the query Finally depending on the task at hand one of two types of hypergraph transversal is generated If the summary must not exceed a target summary length then a maximal budgeted hypergraph transversal is generated If the summary must achieve a target coverage then a minimal soft hypergraph transversal is generated Finally the sentences corresponding to the generated transversal are selected for the summary In this section we present the key steps of our algorithm after some standard preprocessing steps topics of semantically related terms are detected from which themes grouping topically similar sentences are extracted A hypergraph is then formed based on the sentence themes and sentences are selected based on the detection of a hypergraph transversal As the majority of extractive summarization approaches our model is based on the representation of sentences as vectors To reduce the size of the vocabulary we remove stopwords that do not contribute to the meaning of sentences such as the or a using a publicly available list of 667 stopwords The words are also stemmed using Porter Stemmer BIBREF22 Let Nt be the resulting number of distinct terms after these two preprocessing steps are performed We define the inverse sentence frequency textisft BIBREF23 as textisftlog leftfracNsNstright Eq 7 where Nst is the number of sentences containing term t This weighting scheme yields higher weights for rare terms which are assumed to contribute more to the semantics of sentences BIBREF23 Sentence i is then represented by a vector sitexttfisfi1texttfisfiNt where texttfisfittexttfittextisft Eq 8 and texttfit is the frequency of term t in sentence i Finally to denote the similarity between two text fragments a and b which can be sentences groups of sentences or the query we use the cosine similarity between the texttfisf representations of a and b as suggested in BIBREF2 textsimabfracsum t texttfisfattexttfisfbtsqrtsum ttexttfisfat2sqrtsum ttexttfisfbt2 Eq 9 where texttfisfat is also defined as the frequency of term t in fragment a multiplied by textisft This similarity measure will be used in section Sentence hypergraph construction to compute the similarity with the query q As mentioned in section Introduction our hypergraph model is based on the detection of themes A theme is defined as a group of sentences covering the same topic Hence our theme detection algorithm is based on a 3step approach the extraction of topics the process of tagging each sentence with multiple topics and the detection of themes based on topic tags A topic is viewed as a set of semantically similar terms namely terms that refer to the same subject 
or the same piece of information In the context of a specific corpus of related documents a topic can be defined as a set of terms that are likely to occur close to each other in a document BIBREF24 In order to extract topics we make use of a clustering approach based on the definition of a semantic dissimilarity between terms For terms u and v we first define the joint textisf weight textisfuv as textisfuvlog leftfracNsNsuvright Eq 11 where Nsuv is the number of sentences in which both terms u and v occur together Then the semantic dissimilarity dtextsemuv between the two terms is defined as dtextsemuvfractextisfuvmin textisfutextisfvmax textisfutextisfv Eq 12 which can be viewed as a special case of the socalled google distance which was already successfully applied to learn semantic similarities between terms on webpages BIBREF25 Using concepts from information theory textisfu represents the number of bits required to express the occurrence of term u in a sentence using an optimally efficient code Then textisfuvtextisfu can be viewed as the number of bits of information in v relative to u Assuming textisfvge textisfu dtextsemuv can be viewed as the improvement obtained when compressing v using a previously compressed code for u and compressing v from scratch BIBREF26 More details can be found in BIBREF25 In practice two terms u0 and u1 with a low value of u2 are expected to consistently occur together in the same context and they are thus considered to be semantically related in the context of the corpus Based on the semantic dissimilarity measure between terms we define a topic as a group of terms with a high semantic density namely a group of terms such that each term of the group is semantically related to a sufficiently high number of terms in the group The DBSCAN algorithm is a method of densitybased clustering that achieves this result by iteratively growing cohesive groups of agents with the condition that each member of a group should contain a sufficient number of other members in an epsilon neighborhood around it BIBREF27 Using the semantic dissimilarity as a distance measure DBSCAN extracts groups of semantically related terms which are considered as topics The advantages offered by DBSCAN over other clustering algorithms are threefold First DBSCAN is capable of detecting the number of clusters automatically Second although the semantic dissimilarity is symmetric and nonnegative it does not satisfy the triangle inequality This prevents the use of various clustering algorithms such as the agglomerative clustering with complete linkage BIBREF28 However DBSCAN does not explicitly require the triangle inequality to be satisfied Finally it is able to detect noisy samples in low density region that do not belong to any other cluster Given a set of pairwise dissimilarity measures a density threshold epsilon and a minimum neighborhood size m DBSCAN returns a number K of clusters and a set of labels lbrace ciin lbrace 11Krbrace 1le ile Ntrbrace such that ci1 if term i is considered a noisy term While it is easy to determine a natural value for m choosing a value for epsilon is not straightforward Hence we adapt DBSCAN algorithm to build our topic model referred to as Semantic Clustering Of Terms SEMCOT algorithm It iteratively applies DBSCAN and decreases the parameter epsilon until the size of each cluster does not exceed a predefined value Algorithm Sentence theme detection based on topic tagging summarizes the process Apart from m the algorithm also takes parameters m0 the initial 
value of m1 m2 the maximum number of points allowed in a cluster and m3 a factor close to 1 by which m4 is multiplied until all clusters have sizes lower than m5 Experiments on realworld data suggest empirical values of m6 m7 m8 and m9 Additionally we observe that among the terms considered as noisy by DBSCAN some could be highly infrequent terms with a high K0 value but yet having a strong impact on the meaning of sentences Hence we include them as topics consisting of single terms if their K1 value exceeds a threshold K2 whose value is determined by crossvalidation as explained in section Experiments and evaluation H INPUT Semantic Dissimilarities lbrace dtextsemuv1le uvle Ntrbrace PARAMETERS epsilon 0 M m beta le 1 mu OUTPUT Number K of topics topic tags lbrace ci1le ile Ntrbrace epsilon leftarrow epsilon 0 textminTermsleftarrow m textproceedleftarrow textTrue while textproceed cKleftarrow DBSCANdtextsemepsilon textminTerms if underset1le kle Kmax lbrace icikrbrace M textproceedleftarrow textFalse else epsilon leftarrow beta epsilon for each t st ct1 noisy terms if textisftge mu ctleftarrow K1 Kleftarrow K1 SEMCOT Once the topics are obtained based on algorithm Sentence theme detection based on topic tagging a theme is associated to each topic namely a group of sentences covering the same topic The sentences are first tagged with multiple topics based on a scoring function The score of the l th topic in the i th sentence is given by sigma ilundersettctlsum texttfisfit Eq 13 and the sentence is tagged with topic l whenever sigma ilge delta in which delta is a parameter whose value is tuned as explained in section Experiments and evaluation ensuring that each sentence is tagged with at least one topic The scores are intentionally not normalized to avoid tagging short sentences with an excessive number of topics The l th theme is then defined as the set of sentences Tllbrace isigma ilge delta 1le ile Nsrbrace Eq 14 While there exist other summarization models based on the detection of clusters or groups of similar sentence the novelty of our theme model is twofold First each theme is easily interpretable as the set of sentences associated to a specific topic As such our themes can be considered as groups of semantically related sentences Second it is clear that the themes discovered by our approach do overlap since a single sentence may be tagged with multiple topics To the best of our knowledge none of the previous clusterbased summarizers involved overlapping groups of sentences Our model is thus more realistic since it better captures the multiplicity of the information covered by each sentence A hypergraph is a generalization of a graph in which the hyperedges may contain any number of nodes as expressed in definition UID16 BIBREF3 Our hypergraph model moreover includes both hyperedge and node weights Definition 1 Hypergraph A node and hyperedgeweighted hypergraph is defined as a quadruplet HVEphi w in which V is a set of nodes Esubseteq 2V is a set of hyperedges phi in mathbb RV is a vector of positive node weights and win mathbb RE is a vector of positive hyperedge weights For convenience we will refer to a hypergraph by its weight vectors phi and w its hyperedges represented by a set Esubseteq 2V and its incidence lists textincilbrace ein Eiin erbrace for each iin V As mentioned in section Introduction our system relies on the definition of a themebased hypergraph which models groups of semantically related sentences as hyperedges Hence compared to traditional graphbased summarizers 
the hypergraph is able to capture more complex group relationships between sentences instead of being restricted to pairwise relationships In our sentencebased hypergraph the sentences are the nodes and each theme defines a hyperedge connecting the associated sentences The weight phi i of node i is the length of the i th sentence namely beginarrayl
V = \{1,\dots,N_s\} \text{ and } \phi_i = L_i, \; 1 \le i \le N_s \\
E = \{e_1,\dots,e_K\} \subseteq 2^V \\
e_l = T_l, \text{ i.e. } e_l \in \mathrm{inc}(i) \leftrightarrow i \in T_l
endarray Eq 17 Finally the weights of the hyperedges are computed based on the centrality of the associated theme and its similarity with the query wl1lambda textsimTlDlambda textsimTlq Eq 18 where lambda in 01 is a parameter and D represents the entire corpus textsimTlD denotes the similarity of the set of sentences in theme Tl with the entire corpus using the tfisfbased similarity of equation 9 which measures the centrality of the theme in the corpus textsimTlq refers to the similarity of the theme with the userdefined query q The sentences to be included in the queryoriented summary should contain the essential information in the corpus they should be relevant to the query and whenever required they should either not exceed a target length or jointly achieve a target coverage as mentioned in section Problem statement and system overview Existing systems of graphbased summarization generally solve the problem by ranking sentences in terms of their individual relevance BIBREF0 BIBREF2 BIBREF3 Then they extract a set of sentences with a maximal total relevance and pairwise similarities not exceeding a predefined threshold However we argue that the joint relevance of a group of sentences is not reflected by the individual relevance of each sentence And limiting the redundancy of selected sentences as done in BIBREF3 does not guarantee that the sentences jointly cover the relevant themes of the corpus Considering each topic as a distinct piece of information in the corpus an alternative approach is to select the smallest subset of sentences covering each of the topics The latter condition can be reformulated as ensuring that each theme has at least one of its sentences appearing in the summary Using our sentence hypergraph representation this corresponds to the detection of a minimal hypergraph transversal as defined below BIBREF5 Definition 2 Given an unweighted hypergraph HVE a minimal hypergraph transversal is a subset Ssubseteq V of nodes satisfying beginarrayrcl
S^* = \underset{S \subseteq V}{\mathrm{argmin}} \; |S| \\
\text{s.t. } \bigcup_{i \in S} \mathrm{inc}(i) = E
endarray Eq 21 where textincilbrace eiin erbrace denotes the set of hyperedges incident to node i Figure 2 shows an example of hypergraph and a minimal hypergraph transversal of it starshaped nodes In this case since the nodes and the hyperedges are unweighted the minimal transversal is not unique The problem of finding a minimal transversal in a hypergraph is NPhard BIBREF29 However greedy algorithms or LP relaxations provide good approximate solutions in practice BIBREF21 As intended the definition of transversal includes the notion of joint coverage of the themes by the sentences However it neglects node and hyperedge weights and it is unable to identify queryrelevant themes Since both the sentence lengths and the relevance of themes should be taken into account in the summary generation we introduce two extensions of transversal namely the minimal soft hypergraph transversal and the maximal budgeted hypergraph transversal A minimal soft transversal of a hypergraph is obtained by minimizing the total weights of selected nodes while ensuring that the total weight of covered hyperedges exceeds a given threshold Definition 3 minimal soft hypergraph transversal Given a node and hyperedge weighted hypergraph HVEphi w and a parameter gamma in 01 a minimal soft hypergraph transversal is a subset Ssubseteq V of nodes satisfying beginarrayrcl
S^* = \underset{S \subseteq V}{\mathrm{argmin}} \sum_{i \in S} \phi_i \\
\text{s.t. } \sum_{e \in \mathrm{inc}(S)} w_e \ge \gamma W
endarray Eq 24 in which textincSundersetiin Sbigcup textinci and Wsum ewe The extraction of a minimal soft hypergraph transversal of the sentence hypergraph produces a summary of minimal length achieving a target coverage expressed by parameter gamma in 01 As mentioned in section Problem statement and system overview applications of text summarization may also involve a hard constraint on the total summary length L For that purpose we introduce the notion of maximal budgeted hypergraph transversal which maximizes the volume of covered hyperedges while not exceeding the target length Definition 4 maximal budgeted hypergraph transversal Given a node and hyperedge weighted hypergraph HVEphi w and a parameter L0 a maximal budgeted hypergraph transversal is a subset Ssubseteq V of nodes satisfying beginarrayrcl
S^* = \underset{S \subseteq V}{\mathrm{argmax}} \sum_{e \in \mathrm{inc}(S)} w_e \\
\text{s.t. } \sum_{i \in S} \phi_i \le L
endarray Eq 26 We refer to the function undersetein textincSsum we as the hyperedge coverage of set S We observe that both weighted transversals defined above include the notion of joint coverage of the hyperedges by the selected nodes As a result and from the definition of hyperedge weights equation 18 the resulting summary covers themes that are both central in the corpus and relevant to the query This approach also implies that the resulting summary does not contain redundant sentences covering the exact same themes As a result selected sentences are expected to cover different themes and to be semantically diverse Both the problems of finding a minimal soft transversal or finding a maximal budgeted transversal are NPhard as proved by theorem UID27 Theorem 1 NPhardness The problems of finding a minimal soft hypergraph transversal or a maximal budgeted hypergraph transversal in a weighted hypergraph are NPhard Regarding the minimal soft hypergraph transversal problem with parameter gamma 1 and unit node weights the problem is equivalent to the classical set cover problem definition UID20 which is NPcomplete BIBREF29 The maximal budgeted hypergraph transversal problem can be shown to be equivalent to the maximum coverage problem with knapsack constraint which was shown to be NPcomplete in BIBREF29 Since both problems are NPhard we formulate polynomial time algorithms to find approximate solutions to them and we provide the associated approximation factors The algorithms build on the submodularity and the nondecreasing properties of the hyperedge coverage function which are defined below Definition 5 Submodular and nondecreasing set functions Given a finite set A a function f2Arightarrow mathbb R is monotonically nondecreasing if forall Ssubset A and forall uin Asetminus S fScup lbrace urbrace ge fS Eq 29 and it is submodular if forall ST with Ssubseteq Tsubset A and forall uin Asetminus T fTcup lbrace urbrace fTle fScup lbrace urbrace fS Eq 30 Based on definition UID28 we prove in theorem UID31 that the hyperedge coverage function is submodular and monotonically nondecreasing which provides the basis of our algorithms Theorem 2 Given a hypergraph HVEphi w the hyperedge coverage function f2Vrightarrow mathbb R defined by fSundersetein textincSsum we Eq 32 is submodular and monotonically nondecreasing The hyperege coverage function f is clearly monotonically nondecreasing and it is submodular since forall Ssubseteq Tsubset V and sin Vsetminus T beginarrayl
[f(S \cup \{s\}) - f(S)] - [f(T \cup \{s\}) - f(T)] \\
= \Big[\sum_{e \in \mathrm{inc}(S \cup \{s\})} w_e - \sum_{e \in \mathrm{inc}(S)} w_e\Big] - \Big[\sum_{e \in \mathrm{inc}(T \cup \{s\})} w_e - \sum_{e \in \mathrm{inc}(T)} w_e\Big] \\
= \Big[\sum_{e \in \mathrm{inc}(\{s\}) \setminus \mathrm{inc}(S)} w_e\Big] - \Big[\sum_{e \in \mathrm{inc}(\{s\}) \setminus \mathrm{inc}(T)} w_e\Big] \\
= \sum_{e \in (\mathrm{inc}(T) \cap \mathrm{inc}(\{s\})) \setminus \mathrm{inc}(S)} w_e \ge 0
endarray Eq 33 where textincRlbrace eecap Sne emptyset rbrace for Rsubseteq V The last equality follows from textincSsubseteq textincT and textinclbrace srbrace setminus textincTsubseteq textinclbrace srbrace setminus textincS Various classes of NPhard problems involving a submodular and nondecreasing function can be solved approximately by polynomial time algorithms with provable approximation factors Algorithms Detection of hypergraph transversals for text summarization and Detection of hypergraph transversals for text summarization are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals respectively In each case a transversal is found and the summary is formed by extracting and aggregating the associated sentences Algorithm Detection of hypergraph transversals for text summarization is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knaspack constraint It is our primary transversalbased summarization model and we refer to it as the method of Transversal Summarization with Target Length TLTranSum algorithm Algorithm Detection of hypergraph transversals for text summarization is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem We refer to it as Transversal Summarization with Target Coverage TCTranSum algorithm Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges the division by the node weights ie the sentence lengths balances this tendency and allows the inclusion of short sentences as well In contrast the methods of sentence selection based on a maximal relevance and a minimal redundancy such as for instance the maximal marginal relevance approach of BIBREF31 tend to favor the selection of long sentences only The main difference between algorithms Detection of hypergraph transversals for text summarization and Detection of hypergraph transversals for text summarization is the stopping criterion in algorithm Detection of hypergraph transversals for text summarization the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm Detection of hypergraph transversals for text summarization appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length L H INPUT Sentence Hypergraph HVEphi w target length L OUTPUT Set S of sentences to be included in the summary for each iin lbrace 1Nsrbrace rileftarrow frac1phi iundersetein textincisum we Rleftarrow emptyset Qleftarrow V fleftarrow 0 while Qne emptyset sleftarrow undersetiin Qtextargmaxtext ri Qleftarrow Qsetminus lbrace srbrace if phi sfle L Rleftarrow Rcup lbrace srbrace fleftarrow fl for each iin lbrace 1Nsrbrace rileftarrow rifracundersetein textincscap textincisum wephi i Let Gleftarrow lbrace lbrace irbrace text iin Vphi ile Lrbrace Sleftarrow undersetSin lbrace Qrbrace cup Gtextargmaxtext text text undersetein textincSsum we return S Transversal Summarization with Target Length TLTranSum H INPUT Sentence Hypergraph HVEphi w parameter gamma in 01 OUTPUT Set S of sentences to be included in the summary for each iin lbrace 1Nsrbrace rileftarrow 
frac1phi iundersetein textincisum we Sleftarrow emptyset Qleftarrow V tildeWleftarrow 0 Wleftarrow sum ewe while Qne emptyset and tildeWgamma W sleftarrow undersetiin Qtextargmaxtext ri Sleftarrow Scup lbrace srbrace tildeWleftarrow tildeWphi srs for each iin lbrace 1Nsrbrace rileftarrow rifracundersetein textincscap textincisum wephi i return S Transversal Summarization with Target Coverage TCTranSum We next provide theoretical guarantees that support the formulation of algorithms Detection of hypergraph transversals for text summarization and Detection of hypergraph transversals for text summarization as approximation algorithms for our hypergraph transversals Theorem UID34 provides a constant approximation factor for the output of algorithm Detection of hypergraph transversals for text summarization for the detection of minimal soft hypergraph transversals It builds on the submodularity and the nondecreasing property of the hyperedge coverage function Theorem 3 Let SL be the summary produced by our TLTranSum algorithm Detection of hypergraph transversals for text summarization and S be a maximal budgeted transversal associated to the sentence hypergraph then undersetein textincSLsum we ge frac12left1frac1erightundersetein textincSsum we Eq 35 Since the hyperedge coverage function is submodular and monotonically nondecreasing the extraction of a maximal budgeted transversal is a problem of maximization of a submodular and monotonically nondecreasing function under a Knapsack constraint namely undersetSsubseteq Vmax fStext st undersetiin Ssum phi ile L Eq 36 where fSundersetein textincSsum we Hence by theorem 2 in BIBREF30 the algorithm forming a transversal SF by iteratively growing a set St of sentences according to St1Stcup leftlbrace undersetsin Vsetminus Sttextargmaxleftlbrace fracfScup lbrace srbrace fSphi s phi sundersetiin Stsum phi ile Lrightrbrace rightrbrace Eq 37 produces a final summary SF satisfying fSFge fSfrac12left1frac1eright Eq 38 As algorithm Detection of hypergraph transversals for text summarization implements the iterations expressed by equation 37 it achieves a constant approximation factor of frac12left1frac1eright Similarly theorem UID39 provides a datadependent approximation factor for the output of algorithm Detection of hypergraph transversals for text summarization for the detection of maximal budgeted hypergraph transversals It also builds on the submodularity and the nondecreasing property of the hyperedge coverage function Theorem 4 Let SP be the summary produced by our TCTranSum algorithm Detection of hypergraph transversals for text summarization and let S be a minimal soft hypergraph transversal then undersetiin SPsum phi ile undersetiin Ssum phi i left1log leftfracgamma Wgamma Wundersetein textincST1sum werightright Eq 40 where S1ST represent the consecutive sets of sentences produced by algorithm Detection of hypergraph transversals for text summarization Consider the function gSmin gamma Wundersetein textincSsum we Then the problem of finding a minimal soft hypergraph transversal can be reformulated as SundersetSsubseteq Vtextargmin undersetsin Ssum phi stext st gSge gV Eq 41 As g is submodular and monotonically nondecreasing theorem 1 in BIBREF20 shows that the summary SG produced by iteratively growing a set St of sentences such that St1Stcup leftlbrace undersetsin Vsetminus Sttextargmaxleftlbrace fracfScup lbrace srbrace fSphi srightrbrace rightrbrace Eq 42 produces a summary SG satisfying undersetiin SGsum phi ile undersetiin Ssum phi i left1log 
leftfracgVgVgST1rightright Eq 43 which can be rewritten as undersetiin SGsum phi ile undersetiin Ssum phi i left1log leftfracgamma Wgamma Wundersetein textincST1sum werightright Eq 44 As algorithm Detection of hypergraph transversals for text summarization implements the iterations expressed by equation 42 the summary SS produced by our algorithm Detection of hypergraph transversals for text summarization satisfies the same inequality In practice the result of theorem UID39 suggests that the quality of the output depends on the relative increase in the hyperedge coverage induced by the last sentence to be appended to the summary In particular if each sentence that is appended to the summary in the interations of algorithm Detection of hypergraph transversals for text summarization covers a sufficient number of new themes that are not covered already by the summary the approximation factor is low We analyse the worst case time complexity of each step of our method The time complexity of DBSCAN algorithm BIBREF27 is ONtlog Nt Hence the theme detection algorithm Sentence theme detection based on topic tagging takes ONcNtlog Nt steps where Nc is the number of iterations of algorithm Sentence theme detection based on topic tagging which is generally low compared to the number of terms The time complexity for the hypergraph construction is OKNsNt where K is the number of topics or ONt2 if Ntge Ns The time complexity of the sentence selection algorithms Detection of hypergraph transversals for text summarization and Detection of hypergraph transversals for text summarization are bounded by ONsKCmax Lmax where Cmax is the number of sentences in the largest theme and Lmax is the length of the longest sentences Assuming ONcNtlog Nt0 is larger than ONcNtlog Nt1 the overall time complexity of the method is of ONcNtlog Nt2 steps in the worst case Hence the method is essentially equivalent to early graphbased models for text summarization in terms of computational burden such as the LexRankbased systems BIBREF0 BIBREF2 or greedy approaches based on global optimization BIBREF17 BIBREF15 BIBREF16 However it is computationnally more efficient than traditional hypergraphbased summarizers such as the one in BIBREF4 which involves a Markov Chain Monte Carlo inference for its topic model or the one in BIBREF3 which is based on an iterative computation of scores involving costly matrix multiplications at each step We present experimental results obtained with a Python implementation of algorithms Detection of hypergraph transversals for text summarization and Detection of hypergraph transversals for text summarization on a standard computer with a 25GHz processor and a 8GB memory We test our algorithms on DUC2005 BIBREF32 DUC2006 BIBREF33 and DUC2007 BIBREF34 datasets which were produced by the Document Understanding Conference DUC and are widely used as benchmark datasets for the evaluation of queryoriented summarizers The datasets consist respectively of 50 50 and 45 corpora each consisting of 25 documents of approximately 1000 words on average A query is associated to each corpus For evaluation purposes each corpus is associated with a set of queryrelevant summaries written by humans called reference summaries In each of our experiments a candidate summary is produced for each corpus by one of our algorithms and it is compared with the reference summaries using the metrics described below Moreover in experiments involving algorithm Detection of hypergraph transversals for text summarization the target 
summary length is set to 250 words as required in DUC evalutions In order to evaluate the similarity of a candidate summary with a set of reference summaries we make use of the ROUGE toolkit of BIBREF35 and more specifically of ROUGE2 and ROUGESU4 metrics which were adopted by DUC for summary evaluation ROUGE2 measures the number of bigrams found both in the candidate summary and the set of reference summaries ROUGESU4 extends this approach by counting the number of unigrams and the number of 4skipbigrams appearing in the candidate and the reference summaries where a 4skipbigram is a pair of words that are separated by no more than 4 words in a text We refer to ROUGE toolkit BIBREF35 for more details about the evaluation metrics ROUGE2 and ROUGESU4 metrics are computed following the same setting as in DUC evaluations namely with word stemming and jackknife resampling but without stopword removal Besides the parameters of SEMCOT algorithm for which empirical values were given in section Sentence theme detection based on topic tagging there are three parameters of our system that need to be tuned parameters mu threshold on isf value to include a noisy term as a single topic in SEMCOT delta threshold on the topic score for tagging a sentence with a given topic and lambda balance between the query relevance and the centrality in hyperedge weights The values of all three parameters are determined by an alternating maximization strategy of ROUGESU4 score in which the values of two parameters are fixed and the value of the third parameter is tuned to maximize the ROUGESU4 score produced by algorithm Detection of hypergraph transversals for text summarization with a target summary length of 250 words in an iterative fashion The ROUGESU4 scores are evaluated by crossvalidation using a leaveoneout process on a validation dataset consisting of 70 of DUC2007 dataset which yields mu 198 delta 085 and lambda 04 Additionally we display the evolution of ROUGESU4 and ROUGE2 scores as a function of delta and lambda For parameter delta we observe in graphs UID49 and UID50 that the quality of the summary is low for delta close to 0 since it encourages our theme detection algorithm to tag the sentences with irrelevant topics with low associated tfisf values In contrast when delta exceeds 09 some relevant topics are overlooked and the quality of the summaries drops severely Regarding parameter lambda we observe in graphs UID52 and UID53 that lambda 04 yields the highest score since it combines both the relevance of themes to the query and their centrality within the corpus for the computation of hyperedge weights In contrast with lambda 1 the algorithm focuses on the lexical similarity of themes with the query but it neglects the prominence of each theme In order to test our soft transversalbased summarizer we display the evolution of the summary length and the ROUGESU4 score as a function of parameter gamma of algorithm Detection of hypergraph transversals for text summarization In figure UID57 we observe that the summary length grows linearly with the value of parameter gamma which confirms that our system does not favor longer sentences for low values of gamma The ROUGESU4 curve of figure UID56 has a concave shape with a low score when gamma is close to 0 due to a poor recall or when gamma is close to 1 due to a poor precision The overall concave shape of the ROUGESU4 curve also demonstrates the efficiency of our TCTranSum algorithm based on our hyperedge weighting scheme and our hyperedge coverage function 
highly relevant sentences inducing a significant increase in the ROUGESU4 score are identified and included first in the summary In the subsequent experiments we focus on TLTranSum algorithm Detection of hypergraph transversals for text summarization which includes a target summary length and can thus be compared with other summarization systems which generally include a length constraint To justify our themebased hypergraph definition we test other hypergraph models We only change the hyperedge model which determines the kind of relationship between sentences that is captured by the hypergraph The sentence selection is performed by applying algorithm Detection of hypergraph transversals for text summarization to the resulting hypergraph We test three alternative hyperedge models First a model based on agglomerative clustering instead of SEMCOT the same definition of semantic dissimilarity equation 12 is used then topics are detected as clusters of terms obtained by agglomerative clustering with single linkage with the semantic dissimilarity as a distance measure The themes are detected and the hypergraph is constructed in the same way as in our model Second Overlap model defines hyperedges as overlapping clusters of sentences obtained by applying an algorithm of overlapping cluster detection BIBREF36 and using the cosine distance between tfisf representations of sentences as a distance metric Finally we test a hypergraph model already proposed in HyperSum system by BIBREF3 which combines pairwise hyperedges joining any two sentences having terms in common and hyperedges formed by nonoverlapping clusters of sentences obtained by DBSCAN algorithm Table 1 displays the ROUGE2 and ROUGESU4 scores and the corresponding 95 confidence intervals for each model We observe that our model outperforms both HyperSum and Overlap models by at least 4 and 15 of ROUGESU4 score respectively which confirms that a twostep process extracting consistent topics first and then defining themebased hyperedges from topic tags outperforms approaches based on sentence clustering even when these clusters do overlap Our model also outperforms the Agglomerative model by 10 of ROUGESU4 score due to its ability to identify noisy terms and to detect the number of topics automatically We compare the performance of our TLTranSum algorithm Detection of hypergraph transversals for text summarization with that of five related summarization systems Topicsensitive LexRank of BIBREF2 TSLexRank and HITS algorithms of BIBREF1 are early graphbased summarizers TSLexRank builds a sentence graph based on term cooccurrences in sentences and it applies a querybiased PageRank algorithm for sentence scoring HITS method additionally extracts clusters of sentences and it applies the hubs and authorities algorithm for sentence scoring with the sentences as authorities and the clusters as hubs As suggested in BIBREF3 in order to extract query relevant sentences only the top 5 of sentences that are most relevant to the query are considered HyperSum extends early graphbased summarizers by defining a clusterbased hypergraph with the sentences as nodes and hyperedges as sentence clusters as described in section Testing the hypergraph structure The sentences are then scored using an iterative label propagation algorithm over the hypergraph starting with the lexical similarity of each sentence with the query as initial labels In all three methods the sentences with highest scores and pairwise lexical similarity not exceeding a threshold are included 
in the summary Finally we test two methods that also build on the theory of submodular functions First the MaxCover approach BIBREF9 seeks a summary by maximizing the number of distinct relevant terms appearing in the summary while not exceeding the target summary length using equation 18 to compute the term relevance scores While the objective function of the method is similar to that of the problem of finding a maximal budgeted hypergraph transversal equation 26 of BIBREF16 they overlook the semantic similarities between terms which are captured by our SEMCOT algorithm and our hypergraph model Similarly the Maximal Relevance Minimal Redundancy MRMR first computes relevance scores of sentences as in equation 18 then it seeks a summary with a maximal total relevance score and a minimal redundancy while not exceeding the target summary length The problem is solved by an iterative algorithm building on the submodularity and nondecreasing property of the objective function Table 2 displays the ROUGE2 and ROUGESU4 scores with the corresponding 95 confidence intervals for all six systems including our TLTranSum method We observe that our system outperforms other graph and hypergraphbased summarizers involving the computation of individual sentence scores LexRank by 6 HITS by 13 and HyperSum by 6 of ROUGESU4 score which confirms both the relevance of our themebased hypergraph model and the capacity of our transversalbased summarizer to identify jointly relevant sentences as opposed to methods based on the computation of individual sentence scores Moreover our TLTranSum method also outperforms other approaches such as MaxCover 5 and MRMR 7 These methods are also based on a submodular and nondecreasing function expressing the information coverage of the summary but they are limited to lexical similarities between sentences and fail to detect topics and themes to measure the information coverage of the summary As a final experiment we compare our TLTranSum approach to other summarizers presented at DUC contests Table 3 displays the ROUGE2 and ROUGESU4 scores for the worst summary produced by a human for the top four systems submitted for the contests for the baseline proposed by NIST a summary consisting of the leading sentences of randomly selected documents and the average score of all methods submitted respectively for DUC2005 DUC2006 and DUC2007 contests Regarding DUC2007 our method outperforms the best system by 2 and the average ROUGESU4 score by 21 It also performs significantly better than the baseline of NIST However it is outperformed by the human summarizer since our systems produces extracts while humans naturally reformulate the original sentences to compress their content and produce more informative summaries Tests on DUC2006 dataset lead to similar conclusions with our TLTranSum algorithm outperforming the best other system and the average ROUGESU4 score by 2 and 22 respectively On DUC2005 dataset however our TLTranSum method is outperformed by the beset system which is due to the use of advanced NLP techniques such as sentence trimming BIBREF37 which tend to increase the ROUGESU4 score Nevertheless the ROUGESU4 score produced by our TLTranSum algorithm is still 15 higher than the average score for DUC2005 contest In this paper a new hypergraphbased summarization model was proposed in which the nodes are the sentences of the corpus and the hyperedges are themes grouping sentences covering the same topics Going beyond existing methods based on simple graphs and pairwise lexical 
similarities our hypergraph model captures groups of semantically related sentences Moreover two new method of sentence selection based on the detection of hypergraph transversals were proposed one to generate summaries of minimal length and achieving a target coverage and the other to generate a summary achieving a maximal coverage of relevant themes while not exceeding a target length The approach generates informative summaries by extracting a subset of sentences jointly covering the relevant themes of the corpus Experiments on a realworld dataset demonstrate the effectiveness of the approach The hypergraph model itself is shown to produce more accurate summaries than other models based on term or sentence clustering The overall system also outperforms related graph or hypergraphbased approaches by at least 10 of ROUGESU4 score As a future research direction we may analyse the performance of other algorithms for the detection of hypergraph transversals such as methods based on LP relaxations We may also further extend our topic model to take the polysemy of terms into acount since each term may carry multiple meanings a given term could refer to different topics depending on its context Finally we intend to adapt our model for solving related problems such as commmunity question answering </s>
<s>[INST] Text-based inference of moral sentiment change [/INST] Peoples moral sentimentour feelings toward right or wrongcan change over time For instance the publics views toward slavery have shifted substantially over the past centuries BIBREF0 How societys moral views evolve has been a longstanding issue and a constant source of controversy subject to interpretations from social scientists historians philosophers among others Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale involving minimal human labour or intervention The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1 BIBREF2 BIBREF3 with contemporary development of this topic represented in social psychology BIBREF4 BIBREF5 BIBREF6 BIBREF7 BIBREF8 cognitive linguistics BIBREF9 and more recently the advent of Moral Foundations Theory BIBREF10 BIBREF11 BIBREF12 Despite the fundamental importance and interdisciplinarity of this topic largescale formal treatment of moral sentiment particularly its evolution is still in infancy from the natural language processing NLP community see overview in Section SECREF2 We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13 BIBREF14 and social stereotypes BIBREF15 Differing from this existing work we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora Accordingly we present to our knowledge the first textbased framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance moral polarity and finegrained moral dimensions As such for any query item such as slavery our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time Our approach is based on the premise that peoples moral sentiments are reflected in natural language and more specifically in text BIBREF16 In particular we know that books are highly effective tools for conveying moral views to the public For example Uncle Toms Cabin BIBREF17 was central to the antislavery movement in the United States The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the publics moral sentiment toward concepts signified by the probe words slavery democracy and gay Each of these concepts illustrates a piece of moral history tracked through a period of 200 years 1800 to 2000 and our framework is able to capture nuanced moral changes For instance slavery initially lies at the border of moral virtue positive sentiment and vice negative sentiment in the 1800s yet gradually moves toward the center of moral vice over the 200year period in contrast democracy considered morally negative eg subversion and antiauthority under monarchy in the 1800s is now perceived as morally positive as a mechanism for fairness gay which came to denote homosexuality only in the 1930s BIBREF18 is inferred to be morally irrelevant until the modern day We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change The general textbased framework that we 
propose consists of a parameterfree approach that facilitates the prediction of public moral sentiment toward individual concepts automated retrieval of morally changing concepts and broadscale psycholinguistic analyses of historical rates of moral sentiment change We provide a description of the probabilistic models and data used followed by comprehensive evaluations of our methodology An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text For example moral rhetoric in social media and political discourse BIBREF19 BIBREF20 BIBREF21 the relation between moralization in social media and violent protests BIBREF22 and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry In contrast to this line of research the development of a formal framework for moral sentiment change is still underexplored with no existing systematic and formal treatment of this topic BIBREF16 While there is emerging awareness of ethical issues in NLP BIBREF24 BIBREF25 work exploiting NLP techniques to study principles of moral sentiment change is scarce Moreover since morality is variable across cultures and time BIBREF12 BIBREF16 developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13 BIBREF14 BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society Our framework treats the moral sentiment toward a concept at three incremental levels as illustrated in Figure FIGREF3 First we consider moral relevance distinguishing between morally irrelevant and morally relevant concepts At the second tier moral polarity we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain Finally a third tier classifies these concepts into finegrained categories of human morality We draw from research in social psychology to inform our methodology most prominently Moral Foundations Theory BIBREF26 MFT seeks to explain the structure and variation of human morality across cultures and proposes five moral foundations Care Harm Fairness Cheating Loyalty Betrayal Authority Subversion and Sanctity Degradation Each foundation is summarized by a positive and a negative pole resulting in ten finegrained moral categories To ground moral sentiment in text we leverage the Moral Foundations Dictionary BIBREF27 The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words which are words that provide evidence for the corresponding moral category in text We use the MFD for moral polarity classification by dividing seed words into positive and negative sets and for finegrained categorization by splitting them into the 10 MFT categories To implement the first tier of our framework and detect moral relevance we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence ie the degree of pleasantness or unpleasantness of a stimulus We refer to the emotional valence ratings collected by BIBREF28 for approximately 14000 English words and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words for an equal total number 
of morally relevant and morally irrelevant words We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts In each tier the model receives a query word embedding vector mathbf q and a set of seed words for each class in that tier and infers the posterior probabilities over the set of classes c to which the query concept is associated with The seed words function as labelled examples that guide the moral classification of novel concepts and are organized per classification tier as follows In moral relevance classification sets mathbf S0 and mathbf S1 contain the morally irrelevant and morally relevant seed words respectively for moral polarity mathbf S and mathbf S contain the positive and negative seed words and for finegrained moral categories mathbf S1 ldots mathbf S10 contain the seed words for the 10 categories of MFT Then our general problem is to estimate pcmathbf q where mathbf q is a query vector and c is a moral category in the desired tier We evaluate the following four models A Centroid model summarizes each set of seed words by its expected vector in embedding space and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule A Nave Bayes model considers both mean and variance under the assumption of independence among embedding dimensions by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class A kNearest Neighbors kNN model exploits local density estimation and classifies concepts according to the majority vote of the k seed words closest to the query vector A Kernel Density Estimation KDE model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class regulated by a bandwidth parameter h that controls the sensitivity of the model to distance in embedding space Table TABREF2 specifies the formulation of each model Note that we adopt a parsimonious design principle in our modelling both Centroid and Nave Bayes are parameterfree models kNN only depends on the choice of k and KDE uses a single bandwidth parameter h To apply our models diachronically we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space Following BIBREF30 we combine skipgram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time We divide historical time into decadelong bins and use two sets of embeddings provided by BIBREF30 each trained on a different historical corpus of English Google Ngrams BIBREF31 a corpus of 85 times 1011 tokens collected from the English literature Google Books allgenres spanning the period 18001999 COHA BIBREF32 a smaller corpus of 41 times 108 tokens from works selected so as to be genrebalanced and representative of American English in the period 18102009 We evaluated our models in two ways classification of moral seed words on all three tiers moral relevance polarity and finegrained categories and correlation of model predictions with human judgments In this evaluation we assessed the ability of our models to classify the seed words 
that compose our moral environment in a leaveoneout classification task We performed the evaluation for all three classification tiers 1 moral relevance where seed words are split into morally relevant and morally irrelevant 2 moral polarity where moral seed words are split into positive and negative 3 finegrained categories where moral seed words are split into the 10 MFT categories In each test we removed one seed word from the training set at a time to obtain crossvalidated model predictions Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 19901999 period We observe that all models perform substantially better than chance confirming the efficacy of our methodology in capturing moral dimensions of words We also observe that models using word embeddings trained on Google Ngrams perform better than those trained on COHA which could be expected given the larger corpus size of the former In the remaining analyses we employ the Centroid model which offers competitive accuracy and a simple parameterfree specification We evaluated the approximate agreement between our methodology and human judgments using valence ratings ie the degree of pleasantness or unpleasantness of a stimulus Our assumption is that the valence of a concept should correlate with its perceived moral polarity eg morally repulsive ideas should evoke an unpleasant feeling However we do not expect this correspondence to be perfect for example the concept of dessert evokes a pleasant reaction without being morally relevant In this analysis we took the valence ratings for the nearly 14000 English nouns collected by BIBREF28 and for each query word q we generated a corresponding prediction of positive moral polarity from our model Pcmathbf q Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora We observe that the correlations are significant suggesting the ability of our methodology to capture relevant features of moral sentiment from text In the remaining applications we use the diachronic embeddings trained on the Google Ngrams corpus which enabled superior model performance throughout our evaluations We applied our framework in three ways 1 evaluation of selected concepts in historical time courses and prediction of human judgments 2 automatic detection of moral sentiment change and 3 broadscale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts We applied our models diachronically to predict time courses of moral relevance moral polarity and finegrained moral categories toward two historically relevant topics slavery and democracy By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification we obtained the time courses shown in Figure FIGREF21 We note that these trajectories illustrate actual historical trends Predictions for democracy show a trend toward morally positive sentiment consistent with the adoption of democratic regimes in Western societies On the other hand predictions for slavery trend down and suggest a drop around the 1860s coinciding with the American Civil War We also observe changes in the dominant finegrained moral categories such as the perception of democracy as a fair concept suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text We 
explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability We used data from the Pew Research Centers 2013 Global Attitudes survey BIBREF33 in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of acceptable unacceptable and not a moral issue We compared human ratings with model predictions at two tiers for moral relevance we paired the proportion of not a moral issue human responses with irrelevance predictions pc0mathbf q for each topic and for moral acceptability we paired the proportion of acceptable responses with positive predictions pcmathbf q We used 1990s word embeddings and obtained predictions for twoword topics by querying the model with their averaged embeddings Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics Beyond analyzing selected concepts we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers moral relevance and moral polarity We selected the 10000 nouns with highest total frequency in the 18001999 period according to data from BIBREF30 restricted to words labelled as nouns in WordNet BIBREF34 for validation For each such word mathbf q we computed diachronic moral relevance scores Ri pc1mathbf q i1ldots 20 for the 20 decades in our time span Then we performed a linear regression of R on T 1ldots n and took the fitted slope as a measure of moral relevance change We repeated the same procedure for moral polarity Finally we removed words with average relevance score below 05 to focus on morally relevant retrievals Table TABREF17 shows the words with steepest predicted change toward moral relevance along with their predicted finegrained moral categories in modern times ie 19001999 Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles To further investigate the moral sentiment that may have led to such polarity shifts we also show the predicted finegrained moral categories of each word at its earliest time of predicted moral relevance and in modern times Although we do not have access to ground truth for this application these results offer initial insight into the historical moral landscape of the English language at scale In this application we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance ie that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete To test this hypothesis we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings word frequency BIBREF35 and word length BIBREF36 We obtained norms of concreteness ratings from BIBREF28 We collected the same set of highfrequency nouns as in the previous analysis along with their fitted slopes of moral relevance change Since we were interested in moral relevance change within this large set of words we restricted our analysis to those words whose model predictions indicate change in moral relevance in either direction from the 1800s to the 1990s We performed a multiple linear regression under the following model Here 
rho w is the slope of moral relevance change for word w fw is its average frequency lw is its character length cw is its concreteness rating beta f beta l beta c and beta 0 are the corresponding factor weights and intercept respectively and epsilon sim mathcal N0 sigma is the regression error term Table TABREF27 shows the results of multiple linear regression We observe that concreteness is a significant negative predictor of change toward moral relevance suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts This significance persists under partial correlation test against the control factors p 001 We further verified the diachronic component of this effect in a random permutation analysis We generated 1000 control time courses by randomly shuffling the 20 decades in our data and repeated the regression analysis to obtain a control distribution for each regression coefficient All effects became nonsignificant under the shuffled condition suggesting the relevance of concept concreteness for diachronic change in moral sentiment see Supplementary Material We presented a textbased framework for exploring the socioscientific problem of moral sentiment change Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the publics moral perception toward a large concept repertoire over a long historical period Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19 BIBREF20 our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance polarity and finegrained categories informed by Moral Foundations Theory BIBREF12 We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale thus providing insights into psycholinguistic variables that associate with rates of moral change in the public Our current work focuses on exploring moral sentiment change in Englishspeaking cultures Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society We would like to thank Nina Wang Nicola Lacerata Dan Jurafsky Paul Bloom Dzmitry Bahdanau and the Computational Linguistics Group at the University of Toronto for helpful discussion We would also like to thank Ben Prystawski for his feedback on the manuscript JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant an NSERC Discovery Grant and a Connaught New Researcher Award </s>
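The Centroid model and the relevance-change ranking described above lend themselves to a short sketch. The following is a minimal illustration, assuming pre-loaded diachronic word embeddings as a dict from word to vector for a given decade; the temperature-free softmax over negative Euclidean distances follows the description in the text, and any other details (seed filtering, handling of out-of-vocabulary seeds) are assumptions.

```python
import numpy as np

def centroid_classifier(seed_sets, embeddings):
    """Centroid model: one expected embedding per class, softmax over negative Euclidean distances.

    seed_sets:  dict class label -> list of seed words (e.g. {"relevant": [...], "irrelevant": [...]})
    embeddings: dict word -> 1-D numpy vector for the decade of interest
    """
    centroids = {c: np.mean([embeddings[w] for w in words if w in embeddings], axis=0)
                 for c, words in seed_sets.items()}

    def posterior(query_vec):
        classes = list(centroids)
        dists = np.array([np.linalg.norm(query_vec - centroids[c]) for c in classes])
        probs = np.exp(-dists)
        probs /= probs.sum()
        return dict(zip(classes, probs))

    return posterior

def relevance_change_slope(relevance_by_decade):
    """Fitted slope of p(morally relevant) over decades, used to rank concepts by moral change."""
    t = np.arange(len(relevance_by_decade))
    slope, _intercept = np.polyfit(t, relevance_by_decade, deg=1)
    return slope
```

The same `centroid_classifier` applies unchanged to the polarity and fine-grained tiers by passing the corresponding seed sets; ranking concepts by `relevance_change_slope` over the 20 decade bins mirrors the retrieval of morally changing words described above.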
<s>[INST] Bringing Stories Alive: Generating Interactive Fiction Worlds [/INST] Interactive fictionsalso called textadventure games or textbased gamesare games in which a player interacts with a virtual world purely through textual natural languagereceiving descriptions of what they see and writing out how they want to act an example can be seen in Figure FIGREF2 Interactive fiction games are often structured as puzzles or quests set within the confines of given game world Interactive fictions have been adopted as a testbed for realtime game playing agents BIBREF0 BIBREF1 BIBREF2 Unlike other graphical games interactive fictions test agents abilities to infer the state of the world through communication and to indirectly affect change in the world through language Interactive fictions are typically modeled after real or fantasy worlds commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3 BIBREF4 In this paper we explore a different challenge for artificial intelligence automatically generating textbased virtual worlds for interactive fictions A core component of many narrativebased taskseverything from storytelling to game generationis world building The world of a story or game defines the boundaries of where the narrative is allowed and what the player is allowed to do There are four core challenges to world generation 1 commonsense knowledge the world must reference priors that the player possesses so that players can make sense of the world and build expectations on how to interact with it This is especially true in interactive fictions where the world is presented textually because many details of the world necessarily be left out eg the pot is on a stove kitchens are found in houses that might otherwise be literal in a graphical virtual world 2 Thematic knowledge interactive fictions usually involve a theme or genre that comes with its own expectations For example light speed travel is plausible in scifi worlds but not realistic in the real world 3 Coherence the world must not appear to be an random assortment of locations 3 Natural language The descriptions of the rooms as well as the permissible actions must text implying that the system has natural language generation capability Because worlds are conveyed entirely through natural language the potential output space for possible generated worlds is combinatorially large To constrain this space and to make it possible to evaluate generated world we present an approach which makes use of existing stories building on the worlds presented in them but leaving enough room for the worlds to be unique Specifically we take a story such as Sherlock Holmes or Rapunzela linear reading experienceand extract the description of the world the story is set in to make an interactive world the player can explore Our method first extracts a partial potentially disconnected knowledge graph from the story encoding information regarding locations characters and objects in the form of langle entityrelationentityrangle triples Relations between these types of entities as well as their properties are captured in this knowledge graph However stories often do not explicitly contain all the information required to fully fill out such a graph A story may mention that there is a sword stuck in a stone but not what you can do with the sword or where it is in relation to everything else Our method fills in missing relation and affordance information using thematic knowledge gained from training on stories in a similar 
genre This knowledge graph is then used to guide the text description generation process for the various locations characters and objects The game is then assembled on the basis of the knowledge graph and the corresponding generated descriptions We have two major contributions 1 A neural model and a rulesbased baseline for each of the tasks described above The phases are that of graph extraction and completion followed by description generation and game formulation Each of these phases are relatively distinct and utilize their own models 2 A human subject study for comparing the neural model and variations on it to the rulesbased and humanmade approaches We perform two separate human subject studiesone for the first phase of knowledge graph construction and another for the overall game creation processtesting specifically for coherence interestingness and the ability to maintain a theme or genre There has been a slew of recent work in developing agents that can play text games BIBREF0 BIBREF5 BIBREF1 BIBREF6 BIBREF7 ammanabrolutransferammanabroluammanabrolu2020graph in particular use knowledge graphs as state representations for gameplaying agents BIBREF8 propose QAit a set of question answering tasks framed as textbased or interactive fiction games QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them Scheherazade BIBREF9 is a system that learns a plot graph based on stories written by crowd sourcing the task of writing short stories The learned plot graph contains details relevant to ensure story coherence It includes plot events temporal precedence and mutual exclusion relations ScheherazadeIF BIBREF10 extends the system to generate chooseyourownadventure style interactive fictions in which the player chooses from prescribed options BIBREF11 explore a method of creating interactive narratives revolving around locations wherein sentences are mapped to a realworld GPS location from a corpus of sentences belonging to a certain genre Narratives are made by chaining together sentences selected based on the players current realworld location In contrast to these models our method generates a parserbased interactive fiction in which the player types in a textual command allowing for greater expressiveness BIBREF12 define the problem of procedural content generation in interactive fiction games in terms of the twin considerations of world and quest generation and focus on the latter They present a system in which quest content is first generated by learning from a corpus and then grounded into a given interactive fiction world The work is this paper focuses on the world generation problem glossed in the prior work Thus these two systems can be seen as complimentary Light BIBREF13 is a crowdsourced dataset of grounded textadventure game dialogues It contains information regarding locations characters and objects set in a fantasy world The authors demonstrate that the supervised training of transformerbased models lets us contextually relevant dialog actions and emotes Most in line with the spirit of this paper BIBREF14 leverage Light to generate worlds for textbased games They train a neural network based model using Light to compositionally arrange locations characters and objects into an interactive world Their model is tested using a human subject study against other machine learning based algorithms with respect to the 
cohesiveness and diversity of generated worlds Our work in contrast focuses on extracting the information necessary for building interactive worlds from existing story plots World generation happens in two phases In the first phase a partial knowledge graph is extracted from a story plot and then filled in using thematic commonsense knowledge In the second phase the graph is used as the skeleton to generate a full interactive fiction gamegenerating textual descriptions or flavortext for rooms and embedded objects We present a novel neural approach in addition to a rule guided baseline for each of these phases in this section The first phase is to extract a knowledge graph from the story that depicts locations characters objects and the relations between these entities We present two techniques The first uses neural questionanswering technique to extract relations from a story text The second provided as a baseline uses OpenIE5 a commonly used rulebased information extraction technique For the sake of simplicity we considered primarily the locationlocation and locationcharacterobject relations represented by the next to and has edges respectively in Figure FIGREF4 While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging they often come at the cost of large amounts of specialized labeled data suited for that task We instead propose a new method that leverages models trained for contextgrounded questionanswering tasks to do entity extraction with no task dependent data or finetuning necessary Our method dubbed AskBERT leverages the QuestionAnswering QA model ALBERT BIBREF15 AskBERT consists of two main steps as shown in Figure FIGREF7 vertex extraction and graph construction The first step is to extract the set of entitiesgraph verticesfrom the story We are looking to extract information specifically regarding characters locations and objects This is done by using asking the QA model questions such as Who is a character in the story BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questionsquestions are asked so that they are more likely to return a single answer eg asking Where is a location in the story as opposed to Where are the locations in the story In particular we notice that pronoun choice can be crucial Where is a location in the story yielded more consistent extraction than What is a location in the story ALBERT QA is trained to also output a special noanswer token when it cannot find an answer to the question within the story Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step This process continues until the noanswer token becomes the most likely answer The next step is graph construction Typical interactive fiction worlds are usually structured as trees ie no cycles except between locations Using this fact we use an approach that builds a graph from the vertex set by one relationor edgeat a time Once again using the entire story plot as context we query the ALBERTQA model picking a random starting location x from the set of vertices previously extractedand asking the questions What location can I visit from x and WhoWhat is in x The methodology for phrasing these questions follows that described for the vertex extraction The answer given by the QA model is matched to the vertex set by picking the vertex u that contains the best wordtoken 
overlap with the answer Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model The probability that vertices xu are related where is the sum of the individual token probabilities of all the overlapping tokens in the answer from the QA model and u We compared our proposed AskBERT method with a nonneural rulebased approach This approach is based on the information extracted by OpenIE5 followed by some postprocessing such as namedentity recognition and partofspeech tagging OpenIE5 combines several cuttingedge ideas from several existing papers BIBREF17 BIBREF18 BIBREF19 to create a powerful information extraction tools For a given sentence OpenIE5 generates multiple triples in the format of langle entity relation entityrangle as concise representations of the sentence each with a confidence score These triples are also occasionally annotated with location information indicating that a triple happened in a location As in the neural AskBERT model we attempt to extract information regarding locations characters and objects The entire story plot is passed into the OpenIE5 and we receive a set of triples The location annotations on the triples are used to create a set of locations We mark which sentences in the story contain these locations POS tagging based on marking nounphrases is then used in conjunction with NER to further filter the set of triplesidentifying the set of characters and objects in the story The graph is constructed by linking the set of triples on the basis of the location they belong to While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples most of them do not We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned For example if there exists location A in the 1st sentence and location B in the 5th sentence of the story all the events described in sentences 14 are considered to take place in location A The entities mentioned in these events are connected to location A in the graph The second phase involves using the constructed knowledge graph to generate textual descriptions of the entities we have extracted also known as flavortext This involves generating descriptions of what a player sees when they enter a location and short blurbs for each object and character These descriptions need to not only be faithful to the information present in the knowledge graph and the overall story plot but to also contain flavor and be interesting for the player Here we approach the problem of description generation by taking inspiration from conditional transformerbased generation methods BIBREF20 Our approach is outlined in Figure FIGREF11 and an example description shown in Figure FIGREF2 For any given entity in the story we first locate it in the story plot and then construct a prompt which consists of the entire story up to and including the sentence when the entity is first mentioned in the story followed by a question asking to describe that entity With respect to prompts we found that more direct methods such as questionanswering were more consistent than openended sentence completion For example Q Who is the prince A often produced descriptions that were more faithful to the information already present about the prince in the story than You see the prince He islooks For our transformerbased generation we use a pretrained 355M GPT2 model 
BIBREF21 finetuned on a corpus of plot summaries collected from Wikipedia The plots used for finetuning are tailored specific to the genre of the story in order to provide more relevant generation for the target genre Additional details regarding the datasets used are provided in Section SECREF4 This method strikes a balance between knowledge graph verbalization techniques which often lack flavor and open ended generation which struggles to maintain semantic coherence In the rulebased approach we utilized the templates from the builtin text game generator of TextWorld BIBREF1 to generate the description for our graphs TextWorld is an opensource library that provides a way to generate textgame learning environments for training reinforcement learning agents using prebuilt grammars Two major templates involved here are the Room Intro Templates and Container Description Templates from TextWorld responsible for generating descriptions of locations and blurbs for objectscharacters respectively The location and objectcharacter information are taken from the knowledge graph constructed previously Example of Room Intro Templates This might come as a shock to you but youve just entered a locationname Example of Container Description Templates The locationname contains objectpersonname Each token surrounded by sign can be expanded using a select set of terminal tokens For instance entered could be filled with any of the following phrases here entered walked into fallen into moved into stumbled into come into Additional prefixes suffixes and adjectives were added to increase the relative variety of descriptions Unlike the neural methods the rulebased approach is not able to generate detailed and flavorful descriptions of the properties of the locationsobjectscharacters By virtue of the templates however it is much better at maintaining consistency with the information contained in the knowledge graph We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk The first evaluation tests the knowledge graph construction phase in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models The second study compares full gamesincluding description generation and game assembly which cant easily be isolated from graph constructiongenerated by different methods This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance Both studies are performed across two genres mystery and fairytales This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot The bot searched pages for both for plot sections as well as links to other potential stories From the process 695 fairytales and 536 mystery stories were compiled from two categories novels and short stories We note that the mysteries did not often contain many fantasy elements ie they consisted of mysteries set in our world such as Sherlock Holmes while the fairytales were much more removed from reality Details regarding how each of the studies were conducted and the corresponding setup are presented below We first select a subset of 10 stories randomly from each genre and then extract a knowledge graph using three different models Each participant is presented with the three graphs extracted from a single story in each 
genre and then asked to rank them on the basis of how coherent they were and how well the graphs match the genre The graphs resembles the one shown in in Figure FIGREF4 and are presented to the participant sequentially The exact order of the graphs and genres was also randomized to mitigate any potential latent correlations Overall this study had a total of 130 participantsThis ensures that on average graphs from every story were seen by 13 participants In addition to the neural AskBERT and rulesbased methods we also test a variation of the neural model which we dub to be the random approach The method of vertex extraction remains identical to the neural method but we instead connect the vertices randomly instead of selecting the most confident according to the QA model We initialize the graph with a starting location entity Then we randomly sample from the vertex set and connect it to a randomly sampled location in the graph until every vertex has been connected This ablation in particular is designed to test the ability of our neural model to predict relations between entities It lets us observe how accurately linking related vertices effects each of the metrics that we test for For a fair comparison between the graphs produced by different approaches we randomly removed some of the nodes and edges from the initial graphs so that the maximum number of locations per graph and the maximum number of objectspeople per location in each story genre are the same The results are shown in Table TABREF20 We show the median rank of each of the models for both questions across the genres Ranked data is generally closely interrelated and so we perform Friedmans test between the three models to validate that the results are statistically significant This is presented as the pvalue in table asterisks indicate significance at p005 In cases where we make comparisons between specific pairs of models when necessary we additionally perform the MannWhitney U test to ensure that the rankings differed significantly In the mystery genre the rulesbased method was often ranked first in terms of genre resemblance followed by the neural and random models This particular result was not statistically significant however likely indicating that all the models performed approximately equally in this category The neural approach was deemed to be the most coherent followed by the rules and random For the fairytales the neural model ranked higher on both of the questions asked of the participants In this genre the random neural model also performed better than the rules based approach Tables TABREF18 and TABREF19 show the statistics of the constructed knowledge graphs in terms of vertices and edges We see that the rulesbased graph construction has a lower number of locations characters and relations between entities but far more objects in general The greater number of objects is likely due to the rulesbased approach being unable to correctly identify locations and characters The gap between the methods is less pronounced in the mystery genre as opposed to the fairytales in fact the rulesbased graphs have more relations than the neural ones The random and neural models have the same number of entities in all categories by construction but random in general has lower variance on the number of relations found In this case as well the variance is lower for mystery as opposed to fairytales When taken in the context of the results in Table TABREF20 it appears to indicate that leveraging thematic commonsense in the form of 
AskBERT for graph construction directly results in graphs that are more coherent and maintain genre more easily This is especially true in the case of the fairytales where the thematic and everyday commonsense diverge more than than in the case of the mysteries This participant study was designed to test the overall game formulation process encompassing both phases described in Section SECREF3 A single story from each genre was chosen by hand from the 10 stories used for the graph evaluation process From the knowledge graphs for this story we generate descriptions using the neural rules and random approaches described previously Additionally we introduce a humanauthored game for each story here to provide an additional benchmark This author selected was familiar with textadventure games in general as well as the genres of detective mystery and fairy tale To ensure a fair comparison we ensure that the maximum number of locations and maximum number of charactersobjects per location matched the other methods After setting general format expectations the author read the selected stories and constructed knowledge graphs in a corresponding three step process of identifying the n most important entities in the story mapping positional relationships between entities and then synthesizing flavor text for the entities based off of said location the overall story plot and background topic knowledge Once the knowledge graph and associated descriptions are generated for a particular story they are then automatically turned into a fully playable textgame using the text game engine Evennia Evennia was chosen for its flexibility and customization as well as a convenient web client for end user testing The data structures were translated into builder commands within Evennia that constructed the various layouts flavor text and rules of the game world Users were placed in one room out of the different world locations within the game they were playing and asked to explore the game world that was available to them Users achieved this by moving between rooms and investigating objects Each time a new room was entered or object investigated the players total number of explored entities would be displayed as their score Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre The completion criteria for each game is collect half the total score possible in the game ie explore half of all possible rooms and examine half of all possible entities This provided the participant with multiple possible methods of finishing a particular game On completion the participant was asked to rank the two games according to overall perceived coherence interestingness and adherence to the genre We additionally provided a required initial tutorial game which demonstrated all of these mechanics The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations We had 75 participants in total 39 for mystery and 36 for fairytales As each player played the neural model created game and one from each of the other approachesthis gave us 13 on average for the other approaches in the mystery genre and 12 for fairytales The summary of the results of the full game study is shown in Table TABREF23 As the comparisons made in this study are all made pairwise between our neural model and one of the baselinesthey are presented in terms of what percentage of participants prefer the baseline game over the neural 
game Once again as this is highly interrelated ranked data we perform the MannWhitney U test between each of the pairs to ensure that the rankings differed significantly This is also indicated on the table In the mystery genre the neural approach is generally preferred by a greater percentage of participants than the rules or random The humanmade game outperforms them all A significant exception to is that participants thought that the rulesbased game was more interesting than the neural game The trends in the fairytale genre are in general similar with a few notable deviations The first deviation is that the rulesbased and random approaches perform significantly worse than neural in this genre We see also that the neural game is as coherent as the humanmade game As in the previous study we hypothesize that this is likely due to the rulesbased approach being more suited to the mystery genre which is often more mundane and contains less fantastical elements By extension we can say that thematic commonsense in fairytales has less overlap with everyday commonsense than for mundane mysteries This has a few implications one of which is that this theme specific information is unlikely to have been seen by OpenIE5 before This is indicated in the relatively improved performance of the rulesbased model in this genre across in terms of both interestingness and coherenceThe genre difference can also be observed in terms of the performance of the random model This model also lacking when compared to our neural model across all the questions asked especially in the fairytale setting This appears to imply that filling in gaps in the knowledge graph using thematically relevant information such as with AskBERT results in more interesting and coherent descriptions and games especially in settings where the thematic commonsense diverges from everyday commonsense Procedural world generation systems are required to be semantically consistent comply with thematic and everyday commonsense understanding and maintain overall interestingness We describe an approach that transform a linear reading experience in the form of a story plot into a interactive narrative experience Our method AskBERT extracts and fills in a knowledge graph using thematic commonsense and then uses it as a skeleton to flesh out the rest of the world A key insight from our human participant study reveals that the ability to construct a thematically consistent knowledge graph is critical to overall perceptions of coherence and interestingness particularly when the theme diverges from everyday commonsense understanding </s>
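The AskBERT vertex-extraction loop described above can be sketched with the Hugging Face question-answering pipeline. This is a hedged illustration, not the authors' exact configuration: the checkpoint name is a commonly available SQuAD 2.0 stand-in (the paper uses an ALBERT QA model), and the masking token, confidence threshold, and stopping condition are assumptions.

```python
from transformers import pipeline

# Extractive QA model with SQuAD 2.0-style "no answer" support; checkpoint is an illustrative stand-in.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def extract_vertices(story: str, question: str, max_entities: int = 20, min_score: float = 0.1):
    """Repeatedly ask the same question, masking out each answer, until the model prefers no answer."""
    context, found = story, []
    for _ in range(max_entities):
        out = qa(question=question, context=context, handle_impossible_answer=True)
        answer = out["answer"].strip()
        if not answer or out["score"] < min_score:  # no-answer option won, or confidence too low
            break
        found.append(answer)
        # blank out the first occurrence of the span so the next query surfaces a new entity
        context = context.replace(answer, "[MASKED]", 1)
    return found

def match_vertex(answer: str, vertices):
    """Attach a QA answer to the extracted vertex with the best word-token overlap."""
    tokens = set(answer.lower().split())
    return max(vertices, key=lambda v: len(tokens & set(v.lower().split())))

story_text = "Rapunzel lives in a tall tower deep in the forest. A witch visits the tower every night."
characters = extract_vertices(story_text, "Who is a character in the story?")
locations = extract_vertices(story_text, "Where is a location in the story?")
```

Graph construction then proceeds by querying questions such as "What location can I visit from x?" for each extracted location and using `match_vertex` to tie the answer back to the vertex set, with relation confidence accumulated from the QA model's token probabilities as described above.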
<s>[INST] Generating Classical Chinese Poems from Vernacular Chinese [/INST] During thousands of years millions of classical Chinese poems have been written They contain ancient poets emotions such as their appreciation for nature desiring for freedom and concerns for their countries Among various types of classical poetry quatrain poems stand out On the one hand their aestheticism and terseness exhibit unique elegance On the other hand composing such poems is extremely challenging due to their phonological tonal and structural restrictions Most previous models for generating classical Chinese poems BIBREF0 BIBREF1 are based on limited keywords or characters at fixed positions eg acrostic poems Since users could only interfere with the semantic of generated poems using a few input words models control the procedure of poem generation In this paper we proposed a novel model for classical Chinese poem generation As illustrated in Figure FIGREF1 our model generates a classical Chinese poem based on a vernacular Chinese paragraph Our objective is not only to make the model generate aesthetic and terse poems but also keep rich semantic of the original vernacular paragraph Therefore our model gives users more control power over the semantic of generated poems by carefully writing the vernacular paragraph Although a great number of classical poems and vernacular paragraphs are easily available there exist only limited humanannotated pairs of poems and their corresponding vernacular translations Thus it is unlikely to train such poem generation model using supervised approaches Inspired by unsupervised machine translation UMT BIBREF2 we treated our task as a translation problem namely translating vernacular paragraphs to classical poems However our work is not just a straightforward application of UMT In a training example for UMT the length difference of source and target languages are usually not large but this is not true in our task Classical poems tend to be more concise and abstract while vernacular text tends to be detailed and lengthy Based on our observation on goldstandard annotations vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems Therefore such discrepancy leads to two main problems during our preliminary experiments 1 Undertranslation when summarizing vernacular paragraphs to poems some vernacular sentences are not translated and ignored by our model Take the last two vernacular sentences in Figure FIGREF1 as examples they are not covered in the generated poem 2 Overtranslation when expanding poems to vernacular paragraphs certain words are unnecessarily translated for multiple times For example the last sentence in the generated poem of Figure FIGREF1 as green as sapphire is backtranslated as as green as as as sapphire Inspired by the phrase segmentation schema in classical poems BIBREF3 we proposed the method of phrasesegmentationbased padding to handle with undertranslation By padding poems based on the phrase segmentation custom of classical poems our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of undertranslation Inspired by Paulus2018ADR we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words Experiments show our method can effectively decrease the possibility of overtranslation The contributions of our work are threefold 1 We proposed a novel task for unsupervised Chinese 
poem generation from vernacular text 2 We proposed using phrasesegmentationbased padding and reinforcement learning to address two important problems in this task namely undertranslation and overtranslation 3 Through extensive experiments we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems Human evaluation shows our models are able to generate high quality poems which are comparable to amateur poems Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems Based on LSTM Zhang and Lapata Zhang2014ChinesePG purposed generating poem lines incrementally by taking into account the history of what has been generated so far Yan Yan2016iPA proposed a polishing generation schema each poem line is generated incrementally and iteratively by refining each line onebyone Wang et al Wang2016ChinesePG and Yi et al Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the users intent There are also researches that focus on other aspects of poem generation Yang et al Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach Xu et al Xu2018HowII explored generating Chinese poems from images While most previous works generate poems based on topic words our work targets at a novel task generating poems from vernacular Chinese paragraphs Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4 BIBREF5 unsupervised machine translation BIBREF6 BIBREF2 does not rely on humanlabeled parallel corpora for training This technique is proved to greatly improve the performance of lowresource languages translation systems eg EnglishUrdu translation The unsupervised machine translation framework is also applied to various other tasks eg image captioning BIBREF7 text style transfer BIBREF8 speech to text translation BIBREF9 and clinical text simplification BIBREF10 The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available However in previous tasks that adopt the UMT framework the abstraction levels of source and target language are the same This is not the case for our task UnderTranslation OverTranslation Both are troublesome problems for neural sequencetosequence models Most previous related researches adopt the coverage mechanism BIBREF11 BIBREF12 BIBREF13 However as far as we know there were no successful attempt applying coverage mechanism to transformerbased models BIBREF14 We transform our poem generation task as an unsupervised machine translation problem As illustrated in Figure FIGREF1 based on the recently proposed UMT framework BIBREF2 our model is composed of the following components Encoder textbf Es and decoder textbf Ds for vernacular paragraph processing Encoder textbf Et and decoder textbf Dt for classical poem processing where textbf Es or textbf Et takes in a vernacular paragraph or a classical poem and converts it into a hidden representation and textbf Ds or textbf Dt takes in the hidden representation and converts it into a vernacular paragraph or a poem Our model relies on a vernacular texts corpus textbf emph S and a poem corpus textbf emph T We denote S and T as instances in textbf emph S and textbf emph T respectively The training of our model relies on three procedures namely parameter initialization language modeling and backtranslation We will give 
detailed introduction to each procedure Parameter initialization As both vernacular and classical poem use Chinese characters we initialize the character embedding of both languages in one common space the same character in two languages shares the same embedding This initialization helps associate characters with their plausible translations in the other language Language modeling It helps the model generate texts that conform to a certain language A welltrained language model is able to detect and correct minor lexical and syntactic errors We train the language models for both vernacular and classical poem by minimizing the following loss where SN or TN is generated by adding noise drop swap or blank a few words in S or T Backtranslation Based on a vernacular paragraph S we generate a poem TS using textbf Es and textbf Dt we then translate TS back into a vernacular paragraph STS textbf Dstextbf EtTS Here S could be used as gold standard for the backtranslated paragraph STs In this way we could turn the unsupervised translation into a supervised task by maximizing the similarity between S and STS The same also applies to using poem T as gold standard for its corresponding backtranslation TST We define the following loss Note that mathcal Lbt does not back propagate through the generation of TS and ST as we observe no improvement in doing so When training the model we minimize the composite loss where alpha 1 and alpha 2 are scaling factors During our early experiments we realize that the naive UMT framework is not readily applied to our task Classical Chinese poems are featured for its terseness and abstractness They usually focus on depicting broad poetic images rather than details We collected a dataset of classical Chinese poems and their corresponding vernacular translations the average length of the poems is 320 characters while for vernacular translations it is 733 The huge gap in sequence length between source and target language would induce overtranslation and undertranslation when training UMT models In the following sections we explain the two problems and introduce our improvements By nature classical poems are more concise and abstract while vernaculars are more detailed and lengthy to express the same meaning a vernacular paragraph usually contains more characters than a classical poem As a result when summarizing a vernacular paragraph S to a poem TS TS may not cover all information in S due to its length limit In real practice we notice the generated poems usually only cover the information in the front part of the vernacular paragraph while the latter part is unmentioned To alleviate undertranslation we propose phrase segmentationbased padding Specifically we first segment each line in a classical poem into several subsequences we then join these subsequences with the special padding tokens p During training the padded lines are used instead of the original poem lines As illustrated in Figure FIGREF10 padding would create better alignments between a vernacular paragraph and a prolonged poem making it more likely for the latter part of the vernacular paragraph to be covered in the poem As we mentioned before the length of the vernacular translation is about twice the length of its corresponding classical poem so we pad each segmented line to twice its original length According to Ye jia1984 to present a stronger sense of rhythm each type of poem has its unique phrase segmentation schema for example most sevencharacter quatrain poems adopt the 223 schema ie each quatrain 
line contains 3 phrases the first second and third phrase contains 2 2 3 characters respectively Inspired by this law we segment lines in a poem according to the corresponding phrase segmentation schema In this way we could avoid characters within the scope of a phrase to be cut apart thus best preserve the semantic of each phraseBIBREF15 In NMT when decoding is complete the decoder would generate an EOStoken indicating it has reached the end of the output sequence However when expending a poem T into a vernacular Chinese paragraph ST due to the conciseness nature of poems after finishing translating every source character in T the output sequence ST may still be much shorter than the expected length of a poems vernacular translation As a result the decoder would believe it has not finished decoding Instead of generating the EOStoken the decoder would continue to generate new output characters from previously translated source characters This would cause the decoder to repetitively output a piece of text many times To remedy this issue in addition to minimizing the original loss function mathcal L we propose to minimize a specific discrete metric which is made possible with reinforcement learning We define repetition ratio RRS of a paragraph S as where vocabS refers to the number of distinctive characters in S lenS refers the number of all characters in S Obviously if a generated sequence contains many repeated characters it would have high repetition ratio Following the selfcritical policy gradient training BIBREF16 we define the following loss function where tau is a manually set threshold Intuitively minimizing mathcal Lrl is equivalent to maximizing the conditional likelihood of the sequence S given STS if its repetition ratio is lower than the threshold tau Following BIBREF17 we revise the composite loss as where alpha 1 alpha 2 alpha 3 are scaling factors The objectives of our experiment are to explore the following questions 1 How much do our models improve the generated poems Section SECREF23 2 What are characteristics of the input vernacular paragraph that lead to a good generated poem Section SECREF26 3 What are weaknesses of generated poems compared to human poems Section SECREF27 To this end we built a dataset as described in Section SECREF18 Evaluation metrics and baselines are described in Section SECREF21 and SECREF22 For the implementation details of building the dataset and models please refer to supplementary materials Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems the vernacular literature corpus contains 337K short paragraphs from 281 famous books the corpus covers various literary forms including prose fiction and essay Note that our poem corpus and a vernacular corpus are not aligned We further split the two corpora into a training set and a validation set Test Set From online resources we collected 487 sevencharacter quatrain poems from Tang Poems and Song Poems as well as their corresponding high quality vernacular translations These poems could be used as gold standards for poems generated from their corresponding vernacular translations Table TABREF11 shows the statistics of our training validation and test set Perplexity Perplexity reflects the probability a model generates a certain poem Intuitively a better model would yield higher probability lower perplexity on the gold poem BLEU As a standard evaluation metric 
for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves a higher BLEU score, as it shares more n-grams with the gold poem. Human evaluation: while perplexity and BLEU are objective metrics that can be applied to a large-volume test set, evaluating Chinese poems is, after all, a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups: the expert group contains 15 people who hold a bachelor degree in Chinese literature, and the amateur group contains 15 people who hold a bachelor degree in other fields. All 30 human evaluators are native Chinese speakers. We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: is the generated poem grammatically and rhythmically well formed? 2) Semantic coherence: is the generated poem itself semantically coherent and meaningful? 3) Semantic preservability: does the generated poem preserve the semantics of the modern Chinese translation? 4) Poeticness: does the generated poem display the characteristics of a poem, and does it build a good poetic image? The grading scale for each perspective is from 1 to 5. We compare the performance of the following models: 1) LSTM BIBREF19; 2) Naive transformer BIBREF14; 3) Transformer + Anti OT (RL loss); 4) Transformer + Anti UT (phrase segmentation-based padding); 5) Transformer + Anti OT&UT. As illustrated in Table TABREF12 (ID 1), given the vernacular translation of each gold poem in the test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19, where Anti OT refers to adding the reinforcement loss to mitigate over-translation and Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation, and human evaluation results in Table TABREF20. According to the experimental results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe that all BLEU scores are fairly low; we believe this is reasonable, as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both Anti OT and Anti UT outperform the naive transformer, while Anti OT&UT shows the best performance; this demonstrates that alleviating under-translation and over-translation both help generate better poems. Specifically, Anti UT shows a bigger improvement than Anti OT. According to the human evaluation, among the four perspectives our Anti OT&UT brought the most score improvement in semantic preservability; this indicates that our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, LSTM may only keep the information in the beginning and end of the vernacular. We anticipated some score inconsistency between the expert group and the amateur group; however, after analyzing the human evaluation results, we did not observe a big divergence between the two groups. Chinese literature is famous not only for classical poems but also for various other literary forms. Song lyric, or ci, also gained tremendous popularity in its palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literary forms into quatrain
poems We expect the generated poems to not only keep the semantic of the original text but also demonstrate terseness rhythm and other characteristics of ancient poems Specifically we chose 20 famous fragments from four types of Chinese literature 5 fragments for each of modern prose modern poems pop song lyrics and Song lyrics We try to As no ground truth is available we resorted to human evaluation with the same grading standard in Section SECREF23 Comparing the scores of different literature forms we observe Song lyric achieves higher scores than the other three forms of modern literature It is not surprising as both Song lyric and quatrain poems are written in classical Chinese while the other three literature forms are all in vernacular Comparing the scores within the same literature form we observe the scores of poems generated from different paragraphs tends to vary After carefully studying the generated poems as well as their scores we have the following observation 1 In classical Chinese poems poetic images UTF8gbsn were widely used to express emotions and to build artistic conception A certain poetic image usually has some fixed implications For example autumn is usually used to imply sadness and loneliness However with the change of time poetic images and their implications have also changed According to our observation if a vernacular paragraph contains more poetic images used in classical literature its generated poem usually achieves higher score As illustrated in Table TABREF12 both paragraph 2 and 3 are generated from pop song lyrics paragraph 2 uses many poetic images from classical literature eg pear flowers makeup while paragraph 3 uses modern poetic images eg sparrows on the utility pole Obviously compared with poem 2 sentences in poem 3 seems more confusing as the poetic images in modern times may not fit well into the language model of classical poems 2 We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs For example in Table TABREF12 both paragraph 4 more descriptive and paragraph 5 more philosophical were selected from famous modern prose However compared with poem 4 poem 5 seems semantically more confusing We offer two explanations to the above phenomenon i Limited by the 28character restriction it is hard for quatrain poems to cover complex logical or philosophical explanation ii As vernacular paragraphs are more detailed and lengthy some information in a vernacular paragraph may be lost when it is summarized into a classical poem While losing some information may not change the general meaning of a descriptive paragraph it could make a big difference in a logical or philosophical paragraph We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human written poem We then present the 25 pairs to human evaluators and ask them to differentiate which poem is generated by human poet As demonstrated in Table TABREF29 although the general meanings in human poems and generated poems seem to be the same the wordings they employ are quite different This explains the low BLEU scores in Section 43 According to the test results in Table TABREF30 human evaluators only achieved 658 in mean accuracy This indicates the best generated poems are somewhat comparable to poems written by amateur poets We interviewed evaluators who achieved higher than 80 accuracy on their differentiation strategies Most interviewed evaluators state they realize 
the sentences in a human written poem are usually well organized to highlight a theme or to build a poetic image while the correlation between sentences in a generated poem does not seem strong As demonstrated in Table TABREF29 the last two sentences in both human poems marked as red echo each other well while the sentences in machinegenerated poems seem more independent This gives us hints on the weakness of generated poems While neural models may generate poems that resemble human poems lexically and syntactically its still hard for them to compete with human beings in building up good structures Addressing UnderTranslation In this part we wish to explore the effect of different phrase segmentation schemas on our phrase segmentationbased padding According to Ye jia1984 most sevencharacter quatrain poems adopt the 223 segmentation schema As shown in examples in Figure FIGREF31 we compare our phrase segmentationbased padding 223 schema to two less common schemas ie 232 and 322 schema we report our experiment results in Table TABREF32 The results show our 223 segmentationschema greatly outperforms 232 and 322 schema in both perplexity and BLEU scores Note that the BLEU scores of 232 and 322 schema remains almost the same as our naive baseline Without padding According to the observation we have the following conclusions 1 Although padding better aligns the vernacular paragraph to the poem it may not improve the quality of the generated poem 2 The padding tokens should be placed according to the phrase segmentation schema of the poem as it preserves the semantic within the scope of each phrase Addressing OverTranslation To explore the effect of our reinforcement learning policy on alleviating overtranslation we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set We found naive transformer achieves 408 in repetition ratio while our Anti OT achieves 349 Given the repetition ratio of vernacular paragraphs written by human beings in our validation set is 301 the experiment results demonstrated our RL loss effectively alleviate overtranslation which in turn leads to better generated poems In this paper we proposed a novel task of generating classical Chinese poems from vernacular paragraphs We adapted the unsupervised machine translation model to our task and meanwhile proposed two novel approaches to address the undertranslation and overtranslation problems Experiments show that our task can give users more controllability in generating poems In addition our approaches are very effective to solve the problems when the UMT model is directly used in this task In the future we plan to explore 1 Applying the UMT model in the tasks where the abstraction levels of source and target languages are different eg unsupervised automatic summarization 2 Improving the quality of generated poems via better structure organization approaches </s>
<s>[INST] Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever [/INST] Taskoriented dialogue system which helps users to achieve specific goals with natural language is attracting more and more research attention With the success of the sequencetosequence Seq2Seq models in text generation BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 several works tried to model the taskoriented dialogue as the Seq2Seq generation of response from the dialogue history BIBREF5 BIBREF6 BIBREF7 This kind of modeling scheme frees the taskoriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules Different from typical text generation the successful conversations for taskoriented dialogue system heavily depend on accurate knowledge base KB queries Taking the dialogue in Figure FIGREF1 as an example to answer the drivers query on the gas station the dialogue system is required to retrieve the entities like 200 Alester Ave and Valero For the taskoriented system based on Seq2Seq generation there is a trend in recent study towards modeling the KB query as an attention network over the entire KB entity representations hoping to learn a model to pay more attention to the relevant entities BIBREF6 BIBREF7 BIBREF8 BIBREF9 Though achieving good endtoend dialogue generation with overtheentireKB attention mechanism these methods do not guarantee the generation consistency regarding KB entities and sometimes yield responses with conflict entities like Valero is located at 899 Ames Ct for the gas station query as shown in Figure FIGREF1 In fact the correct address for Valero is 200 Alester Ave A consistent response is relatively easy to achieve for the conventional pipeline systems because they query the KB by issuing API calls BIBREF10 BIBREF11 BIBREF12 and the returned entities which typically come from a single KB row are consistently related to the object like the gas station that serves the users request This indicates that a response can usually be supported by a single KB row Its promising to incorporate such observation into the Seq2Seq dialogue generation model since it encourages KB relevant generation and avoids the model from producing responses with conflict entities To achieve entityconsistent generation in the Seq2Seq taskoriented dialogue system we propose a novel framework which query the KB in two steps In the first step we introduce a retrieval module KBretriever to explicitly query the KB Inspired by the observation that a single KB row usually supports a response given the dialogue history and a set of KB rows the KBretriever uses a memory network BIBREF13 to select the most relevant row The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities In the second step we further perform attention mechanism to address the most correlated KB column Finally we adopt the copy mechanism to incorporate the retrieved KB entity Since dialogue dataset is not typically annotated with the retrieval results training the KBretriever is nontrivial To make the training feasible we propose two methods 1 we use a set of heuristics to derive the training data and train the retriever in a distant supervised fashion 2 we use GumbelSoftmax BIBREF14 as an approximation of the nondifferentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model Experiments on two publicly available datasets Camrest BIBREF11 and 
InCar Assistant BIBREF6 confirm the effectiveness of the KBretriever Both the retrievers trained with distantsupervision and GumbelSoftmax technique outperform the compared systems in the automatic and human evaluations Analysis empirically verifies our assumption that more than 80 responses in the dataset can be supported by a single KB row and better retrieval results lead to better taskoriented dialogue generation performance In this section we will describe the input and output of the endtoend taskoriented dialogue system and the definition of Seq2Seq taskoriented dialogue generation Given a dialogue between a user u and a system s we follow eric2017SIGDial and represent the kturned dialogue utterances as lbrace u1 s1 u2 s2 uk skrbrace At the itextth turn of the dialogue we aggregate dialogue context which consists of the tokens of u1 s1 si1 ui and use mathbf x x1 x2 xm to denote the whole dialogue history word by word where m is the number of tokens in the dialogue history In this paper we assume to have the access to a relationaldatabaselike KB B which consists of mathcal R rows and mathcal C columns The value of entity in the jtextth row and the itextth column is noted as vj i We define the Seq2Seq taskoriented dialogue generation as finding the most likely response mathbf y according to the input dialogue history mathbf x and KB B Formally the probability of a response is defined as where yt represents an output token In this section we describe our framework for endtoend taskoriented dialogues The architecture of our framework is demonstrated in Figure FIGREF3 which consists of two major components including an memory networkbased retriever and the seq2seq dialogue generation with KB Retriever Our framework first uses the KBretriever to select the most relevant KB row and further filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities While in decoding we further perform the attention mechanism to choose the most probable KB column We will present the details of our framework in the following sections In our encoder we adopt the bidirectional LSTM BIBREF15 to encode the dialogue history mathbf x which captures temporal relationships within the sequence The encoder first map the tokens in mathbf x to vectors with embedding function phi textemb and then the BiLSTM read the vector forwardly and backwardly to produce contextsensitive hidden states mathbf h1 mathbf h2 mathbf hm by repeatedly applying the recurrence mathbf hitextBiLSTMleft phi textembleft xiright mathbf hi1right Here we follow eric2017SIGDial to adopt the attentionbased decoder to generation the response word by word LSTM is also used to represent the partially generated output sequence y1 y2 yt1 as tildemathbf h1 tildemathbf h2 tildemathbf ht For the generation of next token yt their model first calculates an attentive representation tildemathbf hprime t of the dialogue history as Then the concatenation of the hidden representation of the partially outputted sequence tildemathbf ht and the attentive dialogue history representation tildemathbf hprime t are projected to the vocabulary space mathcal V by U as to calculate the score logit for the next token generation The probability of next token yt is finally calculated as As shown in section SECREF7 we can see that the generation of tokens are just based on the dialogue history attention which makes the model ignorant to the KB entities In this section we present how to query the KB explicitly in two 
steps for improving the entity consistence which first adopt the KBretriever to select the most relevant KB row and the generation of KB entities from the entitiesaugmented decoder is constrained to the entities within the most probable row thus improve the entity generation consistency Next we perform the column attention to select the most probable KB column Finally we show how to use the copy mechanism to incorporate the retrieved entity while decoding In our framework our KBretriever takes the dialogue history and KB rows as inputs and selects the most relevant row This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13 and we use a memory network to model this process In the following sections we will first describe how to represent the inputs then we will talk about our memory networkbased retriever We encode the dialogue history by adopting the neural bagofwords BoW followed the original paper BIBREF13 Each token in the dialogue history is mapped into a vector by another embedding function phi textembprime x and the dialogue history representation mathbf q is computed as the sum of these vectors mathbf q sum mi1 phi textembprime xi In this section we describe how to encode the KB row Each KB cell is represented as the cell value v embedding as mathbf cj k phi textvaluevj k and the neural BoW is also used to represent a KB row mathbf rj as mathbf rj sum k1mathcal C mathbf cjk We model the KB retrieval process as selecting the row that mostlikely supports the response generation Memory network BIBREF13 has shown to be effective to model this kind of selection For a nhop memory network the model keeps a set of input matrices lbrace R1 R2 Rn1rbrace where each Ri is a stack of mathcal R inputs mathbf ri1 mathbf ri2 mathbf rimathcal R The model also keeps query mathbf q1 as the input A single hop memory network computes the probability mathbf aj of selecting the jtextth input as For the multihop cases layers of single hop memory network are stacked and the query of the i1textth layer network is computed as and the output of the last layer is used as the output of the whole network For more details about memory network please refer to the original paper BIBREF13 After getting mathbf a we represent the retrieval results as a 01 matrix T in lbrace 0 1rbrace mathcal Rtimes mathcal C where each element in T is calculated as In the retrieval result Tj k indicates whether the entity in the jtextth row and the ktextth column is relevant to the final generation of the response In this paper we further flatten T to a 01 vector mathbf t in lbrace 0 1rbrace mathcal E where mathcal E equals mathcal Rtimes mathcal C as our retrieval row results After getting the retrieved row result that indicates which KB row is the most relevant to the generation we further perform column attention in decoding time to select the probable KB column For our KB column selection following the eric2017SIGDial we use the decoder hidden state tildemathbf h1 tildemathbf h2 tildemathbf ht to compute an attention score with the embedding of column attribute name The attention score mathbf cin Rmathcal E then become the logits of the column be selected which can be calculated as where mathbf cj is the attention score of the jtextth KB column mathbf kj is represented with the embedding of word embedding of KB column name Wprime 1 Wprime 2 and mathbf tT are trainable parameters of the model After the row selection and column selection we can define the final retrieved KB entity 
score as the elementwise dot between the row retriever result and the column selection score which can be calculated as where the vt indicates the final KB retrieved entity score Finally we follow eric2017SIGDial to use copy mechanism to incorporate the retrieved entity which can be defined as where mathbf ots dimensionality is mathcal V mathcal E In mathbf vt lower mathcal V is zero and the restmathcal E is retrieved entity scores As mentioned in section SECREF9 we adopt the memory network to train our KBretriever However in the Seq2Seq dialogue generation the training data does not include the annotated KB row retrieval results which makes supervised training the KBretriever impossible To tackle this problem we propose two training methods for our KBrowretriever 1 In the first method inspired by the recent success of distant supervision in information extraction BIBREF16 BIBREF17 BIBREF18 BIBREF19 we take advantage of the similarity between the surface string of KB entries and the reference response and design a set of heuristics to extract training data for the KBretriever 2 In the second method instead of training the KBretriever as an independent component we train it along with the training of the Seq2Seq dialogue generation To make the retrieval process in Equation DISPLAYFORM13 differentiable we use GumbelSoftmax BIBREF14 as an approximation of the operatornamewithlimitsargmax during training Although its difficult to obtain the annotated retrieval data for the KBretriever we can guess the most relevant KB row from the reference response and then obtain the weakly labeled data for the retriever Intuitively for the current utterance in the same dialogue which usually belongs to one topic and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the utterance In our training with distant supervision we further simplify our assumption and assume that one dialogue which is usually belongs to one topic and can be supported by the most relevant KB row which means for a kturned dialogue we construct k pairs of training instances for the retriever and all the inputs u1 s1 si1 ui mid i le k are associated with the same weakly labeled KB retrieval result T In this paper we compute each rows similarity to the whole dialogue and choose the most similar row as T We define the similarity of each row as the number of matched spans with the surface form of the entities in the row Taking the dialogue in Figure FIGREF1 for an example the similarity of the 4textth row equals to 4 with 200 Alester Ave gas station Valero and road block nearby matching the dialogue context and the similarity of the 7textth row equals to 1 with only road block nearby matching In our model with the distantly supervised retriever the retrieval results serve as the input for the Seq2Seq generation During training the Seq2Seq generation we use the weakly labeled retrieval result T as the input In addition to treating the row retrieval result as an input to the generation model and training the kbrowretriever independently we can train it along with the training of the Seq2Seq dialogue generation in an endtoend fashion The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable and the training signal from the generation model cannot be passed to the parameters of the retriever Gumbelsoftmax technique BIBREF14 has been shown an effective approximation to the discrete variable and proved to work in sentence representation In this 
paper we adopt the GumbelSoftmax technique to train the KB retriever We use as the approximation of T where mathbf gj are iid samples drawn from textGumbel01 and tau is a constant that controls the smoothness of the distribution Ttextapproxj replaces Ttextj in equation DISPLAYFORM13 and goes through the same flattening and expanding process as mathbf V to get mathbf vmathbf ttextapproxprime and the training signal from Seq2Seq generation is passed via the logit To make training with GumbelSoftmax more stable we first initialize the parameters by pretraining the KBretriever with distant supervision and further finetuning our framework We choose the InCar Assistant dataset BIBREF6 including three distinct domains navigation weather and calendar domain For weather domain we follow wen2018sequence to separate the highest temperature lowest temperature and weather attribute into three different columns For calendar domain there are some dialogues without a KB or incomplete KB In this case we padding a special token in these incomplete KBs Our framework is trained separately in these three domains using the same trainvalidationtest split sets as eric2017SIGDial To justify the generalization of the proposed model we also use another public CamRest dataset BIBREF11 and partition the datasets into training validation and testing set in the ratio 311 Especially we hired some human experts to format the CamRest dataset by equipping the corresponding KB to every dialogues All hyperparameters are selected according to validation set We use a threehop memory network to model our KBretriever The dimensionalities of the embedding is selected from lbrace 100 200rbrace and LSTM hidden units is selected from lbrace 50 100 150 200 350rbrace The dropout we use in our framework is selected from lbrace 025 05 075rbrace and the batch size we adopt is selected from lbrace 12rbrace L2 regularization is used on our model with a tension of 5times 106 for reducing overfitting For training the retriever with distant supervision we adopt the weight typing trick BIBREF20 We use Adam BIBREF21 to optimize the parameters in our model and adopt the suggested hyperparameters for optimization We adopt both the automatic and human evaluations in our experiments We compare our model with several baselines including Attn seq2seq BIBREF22 A model with simple attention over the input context at each time step during decoding PtrUNK BIBREF23 PtrUNK is the model which augments a sequencetosequence architecture with attentionbased copy mechanism over the encoder context KV Net BIBREF6 The model adopted and argumented decoder which decodes over the concatenation of vocabulary and KB entities which allows the model to generate entities Mem2Seq BIBREF7 Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output DSR BIBREF9 DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding In InCar dataset for the Attn seq2seq PtrUNK and Mem2seq we adopt the reported results from madotto2018mem2seq In CamRest dataset for the Mem2Seq we adopt their opensourced code to get the results while for the DSR we run their code on the same dataset to obtain the results Follow the prior works BIBREF6 BIBREF7 BIBREF9 we adopt the BLEU and the Micro Entity F1 to evaluate our model performance The experimental results are illustrated in Table 
TABREF30. In the first block of Table TABREF30, we show the Human, rule-based and KV Net results reported from eric2017SIGDial. We argue that their results are not directly comparable, because their work uses the entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noting that our framework with the two methods still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework. In the second block of Table TABREF30, we can see that our framework trained with either distant supervision or Gumbel-Softmax beats all existing models on the two datasets. Our model outperforms each baseline on both BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves 25 improvement on the navigate domain, 18 improvement on the weather domain and 35 improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module, and that our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework. Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation. In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row. We define a response as being supported by the most relevant KB row if all the responded entities are included in that row. We study the proportion of these responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row, so correctly retrieving the supporting row should be beneficial. We further study the weather domain to examine the remaining exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is "It's not rainy today" and the related KB entity is "sunny". These cases provide challenges beyond the scope of this paper. If we consider this kind of case as being supported by a single row, the proportion in the weather domain is 99%. In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that have multiple entities. An utterance is considered consistent if it has multiple entities and these entities belong to the same row, which we annotated with distant supervision. The consistency results are shown in Table TABREF37. From this table, we can see that incorporating the retriever in the dialogue generation improves the consistency. To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distant supervision setting. We choose KBs with different numbers of rows, on a scale from 1 to 5, for the generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information would harm the dialogue generation
consistency To gain more insights into how the our retriever module influences the whole KB score distribution we visualized the KB entity probability at the decoding position where we generate the entity 200AlesterAve From the example Fig FIGREF38 we can see the 4textth row and the 1textth column has the highest probabilities for generating 200AlesterAve which verify the effectiveness of firstly selecting the most relevant KB row and further selecting the most relevant KB column We provide human evaluation on our framework and the compared models These responses are based on distinct dialogue history We hire several human experts and ask them to judge the quality of the responses according to correctness fluency and humanlikeness on a scale from 1 to 5 In each judgment the expert is presented with the dialogue history an output of a system with the name anonymized and the gold response The evaluation results are illustrated in Table TABREF37 Our framework outperforms other baseline models on all metrics according to Table TABREF37 The most significant improvement is from correctness indicating that our model can retrieve accurate entity from KB and generate more informative information that the users want to know Sequencetosequence Seq2Seq models in text generation BIBREF0 BIBREF1 BIBREF2 BIBREF3 BIBREF4 has gained more popular and they are applied for the opendomain dialogs BIBREF24 BIBREF25 in the endtoend training method Recently the Seq2Seq can be used for learning task oriented dialogs and how to query the structured KB is the remaining challenges Properly querying the KB has long been a challenge in the taskoriented dialogue system In the pipeline system the KB query is strongly correlated with the design of language understanding state tracking and policy management Typically after obtaining the dialogue state the policy management module issues an API call accordingly to query the KB With the development of neural network in natural language processing efforts have been made to replacing the discrete and predefined dialogue state with the distributed representation BIBREF10 BIBREF11 BIBREF12 BIBREF26 In our framework our retrieval result can be treated as a numeric representation of the API call return Instead of interacting with the KB via API calls more and more recent works tried to incorporate KB query as a part of the model The most popular way of modeling KB query is treating it as an attention network over the entire KB entities BIBREF6 BIBREF27 BIBREF8 BIBREF28 BIBREF29 and the return can be a fuzzy summation of the entity representations madotto2018mem2seqs practice of modeling the KB query with memory network can also be considered as learning an attentive preference over these entities wen2018sequence propose the implicit dialogue state representation to query the KB and achieve the promising performance Different from their modes we propose the KBretriever to explicitly query the KB and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities In this paper we propose a novel framework to improve entities consistency by querying KB in two steps In the first step inspired by the observation that a response can usually be supported by a single KB row we introduce the KB retriever to return the most relevant KB row which is used to filter the irrelevant KB entities and encourage consistent generation In the second step we further perform attention mechanism to select the most relevant KB 
column Experimental results show the effectiveness of our method Extensive analysis further confirms the observation and reveal the correlation between the success of KB query and the success of taskoriented dialogue generation We thank the anonymous reviewers for their helpful comments and suggestions This work was supported by the National Natural Science Foundation of China NSFC via grant 61976072 61632011 and 61772153 </s>
<s>[INST] From FiLM to Video: Multi-turn Question Answering with Multi-modal Context [/INST] Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 object detection BIBREF1 video action classification BIBREF2 etc They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 machine reading comprehension BIBREF4 etc There has also been an explosion of interest in tasks which combine multiple modalities such as audio vision and language together Some popular multimodal tasks combining these three modalities and their differences are highlighted in Table TABREF1 Given an image and a question related to the image the vqa challenge BIBREF5 tasked users with selecting an answer to the question BIBREF6 identified several sources of bias in the vqa dataset which led to deep neural models answering several questions superficially They found that in several instances deep architectures exploited the statistics of the dataset to select answers ignoring the provided image This prompted the release of vqa 20 BIBREF7 which attempts to balance the original dataset In it each question is paired to two similar images which have different answers Due to the complexity of vqa understanding the failures of deep neural architectures for this task has been a challenge It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning Extending question answering over images to videos BIBREF9 have proposed MovieQA where the task is to select the correct answer to a provided question given the movie clip on which it is based Intelligent systems that can interact with human users for a useful purpose are highly valuable To this end there has been a recent push towards moving from singleturn qa to multiturn dialogue which is a natural and intuitive setting for humans Among multimodal dialogue tasks visdial BIBREF10 provides an image and dialogue where each turn is a qa pair The task is to train a model to answer these questions within the dialogue The avsd challenge extends the visdial task from images to the audiovisual domain We present our modelname model for the avsd task modelname combines a hred for encoding and generating qadialogue with a novel FiLMbased audiovisual feature extractor for videos and an auxiliary multitask learningbased decoder for decoding a summary of the video It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants In Section SECREF2 we discuss existing literature on endtoend dialogue systems with a special focus on multimodal dialogue systems Section SECREF3 describes the avsd dataset In Section SECREF4 we present the architecture of our modelname model We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 With the availability of large conversational corpora from sources like Reddit and Twitter there has been a lot of recent work on endtoend modelling of dialogue for open domains BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response They observed this to be more challenging than machine translation tasks due the larger diversity of possible responses Among approaches that just use the 
previous utterance to generate the current response BIBREF13 proposed a response generation model based on the encoder decoder framework BIBREF14 also proposed an encoderdecoder based neural network architecture that uses the previous two utterances to generate the current response Among discriminative methods ie methods that produce a score for utterances from a set and then rank them BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context BIBREF16 extended prior work on encoderdecoderbased models to multiturn conversations They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance A higherlevel recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time In followup work BIBREF17 used a latent stochastic variable to condition the generation process which aided their model in producing longer coherent outputs that better retain the context Datasets and tasks BIBREF10 BIBREF18 BIBREF19 have also been released recently to study visualinput based conversations BIBREF10 train several generative and discriminative deep neural models for the visdial task They observe that on this task discriminative models outperform generative models and that models making better use of the dialogue history do better than models that do not use dialogue history at all Unexpectedly the performance between models that use the image features and models that do no use these features is not significantly different As we discussed in Section SECREF1 this is similar to the issues vqa models faced initially due to the imbalanced nature of the dataset which leads us to believe that language is a strong prior on the visdial dataset too BIBREF20 train two separate agents to play a cooperative game where one agent has to answer the other agents questions which in turn has to predict the fc7 features of the Image obtained from VGGNet Both agents are based on hred models and they show that agents finetuned with rl outperform agents trained solely with supervised learning BIBREF18 train both generative and discriminative deep neural models on the igc dataset where the task is to generate questions and answers to carry on a meaningful conversation BIBREF19 train hredbased models on GuessWhat dataset in which agents have to play a guessing game where one player has to find an object in the picture which the other player knows about and can answer questions about them Moving from imagebased dialogue to videobased dialogue adds further complexity and challenges Limited availability of such data is one of the challenges Apart from the avsd dataset there does not exist a video dialogue dataset to the best of our knowledge and the avsd data itself is fairly limited in size Extracting relevant features from videos also contains the inherent complexity of extracting features from individual frames and additionally requires understanding their temporal interaction The temporal nature of videos also makes it important to be able to focus on a varyinglength subset of video frames as the action which is being asked about might be happening within them There is also the need to encode the additional modality of audio which would be required for answering questions that rely on the audio track With 
limited size of publicly available datasets based on the visual modality learning useful features from high dimensional visual data has been a challenge even for the visdial dataset and we anticipate this to be an even more significant challenge on the avsd dataset as it involves videos On the avsd task BIBREF11 train an attentionbased audiovisual sceneaware dialogue model which we use as the baseline model for this paper They divide each video into multiple equalduration segments and from each of them extract video features using an I3D BIBREF21 model and audio features using a VGGish BIBREF22 model The I3D model was pretrained on Kinetics BIBREF23 dataset and the VGGish model was pretrained on Audio Set BIBREF24 The baseline encodes the current utterances question with a lstm BIBREF25 and uses the encoding to attend to the audio and video features from all the video segments and to fuse them together The dialogue history is modelled with a hierarchical recurrent lstm encoder where the input to the lower level encoder is a concatenation of questionanswer pairs The fused feature representation is concatenated with the question encoding and the dialogue history encoding and the resulting vector is used to decode the current answer using an lstm decoder Similar to the visdial models the performance difference between the best model that uses text and the best model that uses both text and video features is small This indicates that the language is a stronger prior here and the baseline model is unable to make good use of the highly relevant video Automated evaluation of both taskoriented and nontaskoriented dialogue systems has been a challenge BIBREF26 BIBREF27 too Most such dialogue systems are evaluated using perturn evaluation metrics since there is no suitable perdialogue metric as conversations do not need to happen in a deterministic ordering of turns These perturn evaluation metrics are mostly wordoverlapbased metrics such as BLEU METEOR ROUGE and CIDEr borrowed from the machine translation literature Due to the diverse nature of possible responses worldoverlap metrics are not highly suitable for evaluating these tasks Human evaluation of generated responses is considered the most reliable metric for such tasks but it is cost prohibitive and hence the dialogue system literature continues to rely widely on wordoverlapbased metrics The avsd dataset BIBREF28 consists of dialogues collected via amt Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video The Charades dataset has multiaction short videos and it provides text descriptions for these videos which the avsd challenge also distributes as the caption The avsd dataset has been collected using similar methodology as the visdial dataset In avsd each dialogue turn consists of a question and answer pair One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer The questioner sees three static frames from the video and has to ask questions The answerer sees the video and answers the questions asked by the questioner After 10 such qa turns the questioner wraps up by writing a summary of the video based on the conversation Dataset statistics such as the number of dialogues turns and words for the avsd dataset are presented in Table TABREF5 For the initially released prototype dataset the training set of the avsd dataset corresponds to videos taken from the training set of the Charades dataset while 
the validation and test sets of the avsd dataset correspond to videos taken from the validation set of the Charades dataset For the official dataset training validation and test sets are drawn from the corresponding Charades sets The Charades dataset also provides additional annotations for the videos such as action scene and object annotations which are considered to be external data sources by the avsd challenge for which there is a special subtask in the challenge The action annotations also include the start and end time of the action in the video Our modelname model is based on the hred framework for modelling dialogue systems In our model an utterancelevel recurrent lstm encoder encodes utterances and a dialoguelevel recurrent lstm encoder encodes the final hidden states of the utterancelevel encoders thus maintaining the dialogue state and dialogue coherence We use the final hidden states of the utterancelevel encoders in the attention mechanism that is applied to the outputs of the description video and audio encoders The attended features from these encoders are fused with the dialoguelevel encoders hidden states An utterancelevel decoder decodes the response for each such dialogue state following a question We also add an auxiliary decoding module which is similar to the response decoder except that it tries to generate the caption andor the summary of the video We present our model in Figure FIGREF2 and describe the individual components in detail below The utterancelevel encoder is a recurrent neural network consisting of a single layer of lstm cells The input to the lstm are word embeddings for each word in the utterance The utterance is concatenated with a special symbol eos marking the end of the sequence We initialize our word embeddings using 300dimensional GloVe BIBREF30 and then finetune them during training For words not present in the GloVe vocabulary we initialize their word embeddings from a random uniform distribution Similar to the utterancelevel encoder the description encoder is also a singlelayer lstm recurrent neural network Its word embeddings are also initialized with GloVe and then finetuned during training For the description we use the caption andor the summary for the video provided with the dataset The description encoder also has access to the last hidden state of the utterancelevel encoder which it uses to generate an attention map over the hidden states of its lstm The final output of this module is the attentionweighted sum of the lstm hidden states For the video encoder we use an I3D model pretrained on the Kinetics dataset BIBREF23 and extract the output of its Mixed7c layer for INLINEFORM0 30 for our models equidistant segments of the video Over these features we add INLINEFORM1 2 for our models FiLM BIBREF31 blocks which have been highly successful in visual reasoning problems Each FiLM block applies a conditional on the utterance encoding featurewise affine transformation on the features input to it ultimately leading to the extraction of more relevant features The FiLM blocks are followed by fully connected layers which are further encoded by a single layer recurrent lstm network The last hidden state of the utterancelevel encoder then generates an attention map over the hidden states of its lstm which is multiplied by the hidden states to provide the output of this module We also experimented with using convolutional Mixed5c features to capture spatial information but on the limited avsd dataset they did not yield any improvement When not 
using the FiLM blocks we use the final layer I3D features provided by the avsd organizers and encode them with the lstm directly followed by the attention step We present the video encoder in Figure FIGREF3 The audio encoder is structurally similar to the video encoder We use the VGGish features provided by the avsd challenge organizers Also similar to the video encoder when not using the FiLM blocks we use the VGGish features and encode them with the lstm directly followed by the attention step The audio encoder is depicted in Figure FIGREF4 The outputs of the encoders for past utterances descriptions video and audio together form the dialogue context INLINEFORM0 which is the input of the decoder We first combine past utterances using a dialoguelevel encoder which is a singlelayer lstm recurrent neural network The input to this encoder are the final hidden states of the utterancelevel lstm To combine the hidden states of these diverse modalities we found concatenation to perform better on the validation set than averaging or the Hadamard product The answer decoder consists of a singlelayer recurrent lstm network and generates the answer to the last question utterance At each timestep it is provided with the dialoguelevel state and produces a softmax over a vector corresponding to vocabulary words and stops when 30 words were produced or an end of sentence token is encountered The auxiliary decoder is functionally similar to the answer decoder The decoded sentence is the caption andor description of the video We use the Video Encoder state instead of the Dialoguelevel Encoder state as input since with this module we want to learn a better video representation capable of decoding the description For a given context embedding INLINEFORM0 at dialogue turn INLINEFORM1 we minimize the negative loglikelihood of the answer word INLINEFORM2 vocabulary size normalized by the number of words INLINEFORM3 in the ground truth response INLINEFORM4 LCt r 1Mm1MiV rtmi INLINEFORM5 where the probabilities INLINEFORM6 are given by the decoder LSTM output rtm1 ll rtm1 s02 sU0 1 v INLINEFORM0 else is given by scheduled sampling BIBREF32 and INLINEFORM1 is a symbol denoting the start of a sequence We optimize the model using the AMSGrad algorithm BIBREF33 and use a percondition random search to determine hyperparameters We train the model using the BLEU4 score on the validation set as our stopping citerion The avsd challenge tasks we address here are We train our modelname model for Task 1a and Task 2a of the challenge and we present the results in Table TABREF9 Our model outperforms the baseline model released by BIBREF11 on all of these tasks The scores for the winning team have been released to challenge participants and are also included Their approach however is not public as of yet We observe the following for our models Since the official test set has not been released publicly results reported on the official test set have been provided by the challenge organizers For the prototype test set and for the ablation study presented in Table TABREF24 we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below Our primary architectural differences over the baseline model are not concatenating the question answer pairs before encoding them the auxiliary decoder module and using the TimeExtended FiLM module for feature extraction These combined 
with using scheduled sampling and running hyperparameter optimization over the validation set to select hyperparameters give us the observed performance boost We observe that our models generate fairly relevant responses to questions in the dialogues and models with audiovisual inputs respond to audiovisual questions eg is there any voices or music correctly more often We conduct an ablation study on the effectiveness of different components eg text video and audio and present it in Table TABREF24 Our experiments show that We presented modelname a stateoftheart dialogue model for conversations about videos We evaluated the model on the official AVSD test set where it achieves a relative improvement of more than 16 over the baseline model on BLEU4 and more than 33 on CIDEr The challenging aspect of multimodal dialogue is fusing modalities with varying information density On AVSD it is easiest to learn from the input text while video features remain largely opaque to the decoder modelname uses a generalization of FiLM to video that conditions video feature extraction on a question However similar to related work absolute improvements of incorporating video features into dialogue are consistent but small Thus while our results indicate the suitability of our FiLM generalization they also highlight that applications at the intersection between language and video are currently constrained by the quality of video features and emphasizes the need for larger datasets </s>
<s>[INST] Civique: Using Social Media to Detect Urban Emergencies [/INST] With the surge in the use of social media microblogging sites like Twitter Facebook and Foursquare have become household words Growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage The heavy volume of social media posts tagged with users location information on microblogging website Twitter presents a unique opportunity to scan these posts These Short texts eg tweets on social media contain information about various events happening around the globe as people post about events and incidents alike Conventional web outlets provide emergency phone numbers ie 100 911 etc and are fast and accurate Our system on the other hand connects its users through a relatively newer platform ie social media and provides an alternative to these conventional methods In case of their failure or when such means are busyoccupied an alternative could prove to be life saving These real life events are reported on Twitter with different perspectives opinions and sentiment Every day people discuss events thousands of times across social media sites We would like to detect such events in case of an emergency Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet number of words and context to devise a classifier for event detection BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter BIBREF2 describe a system to automatically detect events about known entities from Twitter This work is highly specific to detection of events only related to known entities BIBREF3 discuss a system that returns a ranked list of relevant events given a user query Several research efforts have focused on identifying events in real time BIBREF4 BIBREF5 BIBREF6 BIBREF0 These include systems to detect emergent topics from Twitter in real time BIBREF4 BIBREF7 an online clustering technique for identifying tweets in real time BIBREF5 a system to detect localized events and also track evolution of such events over a period of time BIBREF6 Our focus is on detecting urban emergencies as events from Twitter messages We classify events ranging from natural disasters to fire break outs and accidents Our system detects whether a tweet which contains a keyword from a predecided list is related to an actual emergency or not It also classifies the event into its appropriate category and visualizes the possible location of the emergency event on the map We also support notifications to our users containing the contacts of specifically concerned authorities as per the category of their tweet The rest of the paper is as follows Section SECREF2 provides the motivation for our work and the challenges in building such a system Section SECREF3 describes the step by step details of our work and its results We evaluate our system and present the results in Section SECREF4 Section SECREF5 showcases our demonstrations in detail and Section SECREF6 concludes the paper by briefly describing the overall contribution implementation and demonstration In 2015 INLINEFORM0 of all unnatural deaths in India were caused by accidents and INLINEFORM1 by accidental fires Moreover the Indian subcontinent suffered seven earthquakes in 2015 with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected 
people and the concerned authorities Our work is motivated by the following factors a Social media is very accessible in the current scenario The Digital India initiative by the Government of India promotes internet activity and thus a proactive social media b As per the Internet trends reported in 2014 about 117 million Indians are connected to the Internet through mobile devices c A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion d Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life There are several challenges in building such an application a Such a system expects a tweet to be location tagged Otherwise event detection techniques to extract the spatiotemporal data from the tweet can be vague and lead to false alarms b Such a system should also be able to verify the users credibility as pranksters may raise false alarms c Tweets are usually written in a very informal language which requires a sophisticated language processing component to sanitize the tweet input before event detection d A channel with the concerned authorities should be established for them to take serious action on alarms raised by such a system e An urban emergency such as a natural disaster could affect communications severely in case of an earthquake or a cyclone communications channels like Internet connectivity may get disrupted easily In such cases our system may not be of help as it requires the user to be connected to the internet We address the above challenges and present our approach in the next section We propose a software architecture for Emergency detection and visualization as shown in figure FIGREF9 We collect data using Twitter API and perform language preprocessing before applying a classification model Tweets are labelled manually with emergencyand nonemergencylabels and later classified manually to provide labels according to the type of emergency they indicate We use the manually labeled data for training our classifiers We use traditional classification techniques such as Support Vector MachinesSVM and Naive BayesNB for training and perform 10fold cross validation to obtain fscores Later in real time our system uses the Twitter streaming APIs to get data preprocesses it using the same modules and detects emergencies using the classifiers built above The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities The preprocessing of Twitter data obtained is needed as it usually contains adhoc abbreviations phonetic substitutions URLs hashtags and a lot of misspelled words We use the following language processing modules for such corrections We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API We remove URLs special symbols like along with the user mentions Hashtags and any associated text We also replace special symbols by blank spaces and inculcate the module as shown in figure FIGREF9 An example of such a sample tweet cleaning is shown in table TABREF10 While tweeting users often express their emotions by stressing over a few characters in the word For example usage of words like hellpppp fiiiiiireeee ruuuuunnnnn druuuuuunnnkkk soooooooo actually corresponds to help fire run drunk so etc We use the compression module implemented by BIBREF8 for converting terms like pleeeeeeeaaaaaassseeee to please It is unlikely for an English word to 
contain the same character consecutively for three or more times We hence compress all the repeated windows of character length greater than two to two characters For example pleeeeeaaaassee is converted to pleeaassee Each window now contains two characters of the same alphabet in cases of repetition Let n be the number of windows obtained from the previous step We then apply brute force search over INLINEFORM0 possibilities to select a valid dictionary word Table TABREF13 contains sanitized sample output from our compression module for further processing Text Normalization is the process of translating adhoc abbreviations typographical errors phonetic substitution and ungrammatical structures used in text messaging Tweets and SMS to plain English Use of such language often referred as Chatting Language induces noise which poses additional processing challenges We use the normalization module implemented by BIBREF8 for text normalization Training process requires a Language Model of the target language and a parallel corpora containing aligned unnormalized and normalized word pairs Our language model consists of 15000 English words taken from various sources on the web Parallel corpora was collected from the following sources Stanford Normalization Corpora which consists of 9122 pairs of unnormalized and normalized words phrases The above corpora however lacked acronyms and short hand texts like 2mrw l8r b4 hlp flor which are frequently used in chatting We collected 215 pairs unnormalized to normalized wordphrase mappings via crowdsourcing Table TABREF16 contains input and normalized output from our module Users often make spelling mistakes while tweeting A spell checker makes sure that a valid English word is sent to the classification system We take this problem into account by introducing a spell checker as a preprocessing module by using the JAVA API of Jazzy spell checker for handling spelling mistakes An example of correction provided by the Spell Checker module is given below Input building INLINEFORM0 flor help Output building INLINEFORM0 floor help Please note that our current system performs compression normalization and spellchecking if the language used is English The classifier training and detection process are described below The first classifier model acts as a filter for the second stage of classification We use both SVM and NB to compare the results and choose SVM later for stage one classification model owing to a better Fscore The training is performed on tweets labeled with classes emergency and nonemergency based on unigrams as features We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 and perform cross validation using standard classification techniques We employ a multiclass Naive Bayes classifier as the second stage classification mechanism for categorizing tweets appropriately depending on the type of emergencies they indicate This multiclass classifier is trained on data manually labeled with classes We tokenize the training data using NgramTokenizer and then apply a filter to create word vectors of strings before training We use trigrams as features to build a model which later classifies tweets into appropriate categories in real time We then perform cross validation using standard techniques to calculate the results which are shown under the label Stage 2 in table TABREF20 We use Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude Our visualizer presents 
the user with a map and pinpoints the location with custom icons for earthquake cyclone fire accident etc Since we currently collect tweets with a location filter for the city of Mumbai we display its map location on the interface The possible occurrences of such incidents are displayed on the map as soon as our system is able to detect it We also display the same on an Android device using the WebView functionality available to developers thus solving the issue of portability Our system displays visualization of the various emergencies detected on both web browsers and mobile devices We evaluate our system using automated and manual evaluation techniques We perform 10fold cross validation to obtain the Fscores for our classification systems We use the following technique for dataset creation We test the system in realtime environments and tweet about fires at random locations in our city using test accounts Our system was able to detect such tweets and detect them with locations shown on the map We collect data by using the Twitter API for saved data available for public use For our experiments we collect 3200 tweets filtered by keywords like fire earthquake theft robbery drunk driving drunk driving accident etc Later we manually label tweets with emergencyand nonemergencylabels for classification as stage one Our dataset contains 1313 tweet with positive label emergencyand 1887 tweets with a negative label nonemergency We create another dataset with the positively labeled tweets and provide them with category labels like fire accident earthquake etc The results of 10fold crossvalidation performed for stage one are shown in table TABREF20 under the label Stage 1 In table TABREF20 For Stage 1 of classification Fscore obtained using SVM classifier is INLINEFORM0 as shown in row 2 column 2 We also provide the system with sample tweets in real time and assess its ability to detect the emergency and classify it accordingly The classification training for Stage 1 was performed using two traditional classification techniques SVM and NB SVM outperformed NB by around INLINEFORM1 and became the choice of classification technique for stage one Some false positives obtained during manual evaluation are I am sooooo so drunk right nowwwwwwww and fire in my office the boss is angry These occurrences show the need of more labeled gold data for our classifiers and some other features like PartofSpeech tags Named Entity recognition Bigrams Trigrams etc to perform better The results of 10fold crossvalidation performed for stage two classfication model are also shown in table TABREF20 under the label Stage 2 The training for stage two was also performed using both SVM and NB but NB outperformed SVM by around INLINEFORM0 to become a choice for stage two classification model We also perform attribute evaluation for the classification model and create a word cloud based on the output values shown in figure FIGREF24 It shows that our classifier model is trained on appropriate words which are very close to the emergency situations viz fire earthquake accident break Unigram representation here but possibly occurs in a bigram phrase with fire etc In figure FIGREF24 the word cloud represents the word respond as the most frequently occurring word as people need urgent help and quick response from the assistance teams Users interact with Civique through its Webbased user interface and Android based application interface The features underlying Civique are demonstrated through the following two show cases Show case 1 
Tweet Detection and Classification This showcase aims at detecting related tweets and classifying them into appropriate categories For this we have created a list of filter words which are used to filter tweets from the Twitter streaming API These set of words help us filter the tweets related to any incident We will tweet and users are able to see how our system captures such tweets and classifies them Users should be able to see the tweet emerge as an incident on the webinterface as shown in figure FIGREF26 and the on the android application as shown in figure FIGREF27 Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet When a user clicks the emerged spot the system should be able to display the sanitized version extracted spatiotemporal data from the tweet We test the system in a realtime environment and validate our experiments We also report the false positives generated during the process in section SECREF25 above Show case 2 User Notification and Contact Info Civique includes a set of local contacts for civic authorities who are to be who can be contacted in case of various emergencies Users can see how Civique detects an emergency and classifies it They can also watch how the system generates a notification on the web interface and the Android interface requesting them to contact the authorities for emergencies Users can change their preferences on the mobile device anytime and can also opt not to receive notifications Users should be able to contact the authorities online using the application but in case the online contact is not responsive or in case of a sudden loss of connectivity we provide the user with the offline contact information of the concerned civic authorities along with the notifications Civique is a system which detects urban emergencies like earthquakes cyclones fire break out accidents etc and visualizes them on both on a browsable web interface and an Android application We collect data from the popular microblogging site Twitter and use language processing modules to sanitize the input We use this data as input to train a two step classification system which indicates whether a tweet is related to an emergency or not and if it is then what category of emergency it belongs to We display such positively classified tweets along with their type and location on a Google map and notify our users to inform the concerned authorities and possibly evacuate the area if his location matches the affected area We believe such a system can help the disaster management machinery and government bodies like Fire department Police department etc to act swiftly thus minimizing the loss of life Twitter users use slang profanity misspellings and neologisms We use standard cleaning methods and combine NLP with Machine Learning ML to further our cause of tweet classification At the current stage we also have an Android application ready for our system which shows the improvised mobileviewable web interface In the future we aim to develop detection of emergency categories on the fly obscure emergencies like airplane hijacking should also be detected by our system We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems on the same location are the result of a single event or relate to multiple events </s>
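To make the character-compression step of the preprocessing pipeline concrete, the sketch below collapses runs of three or more repeated characters to two and then brute-forces each doubled window down to one or two characters until a dictionary word is found, as described above. The function name and the tiny dictionary are placeholders; the deployed system uses its own word list and modules.

```python
# Sketch of the tweet compression module: collapse runs of 3+ repeated
# characters to 2, then try all 2^n ways of keeping 1 or 2 characters per
# doubled window and return any combination that is a dictionary word.
# DICTIONARY is a placeholder for the system's real word list.
import itertools
import re

DICTIONARY = {"please", "help", "fire", "run", "drunk", "so"}

def compress(word: str) -> str:
    # Collapse every run of 3 or more identical characters to exactly 2.
    squeezed = re.sub(r"(.)\1{2,}", r"\1\1", word.lower())
    # Split into windows: doubled characters vs. single characters.
    windows = [m.group(0) for m in re.finditer(r"(.)\1?", squeezed)]
    doubled = [i for i, w in enumerate(windows) if len(w) == 2]
    # Brute force over 2^n possibilities for the n doubled windows.
    for keep in itertools.product((1, 2), repeat=len(doubled)):
        candidate = list(windows)
        for idx, k in zip(doubled, keep):
            candidate[idx] = windows[idx][:k]
        joined = "".join(candidate)
        if joined in DICTIONARY:
            return joined
    return squeezed  # fall back to the squeezed form if nothing matches

print(compress("pleeeeeeeaaaaaassseeee"))  # -> please
print(compress("fiiiiiireeee"))            # -> fire
```

The output of a step like this would then be passed on to the normalization and spell-checking modules before classification.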
<s>[INST] Can neural networks understand monotonicity reasoning? [/INST] Natural language inference NLI also known as recognizing textual entailment RTE has been proposed as a benchmark task for natural language understanding Given a premise P and a hypothesis H the task is to determine whether the premise semantically entails the hypothesis BIBREF0 A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing focusing on various types of lexical inferences BIBREF1 BIBREF2 BIBREF3 and logical inferences BIBREF4 BIBREF5 Concerning logical inferences monotonicity reasoning BIBREF6 BIBREF7 which is a type of reasoning based on word replacement requires the ability to capture the interaction between lexical and syntactic structures Consider examples in Introduction and Introduction All workers leavevmode color blue80blackdownarrow joined for a French dinner leavevmode color red80blackuparrow All workers joined for a dinner All new workers joined for a French dinner Not all new workers leavevmode color red80blackuparrow joined for a dinner Not all workers joined for a dinner A context is upward entailing shown by leavevmode color red80blackuparrow that allows an inference from Introduction to Introduction where French dinner is replaced by a more general concept dinner On the other hand a downward entailing context shown by leavevmode color blue80blackdownarrow allows an inference from Introduction to Introduction where workers is replaced by a more specific concept new workers Interestingly the direction of monotonicity can be reversed again by embedding yet another downward entailing context eg not in Introduction as witness the fact that Introduction entails Introduction To properly handle both directions of monotonicity NLI models must detect monotonicity operators eg all not and their arguments from the syntactic structure For previous datasets containing monotonicity inference problems FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manuallycurated datasets for testing a wide range of linguistic phenomena However monotonicity problems are limited to very small sizes FraCaS 37346 examples and GLUE 931650 examples The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning To tackle this issue we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications Section Dataset Compared with manual or automatic construction we can collect naturallyoccurring examples by crowdsourcing and welldesigned ones from linguistics publications To enable the evaluation of skills required for monotonicity reasoning we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning We measure the performance of stateoftheart NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning Section Results and Discussion The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences In addition we analyzed the performance of models trained with an automatically created monotonicity dataset HELP BIBREF11 The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set while they perform worse in the opposite direction This 
indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set and models might lack the ability to capture the structural relations between monotonicity operators and their arguments As an example of a monotonicity inference consider the example with the determiner every in Monotonicity here the premise P entails the hypothesis H P Every scriptsize mathsf NP person leavevmode color blue80blackdownarrow scriptsize mathsf VP bought a movie ticket leavevmode color red80blackuparrow H Every young person bought a ticket Every is downward entailing in the first argument mathsf NP and upward entailing in the second argument mathsf VP and thus the term person can be more specific by adding modifiers person sqsupseteq young person replacing it with its hyponym person sqsupseteq spectator or adding conjunction person sqsupseteq person and alien On the other hand the term buy a ticket can be more general by removing modifiers bought a movie ticket sqsubseteq bought a ticket replacing it with its hypernym bought a movie ticket sqsubseteq bought a show ticket or adding disjunction bought a movie ticket sqsubseteq bought or sold a movie ticket Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments There are various types of downward operators not limited to determiners see Table 2 As shown in Monotonicity if a propositional object is embedded in a downward monotonic context eg when the polarity of words over its scope can be reversed P When every scriptsize mathsf NP young person leavevmode color red80blackuparrow scriptsize mathsf VP bought a ticket leavevmode color blue80blackdownarrow that shop was open H When every scriptsize mathsf NP person scriptsize mathsf VP bought a movie ticket that shop was open Thus the polarity leavevmode color red80blackuparrow and leavevmode color blue80blackdownarrow where the replacement with more general specific phrases licenses entailment needs to be determined by the interaction of monotonicity properties and syntactic structures polarity of each constituent is calculated based on a monotonicity operator of functional expressions eg every when and their functionterm relations To create monotonicity inference problems we should satisfy three requirements a detect the monotonicity operators and their arguments b based on the syntactic structure induce the polarity of the argument positions and c replace the phrase in the argument position with a more general or specific phrase in natural and various ways eg by using lexical knowledge or logical connectives For a and b we first conduct polarity computation on a syntactic structure for each sentence and then select premises involving upwarddownward expressions For c we use crowdsourcing to narrow or broaden the arguments The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics We must make tasks simple for workers to comprehend and provide sound judgements Moreover recent studies BIBREF12 BIBREF3 BIBREF13 point out that previous crowdsourced datasets such as SNLI BIBREF14 and MultiNLI BIBREF10 include hidden biases As these previous datasets are motivated by approximated entailments workers are asked to freely write hypotheses given a premise which does not 
strictly restrict them to creating logically complex inferences Taking these concerns into consideration we designed twostep tasks to be performed via crowdsourcing for creating a monotonicity test set i a hypothesis creation task and ii a validation task The task i is to create a hypothesis by making some polarized part of an original sentence more specific Instead of writing a complete sentence from scratch workers are asked to rewrite only a relatively short sentence By restricting workers to rewrite only a polarized part we can effectively collect monotonicity inference examples The task ii is to annotate an entailment label for the premisehypothesis pair generated in i Figure 1 summarizes the overview of our humanoriented dataset creation We used the crowdsourcing platform Figure Eight for both tasks As a resource we use declarative sentences with more than five tokens from the Parallel Meaning Bank PMB BIBREF15 The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar CCG BIBREF16 BIBREF16 format which is suitable for our purpose To get a whole CCG derivation tree we parse each sentence by the stateoftheart CCG parser depccg BIBREF17 Then we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase them in a natural way eg every one no time We removed examples involving such expressions To collect more downward inference examples we select examples involving determiners in Table 1 and downward operators in Table 2 As a result we selected 1485 examples involving expressions having arguments with upward monotonicity and 1982 examples involving expressions having arguments with downward monotonicity We present crowdworkers with a sentence whose polarized part is underlined and ask them to replace the underlined part with more specific phrases in three different ways In the instructions we showed examples rephrased in various ways by adding modifiers by adding conjunction phrases and by replacing a word with its hyponyms Workers were paid US005 for each set of substitutions and each set was assigned to three workers To remove lowquality examples we set the minimum time it should take to complete each set to 200 seconds The entry in our task was restricted to workers from native speaking English countries 128 workers contributed to the task and we created 15339 hypotheses 7179 upward examples and 8160 downward examples The gold label of each premisehypothesis pair created in the previous task is automatically determined by monotonicity calculus That is a downward inference pair is labeled as entailment while an upward inference pair is labeled as nonentailment However workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions eg original Tom doesnt live in Boston rephrased Tom doesnt live in yes making it difficult to judge their entailment relations Thus we performed an annotation task to ensure accurate labeling of gold labels We asked workers about the entailment relation of each premisehypothesis pair as well as how natural it is Worker comprehension of an entailment relation directly affects the quality of inference problems To avoid worker misunderstandings we showed workers the following definitions of labels 
and five examples for each label entailment the case where the hypothesis is true under any situation that the premise describes nonentailment the case where the hypothesis is not always true under a situation that the premise describes unnatural the case where either the premise andor the hypothesis is ungrammatical or does not make sense Workers were paid US004 for each question and each question was assigned to three workers To collect highquality annotation results we imposed ten test questions on each worker and removed workers who gave more than three wrong answers We also set the minimum time it should take to complete each question to 200 seconds 1237 workers contributed to this task and we annotated gold labels of 15339 premisehypothesis pairs Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus This table shows that there exist inference pairs whose labels are difficult even for humans to determine there are 3354 premisehypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers We selected these naturalistic monotonicity inference pairs for the candidates of the final test set To make the distribution of gold labels symmetric we checked these pairs to determine if we can swap the premise and the hypothesis reverse their gold labels and create another monotonicity inference pair In some cases shown below the gold label cannot be reversed if we swap the premise and the hypothesis In UID15 child and kid are not hyponyms but synonyms and the premise P and the hypothesis H are paraphrases P Tom is no longer a child H Tom is no longer a kid These cases are not strict downward inference problems in the sense that a phrase is not replaced by its hyponymhypernym Consider the example UID16 P The moon has no atmosphere H The moon has no atmosphere and the gravity force is too low The hypothesis H was created by asking workers to make atmosphere in the premise P more specific However the additional phrase and the gravity force is too low does not form constituents with atmosphere Thus such examples are not strict downward monotone inferences In such cases as a and b we do not swap the premise and the hypothesis In the end we collected 4068 examples from crowdsourced datasets We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain welldesigned inference problems which might be challenging problems for NLI models We collected 1184 examples from 11 linguistics publications BIBREF19 BIBREF20 BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 BIBREF26 BIBREF27 BIBREF28 BIBREF29 Regarding previous manuallycurated datasets we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset and 37 singlepremise problems from FraCaS Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning but we found that each dataset has different classification criteria Thus following GLUE we reclassified problems into three types of monotone reasoning upward downward and nonmonotone by checking if they include i the target monotonicity operator in both the premise and the hypothesis and ii the phrase replacement in its argument position In the GLUE diagnostic dataset there are several problems whose gold labels are contradiction We regard them as nonentailment in 
that the premise does not semantically entail the hypothesis We merged the humanoriented dataset created via crowdsourcing and the linguisticsoriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset MED Table 4 shows some examples from the MED dataset We can see that our dataset contains various phrase replacements eg conjunction relative clauses and comparatives Table 5 reports the statistics of the MED dataset including 5382 premisehypothesis pairs 1820 upward examples 3270 downward examples and 292 nonmonotone examples Regarding nonmonotone problems gold labels are always nonentailment whether a hypothesis is more specific or general than its premise and thus almost all nonmonotone problems are labeled as nonentailment The size of the word vocabulary in the MED dataset is 4023 and overlap ratios of vocabulary with previous standard NLI datasets is 95 with MultiNLI and 90 with SNLI We assigned a set of annotation tags for linguistic phenomena to each example in the test set These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning We defined 6 tags see Table 4 for examples lexical knowledge 2073 examples inference problems that require lexical relations ie hypernyms hyponyms or synonyms reverse 240 examples inference problems where a propositional object is embedded in a downward environment more than once conjunction 283 examples inference problems that include the phrase replacement by adding conjunction and to the hypothesis disjunction 254 examples inference problems that include the phrase replacement by adding disjunction or to the hypothesis conditionals 149 examples inference problems that include conditionals eg if when unless in the hypothesis negative polarity items NPIs 338 examples inference problems that include NPIs eg any ever at all anything anyone anymore anyhow anywhere in the hypothesis To test the difficulty of our dataset we checked the majority class label and the accuracies of five stateoftheart NLI models adopting different approaches BiMPM Bilateral MultiPerspective Matching Model BIBREF31 BIBREF31 ESIM Enhanced Sequential Inference Model BIBREF32 BIBREF32 Decomposable Attention Model BIBREF33 KIM Knowledgebased Inference Model BIBREF34 BIBREF34 and BERT Bidirectional Encoder Representations from Transformers model BIBREF35 BIBREF35 Regarding BERT we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI For other models we checked the performance trained with SNLI In agreement with our dataset we regarded the prediction label contradiction as nonentailment Table 6 shows that the accuracies of all models were better on upward inferences in accordance with the reported results of the GLUE leaderboard The overall accuracy of each model was low In particular all models underperformed the majority baseline on downward inferences despite some models having rich lexical knowledge from a knowledge base KIM or pretraining BERT This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge In addition it is interesting to see that if a model performed better on upward inferences it performed worse on downward inferences We will investigate these results in detail below To explore whether the performance of models on monotonicity reasoning depends on the training set or the model themselves we conducted further analysis 
performed by data augmentation with the automatically generated monotonicity dataset HELP BIBREF11 HELP contains 36K monotonicity inference examples 7784 upward examples 21192 downward examples and 1105 nonmonotone examples The size of the HELP word vocabulary is 15K and the overlap ratio of vocabulary between HELP and MED is 152 We trained BERT on MultiNLI only and on MultiNLI augmented with HELP and compared their performance Following BIBREF3 we also checked the performance of a hypothesisonly model trained with each training set to test whether our test set contains undesired biases Table 7 shows that the performance of BERT with the hypothesisonly training set dropped around 1040 as compared with the one with the premisehypothesis training set even if we use the data augmentation technique This indicates that the MED test set does not allow models to predict from hypotheses alone Data augmentation by HELP improved the overall accuracy to 716 but there is still room for improvement In addition while adding HELP increased the accuracy on downward inferences it slightly decreased accuracy on upward inferences The size of downward examples in HELP is much larger than that of upward examples This might improve accuracy on downward inferences but might decrease accuracy on upward inferences To investigate the relationship between accuracy on upward inferences and downward inferences we checked the performance throughout training BERT with only upward and downward inference examples in HELP Figure 2 i ii These two figures show that as the size of the upward training set increased BERT performed better on upward inferences but worse on downward inferences and vice versa Figure 2 iii shows performance on a different ratio of upward and downward inference training sets When downward inference examples constitute more than half of the training set accuracies on upward and downward inferences were reversed As the ratio of downward inferences increased BERT performed much worse on upward inferences This indicates that a training set in one direction upward or downward entailing of monotonicity might be harmful to models when learning the opposite direction of monotonicity Previous work using HELP BIBREF11 reported that the BERT trained with MultiNLI and HELP containing both upward and downward inferences improved accuracy on both directions of monotonicity MultiNLI rarely comes from downward inferences see Section Discussion and its size is large enough to be immune to the sideeffects of downward inference examples in HELP This indicates that MultiNLI might act as a buffer against sideeffects of the monotonicitydriven data augmentation technique Table 8 shows the evaluation results by genre This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems even if we add HELP to training sets As shown in Figure 2 the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing This result also indicates the difficulty of problems from linguistics publications Regarding nonmonotone problems collected via crowdsourcing there are very few nonmonotone problems so accuracy is 100 Adding nonmonotone problems to our test set is left for future work Table 9 shows the evaluation results by type of linguistic phenomenon While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences accuracy on problems involving conjunction and 
disjunction was improved on only one direction In addition it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction Downward inference examples involving disjunction are similar to upward inference ones that is inferences from a sentence to a shorter sentence are valid eg Not many campers have had a sunburn or caught a cold Rightarrow Not many campers have caught a cold Thus these results were also caused by addition of downward inference examples Also accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope Regarding lexical knowledge the data augmentation technique improved the performance much better on downward inferences which do not require lexical knowledge However among the 394 problems for which all models provided wrong answers 244 problems are nonlexical inference problems This indicates that some nonlexical inference problems are more difficult than lexical inference problems though accuracy on nonlexical inference problems was better than that on lexical inference problems One of our findings is that there is a type of downward inferences to which every model fails to provide correct answers One such example is concerned with the contrast between few and a few Among 394 problems for which all models provided wrong answers 148 downward inference problems were problems involving the downward monotonicity operator few such as in the following example P Few of the books had typical or marginal readers H Few of the books had some typical readers We transformed these downward inference problems to upward inference problems in two ways i by replacing the downward operator few with the upward operator a few and ii by removing the downward operator few We tested BERT using these transformed test sets The results showed that BERT predicted the same answers for the transformed test sets This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few The results of crowdsourcing tasks in Section 313 showed that some downward inferences can naturally be performed in human reasoning However we also found that the MultiNLI training set BIBREF10 which is one of the dataset created from naturallyoccurring texts contains only 77 downward inference problems including the following one P No racin on the Range H No horse racing is allowed on the Range One possible reason why there are few downward inferences is that certain pragmatic factors can block people to draw a downward inference For instance in the case of the inference problem in Discussion unless the added disjunct in H ie a small cat with green eyes is salient in the context it would be difficult to draw the conclusion H from the premise P P I saw a dog H I saw a dog or a small cat with green eyes Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts We introduced a large monotonicity entailment dataset called MED To illustrate the usefulness of MED we tested stateoftheart NLI models and found that performance on the new test set was 
substantially worse for all stateoftheart NLI models In addition the accuracy on downward inferences was inversely proportional to the one on upward inferences An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way This work was partially supported by JST AIP PRISM Grant Number JPMJCR18Y1 Japan and JSPS KAKENHI Grant Number JP18H03284 Japan We thank our three anonymous reviewers for helpful suggestions We are also grateful to Koki Washio Masashi Yoshikawa and Thomas McLachlan for helpful discussion </s>
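The direction-wise evaluation described above can be made concrete with a small scoring sketch: model predictions of contradiction are collapsed into non-entailment and accuracy is computed separately for upward and downward examples. The field names and the predict callable are assumptions for illustration, not the released evaluation code; the two toy examples are taken from the inference patterns discussed in the paper.

```python
# Sketch of direction-wise accuracy on MED-style monotonicity data.
# "contradiction" predictions are mapped to "non-entailment" before scoring.
from collections import defaultdict

def collapse(label: str) -> str:
    return "non-entailment" if label in ("contradiction", "non-entailment") else label

def accuracy_by_direction(examples, predict):
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = collapse(predict(ex["premise"], ex["hypothesis"]))
        gold = collapse(ex["gold_label"])
        total[ex["direction"]] += 1
        correct[ex["direction"]] += int(pred == gold)
    return {d: correct[d] / total[d] for d in total}

# Toy usage with a trivial baseline that always predicts entailment.
toy = [
    {"premise": "All workers joined for a French dinner",
     "hypothesis": "All workers joined for a dinner",
     "gold_label": "entailment", "direction": "upward"},
    {"premise": "Not many campers have had a sunburn or caught a cold",
     "hypothesis": "Not many campers have caught a cold",
     "gold_label": "entailment", "direction": "downward"},
]
print(accuracy_by_direction(toy, lambda p, h: "entailment"))
```

Grouping the same loop by the linguistic annotation tags instead of the direction field gives the per-phenomenon breakdown reported in the tables.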
<s>[INST] Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators. [/INST] With the growing demand for humancomputerrobot interaction systems detecting the emotional state of the user can heavily benefit a conversational agent to respond at an appropriate emotional level Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation emotionbased texttospeech personalisation etc Human emotional states can be expressed verbally and nonverbally BIBREF0 BIBREF1 however while building an interactive dialogue system the interface needs dialogue acts A typical dialogue system consists of a language understanding module which requires to determine the meaning of and intention in the human input utterances BIBREF2 BIBREF3 Also in discourse or conversational analysis dialogue acts are the main linguistic features to consider BIBREF4 A dialogue act provides an intention and performative function in an utterance of the dialogue For example it can infer a users intention by distinguishing Question Answer Request AgreeReject etc and performative functions such as Acknowledgement Conversationalopening or closing Thanking etc The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5 The research in emotion recognition is growing very rapidly and many datasets are available such as textbased speech or visionlevel and multimodal emotion data Emotion expression recognition is a challenging task and hence multimodality is crucial BIBREF0 However few conversational multimodal emotion recognition datasets are available for example IEMOCAP BIBREF6 SEMAINE BIBREF7 MELD BIBREF8 They are multimodal dyadic conversational datasets containing audiovisual and conversational transcripts Every utterance in these datasets is labeled with an emotion label In this work we apply an automated neural ensemble annotation process for dialogue act labeling Several neural models are trained with the Switchboard Dialogue Act SwDA Corpus BIBREF9 BIBREF10 and used for inferring dialogue acts on the emotion datasets We ensemble five model output labels by checking majority occurrences most of the model labels are the same and ranking confidence values of the models We have annotated two potential multimodal conversation datasets for emotion recognition IEMOCAP Interactive Emotional dyadic MOtion CAPture database BIBREF6 and MELD Multimodal EmotionLines Dataset BIBREF8 Figure FIGREF2 shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset We confirmed the reliability of annotations with interannotator metrics We analysed the cooccurrences of the dialogue act and emotion labels and discovered a key relationship between them certain dialogue acts of the utterances show significant and useful association with respective emotional states For example AcceptAgree dialogue act often occurs with the Joy emotion while Reject with Anger Acknowledgements with Surprise Thanking with Joy and Apology with Sadness etc The detailed analysis of the emotional dialogue acts EDAs and annotated datasets are being made available at the SECURE EU Project website There are two emotion taxonomies 1 discrete emotion categories DEC and 2 finedgrained dimensional basis of emotion states DBE The DECs are Joy Sadness Fear Surprise Disgust Anger and Neutral identified by Ekman et al ekman1987universalemos The DBE of the emotion is usually 
elicited from two or three dimensions BIBREF1 BIBREF11 BIBREF12 A twodimensional model is commonly used with Valence and Arousal also called activation and in the threedimensional model the third dimension is Dominance IEMOCAP is annotated with all DECs and two additional emotion classes Frustration and Excited IEMOCAP is also annotated with three DBE that includes Valance Arousal and Dominance BIBREF6 MELD BIBREF8 which is an evolved version of the Emotionlines dataset developed by BIBREF13 is annotated with exactly 7 DECs and sentiments positive negative and neutral There have been many taxonomies for dialogue acts speech acts BIBREF14 refer to the utterance not only to present information but to the action at is performed Speech acts were later modified into five classes Assertive Directive Commissive Expressive Declarative BIBREF15 There are many such standard taxonomies and schemes to annotate conversational data and most of them follow the discourse compositionality These schemes have proven their importance for discourse or conversational analysis BIBREF16 During the increased development of dialogue systems and discourse analysis the standard taxonomy was introduced in recent decades called Dialogue Act Markup in Several Layers DAMSL tag set According to DAMSL each DA has a forwardlooking function such as Statement Inforequest Thanking and a backwardslooking function such as Accept Reject Answer BIBREF17 The DAMSL annotation includes not only the utterancelevel but also segmentedutterance labelling However in the emotion datasets the utterances are not segmented as we can see in Figure FIGREF2 first or fourth utterances are not segmented as two separate The fourth utterance it could be segmented to have two dialogue act labels for example a statement sd and a question qy That provides very finegrained DA classes and follows the concept of discourse compositionality DAMSL distinguishes whquestion qw yesno question qy openended qo and orquestion qr classes not just because these questions are syntactically distinct but also because they have different forward functions BIBREF18 For example yesno question is more likely to get a yes answer than a whquestion qw This also gives an intuition that the answers follow the syntactic formulation of question providing a context For example qy is used for a question that from a discourse perspective expects a Yes ny or No nn answer We have investigated the annotation method and trained our neural models with the Switchboard Dialogue Act SwDA Corpus BIBREF9 BIBREF10 SwDA Corpus is annotated with the DAMSL tag set and it is been used for reporting and benchmarking stateoftheart results in dialogue act recognition tasks BIBREF19 BIBREF20 BIBREF21 which makes it ideal for our use case The Switchboard DAMSL Coders Manual can be followed for knowing more about the dialogue act labels We adopted the neural architectures based on Bothe et al bothe2018discourse where two variants are noncontext model classifying at utterance level and context model recognizing the dialogue act of the current utterance given a few preceding utterances From conversational analysis using dialogue acts in Bothe et al bothe2018interspeech we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance Hence we adapt this setting for the context model and create a pool of annotators using recurrent neural networks RNNs RNNs can model the contextual information in the sequence of words of an utterance and in the 
sequence of utterances of a dialogue Each word in an utterance is represented with a word embedding vector of dimension 1024 We use the word embedding vectors from pretrained ELMo Embeddings from Language Models embeddings BIBREF22 We have a pool of five neural annotators as shown in Figure FIGREF6 Our online tool called DiscourseWizard is available to practice automated dialogue act labeling In this tool we use the same neural architectures but modeltrained embeddings while in this work we use pretrained ELMo embeddings as they are better performant but computationally and sizewise expensive to be hosted in the online tool The annotators are Uttlevel 1 Dialogue Act Neural Annotator DANA is an utterancelevel classifier that uses word embeddings w as an input to an RNN layer attention mechanism and computes the probability of dialogue acts da using the softmax function see in Figure FIGREF10 dotted line uttl1 This model achieved 7513 accuracy on the SwDA corpus test set Context 1 DANA is a context model that uses 2 preceding utterances while recognizing the dialogue act of the current utterance see context model with con1 line in Figure FIGREF10 It uses a hierarchical RNN with the first RNN layer to encode the utterance from word embeddings w and the second RNN layer is provided with three utterances u current and two preceding composed from the first layer followed by the attention mechanism a where sum n0n atn 1 Finally the softmax function is used to compute the probability distribution This model achieved 7755 accuracy on the SwDA corpus test set Uttlevel 2 DANA is another utterancelevel classifier which takes an average of the word embeddings in the input utterance and uses a feedforward neural network hidden layer see uttl2 line in Figure FIGREF10 where mean passed to softmax directly Similar to the previous model it computes the probability of dialogue acts using the softmax function This model achieved 7259 accuracy on the test set of the SwDA corpus Context 2 DANA is another context model that uses three utterances similar to the Context 1 DANA model but the utterances are composed as the mean of the word embeddings over each utterance similar to the Uttlevel 2 model mean passed to context model in Figure FIGREF10 with con2 line Hence the Context 2 DANA model is composed of one RNN layer with three input vectors finally topped with the softmax function for computing the probability distribution of the dialogue acts This model achieved 7597 accuracy on the test set of the SwDA corpus Context 3 DANA is a context model that uses three utterances similar to the previous models but the utterance representations combine both features from the Context 1 and Context 2 models con1 and con2 together in Figure FIGREF10 Hence the Context 3 DANA model combines features of almost all the previous four models to provide the recognition of the dialogue acts This model achieves 7591 accuracy on the SwDA corpus test set First preference is given to the labels that are perfectly matching in all the neural annotators In Table TABREF11 we can see that both datasets have about 40 of exactly matching labels over all models AM Then priority is given to the contextbased models to check if the label in all context models is matching perfectly In case two out of three context models are correct then it is being checked if that label is also produced by at least one of the noncontext models Then we allow labels to rely on these at least two context models As a result about 47 of the labels are taken based on 
the context models CM When we see that none of the context models is producing the same results then we rank the labels with their respective confidence values produced as a probability distribution using the softmax function The labels are sorted in descending order according to confidence values Then we check if the first three case when one context model and both noncontext models produce the same label or at least two labels are matching then we allow to pick that one There are about 3 in IEMOCAP and 5 in MELD BM Finally when none the above conditions are fulfilled we leave out the label with an unknown category This unknown category of the dialogue act is labeled with xx in the final annotations and they are about 7 in IEMOCAP and 11 in MELD NM The statistics of the EDAs is reported in Table TABREF13 for both datasets Total utterances in MELD includes training validation and test datasets The pool of neural annotators provides a fair range of annotations and we checked the reliability with the following metrics BIBREF23 Krippendorffs Alpha alpha is a reliability coefficient developed to measure the agreement among observers annotators and raters and is often used in emotion annotation BIBREF24 We apply it on the five neural annotators at the nominal level of measurement of dialogue act categories alpha is computed as follows where Do is the observed disagreement and De is the disagreement that is expected by chance alpha 1 means all annotators produce the same label while alpha 0 would mean none agreed on any label As we can see in Table TABREF20 both datasets IEMOCAP and MELD produce significant interneural annotator agreement 0553 and 0494 respectively A very popular interannotator metric is Fleiss Kappa score also reported in Table TABREF20 which determines consistency in the ratings The kappa k can be defined as where the denominator 1 barPe elicits the degree of agreement that is attainable above chance and the numerator barP barPe provides the degree of the agreement actually achieved above chance Hence k 1 if the raters agree completely and k 0 when none reach any agreement We got 0556 and 0502 for IEOMOCAP and MELD respectively with our five neural annotators This indicated that the annotators are labeling the dialogue acts reliably and consistently We also report the Spearmans correlation between contextbased models Context1 and Context2 and it shows a strong correlation between them Table TABREF20 While using the labels we checked the absolute match between all contextbased models and hence their strong correlation indicates their robustness We can see emotional dialogue act cooccurrences with respect to emotion labels in Figure FIGREF12 for both datasets There are sets of three bars per dialogue act in the figure the first and second bar represent emotion labels of IEMOCAP IE and MELD ME and the third bar is for MELD sentiment MS labels MELD emotion and sentiment statistics are interesting as they are strongly correlated to each other The bars contain the normalized number of utterances for emotion labels with respect to the total number of utterances for that particular dialogue act category The statements withoutopinion sd and withopinion sv contain utterances with almost all emotions Many neutral utterances are spanning over all the dialogue acts Quotation q dialogue acts on the other hand are mostly used with Anger and Frustration in case of IEMOCAP however some utterances with Joy or Sadness as well see examples in Table TABREF21 Action Directive ad dialogue act 
utterances which are usually orders frequently occur with Anger or Frustration although many with Happy emotion in case of the MELD dataset Acknowledgements b are mostly with positive or neutral however Appreciation ba and Rhetorical bh backchannels often occur with a greater number in Surprise Joy andor with Excited in case of IEMOCAP Questions qh qw qy and qyd are mostly asked with emotions Surprise Excited Frustration or Disgust in case of MELD and many are neutral Noanswers nn are mostly Sad or Frustrated as compared to yesanswers ny Forwardfunctions such as Apology fa are mostly with Sadness whereas Thanking ft and Conventionalclosing or opening fc or fp are usually with Joy or Excited We also noticed that both datasets exhibit a similar relation between dialogue act and emotion It is important to notice that the dialogue act annotation is based on the given transcripts however the emotional expressions are better perceived with audio or video BIBREF6 We report some examples where we mark the utterances with an determined label xx in the last row of Table TABREF21 They are skipped from the final annotation because of not fulfilling the conditions explained in Section SECREF14 It is also interesting to see the previous utterance dialogue acts PDA of those skipped utterances and the sequence of the labels can be followed from Figure FIGREF6 uttl1 uttl2 con1 con2 con3 In the first example the previous utterance was b and three DANA models produced labels of the current utterance as b but it is skipped because the confidence values were not sufficient to bring it as a final label The second utterance can be challenging even for humans to perceive with any of the dialogue acts However the third and fourth utterances are followed by a yesno question qy and hence we can see in the third example that context models tried their best to at least perceive it as an answer ng ny nn The last utterance Im so sorry has been completely disagreed by all the five annotators Similar apology phrases are mostly found with Sadness emotion labels and the correct dialogue act is Apology fa However they are placed either in the sd or in ba dialogue act category We believe that with human annotators help those labels of the utterances can be corrected with very limited efforts In this work we presented a method to extend conversational multimodal emotion datasets with dialogue act labels We successfully show this on two wellestablished emotion datasets IEMOCAP and MELD which we labeled with dialogue acts and made publicly available for further study and research As a first insight we found that many of the dialogue acts and emotion labels follow certain relations These relations can be useful to learn about the emotional behaviours with dialogue acts to build a natural dialogue system and for deeper conversational analysis The conversational agent might benefit in generating an appropriate response when considering both emotional states and dialogue acts in the utterances In future work we foresee the human in the loop for the annotation process along with a pool of automated neural annotators Robust annotations can be achieved with very little human effort and supervision for example observing and correcting the final labels produced by ensemble output labels from the neural annotators The humanannotator might also help to achieve segmentedutterance labelling of the dialogue acts We also plan to use these datasets for conversational analysis to infer interactive behaviours of the emotional states with respect 
to dialogue acts. In our recent work, where we used dialogue acts to build a dialogue system for a social robot, we found this study and dataset very helpful; for example, we can extend our robotic conversational system to consider emotion as an additional linguistic feature in order to produce more natural interaction. We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 642667 (SECURE).
<s>[INST] Synchronising audio and ultrasound by learning cross-modal embeddings [/INST] Ultrasound tongue imaging UTI is a noninvasive way of observing the vocal tract during speech production BIBREF0 Instrumental speech therapy relies on capturing ultrasound videos of the patients tongue simultaneously with their speech audio in order to provide a diagnosis design treatments and measure therapy progress BIBREF1 The two modalities must be correctly synchronised with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags based on synchronisation standards for broadcast audiovisual signals BIBREF2 Errors beyond this range can render the data unusable indeed synchronisation errors do occur resulting in significant wasted effort if not corrected No mechanism currently exists to automatically correct these errors and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 it is time consuming and tedious In this work we exploit the correlation between the two modalities to synchronise them We utilise a twostream neural network architecture for the task BIBREF4 using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive correctly synchronised or negative randomly desynchronised a process known as selfsupervision BIBREF5 We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set and in particular those exhibiting natural variation in speech Section SECREF2 reviews existing approaches for audiovisual synchronisation and describes the challenges specifically associated with UTI data compared with lip videos for which automatic synchronisation has been previously attempted Section SECREF3 describes our approach Section SECREF4 describes the data we use including data preprocessing and positive and negative sample creation using a selfsupervision strategy Section SECREF5 describes our experiments followed by an analysis of the results We conclude with a summary and future directions in Section SECREF6 Ultrasound and audio are recorded using separate components and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time Specifically for every ultrasound frame recorded the ultrasound beamforming unit releases a pulse signal which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 BIBREF7 Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal which is already timealigned with the speech audio BIBREF8 Hardware synchronisation can fail for a number of reasons The synchroniser is an external device which needs to be correctly connected and operated by therapists Incorrect use can lead to missing the pulse signal which would cause synchronisation to fail for entire therapy sessions BIBREF9 Furthermore lowquality sound cards report an approximate rather than the exact sample rate which leads to errors in the offset calculation BIBREF8 There is currently no recovery mechanism for when synchronisation fails and to the best of our knowledge there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio There is however some prior work on synchronising lip movement with audio which we describe next Speech audio is generated by articulatory movement and is 
therefore fundamentally correlated with other manifestations of this movement such as lip or tongue videos BIBREF10 An alternative to the hardware approach is to exploit this correlation to find the offset Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 BIBREF12 BIBREF13 More recently neural networks which learn features directly from input have been employed for the task SyncNet BIBREF4 uses a twostream neural network and selfsupervision to learn crossmodal embeddings which are then used to synchronise audio with lip videos It achieves near perfect accuracy INLINEFORM0 99 INLINEFORM1 using manual evaluation where lipsync error is not detectable to a human It has since been extended to use different sample creation methods for selfsupervision BIBREF5 BIBREF14 and different training objectives BIBREF14 We adopt the original approach BIBREF4 as it is both simpler and significantly less expensive to train than the more recent variants Videos of lip movement can be obtained from various sources including TV films and YouTube and are often cropped to include only the lips BIBREF4 UTI data on the other hand is recorded in clinics by trained therapists BIBREF15 An ultrasound probe placed under the chin of the patient captures the midsaggital view of their oral cavity as they speak UTI data consists of sequences of 2D matrices of raw ultrasound reflection data which can be interpreted as greyscale images BIBREF15 There are several challenges specifically associated with UTI data compared with lip videos which can potentially lower the performance of models relative to results reported on lip video data These include Poor image quality Ultrasound data is noisy containing arbitrary highcontrast edges speckle noise artefacts and interruptions to the tongues surface BIBREF0 BIBREF16 BIBREF17 The oral cavity is not entirely visible missing the lips the palate and the pharyngeal wall and visually interpreting the data requires specialised training In contrast videos of lip movement are of much higher quality and suffer from none of these issues Probe placement variation Surfaces that are orthogonal to the ultrasound beam image better than those at an angle Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 BIBREF18 BIBREF17 In contrast while the scaling and rotations of lip videos lead to variation they do not lead to a degradation in image quality Interspeaker variation Age and physiology affect the quality of ultrasound data and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 BIBREF17 Dryness in the mouth as a result of nervousness during speech therapy leads to poor imaging While interspeaker variation is expected in lip videos again the variation does not lead to quality degradation Limited amount of data Existing UTI datasets are considerably smaller than lip movement datasets Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 BIBREF14 which together contain 1 million utterances from 7363 identities BIBREF19 BIBREF20 In contrast the UltraSuite repository used in this work contains 13815 spoken utterances from 86 identities Uncorrelated segments Speech therapy data contains interactions between the therapist and patient The audio therefore contains speech from both speakers while the ultrasound captures only the patients tongue BIBREF15 As a result parts of the 
recordings will consist of completely uncorrelated audio and ultrasound This issue is similar to that of dubbed voices in lip videos BIBREF4 but is more prevalent in speech therapy data We adopt the approach in BIBREF4 modifying it to synchronise audio with UTI data Our model UltraSync consists of two streams the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio Both inputs are highdimensional and are of different sizes The objective is to learn a mapping from the inputs to a pair of lowdimensional vectors of the same length such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 BIBREF22 This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters Figure FIGREF1 illustrates the main architecture The visual data INLINEFORM0 ultrasound and audio data INLINEFORM1 MFCC which have different shapes are mapped to low dimensional embeddings INLINEFORM2 visual and INLINEFORM3 audio of the same size DISPLAYFORM0 The model is trained using a contrastive loss function BIBREF21 BIBREF22 INLINEFORM0 which minimises the Euclidean distance INLINEFORM1 between INLINEFORM2 and INLINEFORM3 for positive pairs INLINEFORM4 and maximises it for negative pairs INLINEFORM5 for a number of training samples INLINEFORM6 DISPLAYFORM0 Given a pair of ultrasound and audio segments we can calculate the distance between them using our model To predict the synchronisation offset for an utterance we consider a discretised set of candidate offsets calculate the average distance for each across utterance segments and select the one with the minimum average distance The candidate set is independent of the model and is chosen based on task knowledge Section SECREF5 For our experiments we select a dataset whose utterances have been correctly synchronised at recording time This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets We use UltraSuite a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 We used all three datasets from the repository UXTD recorded with typically developing children and UXSSD and UPX recorded with children with speech sound disorders In total the dataset contains 13815 spoken utterances from 86 speakers corresponding to 359 hours of recordings The utterances have been categorised by the type of task the child was given and are labelled as Words A Nonwords B Sentence C Articulatory D Nonspeech E or Conversations F See BIBREF15 for details Each utterance consists of 3 files audio ultrasound and parameter The audio file is a RIFF wave file sampled at 2205 KHz containing the speech of the child and therapist The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the childs tongue A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line Each ultrasound frame consists of 63 scan lines of 412 data points each and is sampled at a rate of INLINEFORM0 1215 fps Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos The parameter file contains the synchronisation offset value in milliseconds determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset First we exclude utterances of type 
Nonspeech E from our training data and statistics These are coughs recorded to obtain additional tongue shapes or swallowing motions recorded to capture a trace of the hard palate Both of these rarely contain audible content and are therefore not relevant to our task Next we apply the offset which should be positive if the audio leads and negative if the audio lags In this dataset the offset is always positive We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration To process the ultrasound more efficiently we first reduce the frame rate from INLINEFORM0 1215 fps to INLINEFORM1 243 fps by retaining 1 out of every 5 frames We then downsample by a factor of 1 3 shrinking the frame size from 63x412 to 63x138 using max pixel value This retains the number of ultrasound vectors 63 but reduces the number of pixels per vector from 412 to 138 The final prepreprocessing step is to remove empty regions UltraSuite was previously anonymised by zeroing segments of audio which contained personally identifiable information As a preprocessing step we remove the zero regions from audio and corresponding ultrasound We additionally experimented with removing regions of silence using voice activity detection but obtained a higher performance by retaining them To train our model we need positive and negative training pairs The model ingests short clips from each modality of INLINEFORM0 200ms long calculated as INLINEFORM1 where INLINEFORM2 is the time window INLINEFORM3 is the number of ultrasound frames per window 5 in our case and INLINEFORM4 is the ultrasound frame rate of the utterance INLINEFORM5 243 fps For each recording we split the ultrasound into nonoverlapping windows of 5 frames each We extract MFCC features 13 cepstral coefficients from the audio using a window length of INLINEFORM6 20ms calculated as INLINEFORM7 and a step size of INLINEFORM8 10ms calculated as INLINEFORM9 This give us the input sizes shown in Figure FIGREF1 Positive samples are pairs of ultrasound windows and the corresponding MFCC frames To create negative samples we randomise pairings of ultrasound windows to MFCC frames within the same utterance generating as many negative as positive samples to achieve a balanced dataset We obtain 243764 samples for UXTD 135hrs 333526 for UXSSD 185hrs and 572078 for UPX 318 hrs or a total 1149368 samples 639hrs which we divide into training validation and test sets We aim to test whether our model generalises to data from new speakers and to data from new sessions recorded with known speakers To simulate this we select a group of speakers from each dataset and hold out all of their data either for validation or for testing Additionally we hold out one entire session from each of the remaining speakers and use the rest of their data for training We aim to reserve approximately 80 of the created samples for training 10 for validation and 10 for testing and select speakers and sessions on this basis Each speaker in UXTD recorded 1 session but sessions are of different durations We reserve 45 speakers for training 5 for validation and 8 for testing UXSSD and UPX contain fewer speakers but each recorded multiple sessions We hold out 1 speaker for validation and 1 for testing from each of the two datasets We also hold out a session from the first half of the remaining speakers for validation and a session from the second half of the remaining speakers for testing This selection process results in 909858 pooled samples for training 505hrs 128414 for 
validation 71hrs and 111096 for testing 62hrs From the training set we create shuffled batches which are balanced in the number of positive and negative samples We select the hyperparameters of our model empirically by tuning on the validation set Table Hyperparameter exploration is guided by BIBREF24 We train our model using the Adam optimiser BIBREF25 with a learning rate of 0001 a batch size of 64 samples and for 20 epochs We implement learning rate scheduling which reduces the learning rate by a factor of 01 when the validation loss plateaus for 2 epochs Upon convergence the model achieves 0193 training loss 0215 validation loss and 0213 test loss By placing a threshold of 05 on predicted distances the model achieves 699 binary classification accuracy on training samples 647 on validation samples and 653 on test samples Synchronisation offset prediction Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances To obtain a discretised set of offset candidates we retrieve the true offsets of the training utterances and find that they fall in the range 0 179 ms We discretise this range taking 45ms steps and rendering 40 candidate values 45ms is the smaller of the absolute values of the detectability boundaries INLINEFORM0 125 and INLINEFORM1 45 ms We bin the true offsets in the candidate set and discard empty bins reducing the set from 40 to 24 values We consider all 24 candidates for each test utterance We do this by aligning the two signals according to the given candidate then producing the nonoverlapping windows of ultrasound and MFCC pairs as we did when preparing the data We then use our model to predict the Euclidean distance for each pair and average the distances Finally we select the offset with the smallest average distance as our prediction Evaluation Because the true offsets are known we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance If the discrepancy falls within the minimum detectability range INLINEFORM0 125 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 45 then the prediction is correct Random prediction averaged over 1000 runs yields 146 accuracy with a mean and standard deviation discrepancy of 328 INLINEFORM5 518ms We achieve 829 accuracy with a mean and standard deviation discrepancy of 32 INLINEFORM6 223ms SyncNet reports INLINEFORM7 99 accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 However we argue that our data is more challenging Section SECREF4 Analysis We analyse the performance of our model across different conditions Table shows the model accuracy broken down by utterance type The model achieves 912 accuracy on utterances containing words sentences and conversations all of which exhibit natural variation in speech The model is less successful with Articulatory utterances which contain isolated phones occurring once or repeated eg sh sh sh Such utterances contain subtle tongue movement making it more challenging to correlate the visual signal with the audio And indeed the model finds the correct offset for only 559 of Articulatory utterances A further analysis shows that 844 N INLINEFORM0 90 of stop consonants eg t which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 are correctly synchronised by our model compared to 486 N INLINEFORM1 140 of vowels which contain less distinct movement and are also 
more challenging for therapists to synchronise. Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers than on test sets containing new sessions from previously seen speakers. This is contrary to expectation, but could be due to the UTI challenges described in Section SECREF4 affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD than on the other test sets (64.8% accuracy). However, a further breakdown of the results by test set and utterance type explains this poor performance: the majority of UXTD utterances (71%) are Articulatory utterances, which the model struggles to synchronise correctly. In fact, for other utterance types where there is a large enough sample, such as Words, performance on UXTD is on par with the other test sets. We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-modal embeddings, which are then used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best suited to utterances that contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and the synchronisation offset prediction process into speech therapy software BIBREF6 BIBREF7 and using the learned embeddings for other tasks such as active speaker detection BIBREF4. Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020).
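To make the synchronisation-offset search described above concrete, the following is a minimal Python sketch of the two ingredients involved: a standard margin-based contrastive loss of the kind used to train the two-stream model, and the candidate-offset selection that averages embedding distances over aligned window pairs. The margin value, the embed_ultra and embed_audio callables (standing in for the trained ultrasound and audio streams), and the paired_windows helper (which aligns the two signals at a candidate offset and yields ultrasound/MFCC window pairs) are illustrative assumptions, not part of the original implementation.

import numpy as np

def contrastive_loss(dist, label, margin=1.0):
    # Standard contrastive loss: positive pairs (label = 1) are pulled
    # together, negative pairs (label = 0) are pushed apart up to a margin.
    # The margin of 1.0 is an assumed value, not taken from the paper.
    pos = label * dist ** 2
    neg = (1.0 - label) * np.maximum(margin - dist, 0.0) ** 2
    return 0.5 * np.mean(pos + neg)

def predict_offset(embed_ultra, embed_audio, paired_windows, candidates_ms):
    # For each candidate offset, align the signals, embed every
    # ultrasound/MFCC window pair, and average the Euclidean distances;
    # the candidate with the smallest average distance is returned.
    best_offset, best_dist = None, np.inf
    for offset in candidates_ms:
        dists = [np.linalg.norm(embed_ultra(u) - embed_audio(m))
                 for u, m in paired_windows(offset)]
        mean_dist = float(np.mean(dists))
        if mean_dist < best_dist:
            best_offset, best_dist = offset, mean_dist
    return best_offset

In practice the candidate list would be the discretised set of offsets described above, and a prediction counts as correct when its discrepancy from the true offset falls within the detectability range.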
<s>[INST] Basic tasks of sentiment analysis [/INST] School of Computer Science and Engineering Nanyang Technological University Singapore Sentiment Analysis Subjectivity Detection Deep Learning Aspect Extraction Polarity Distribution Convolutional Neural Network Aspect Feature related to an opinion target Convolution features made of consecutive words BOW Bag of Words NLP Natural Language Processing CNN Convolutional Neural Network LDA Latent Dirichlet Allocation Subjectivity detection is the task of identifying objective and subjective sentences Objective sentences are those which do not exhibit any sentiment So it is desired for a sentiment analysis engine to find and separate the objective sentences for further analysis eg polarity detection In subjective sentences opinions can often be expressed on one or multiple topics Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text ie in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about We consider deep convolutional neural networks where each layer is learned independent of the others resulting in low complexity We model temporal dynamics in product reviews by pretraining the deep CNN using dynamic Gaussian Bayesian networks We combine linguistic aspect mining with CNN features for effective sentiment detection Traditional methods prior to 2001 used handcrafted templates to identify subjectivity and did not generalize well for resourcedeficient languages such as Spanish Later works published between 2002 and 2009 proposed the use of deep neural networks to automatically learn a dictionary of features in the form of convolution kernels that is portable to new languages Recently recurrent deep neural networks are being used to model alternating subjective and objective sentences within a single review Such networks are difficult to train for a large vocabulary of words due to the problem of vanishing gradients Hence in this chapter we consider use of heuristics to learn dynamic Gaussian networks to select significant word dependencies between sentences in a single review Further in order to relation between opinion targets and the corresponding polarity in a review aspect based opinion mining is used Explicit aspects were models by several authors using statistical observations such mutual information between noun phrase and the product class However this method was unable to detect implicit aspects due to high level of noise in the data Hence topic modeling was widely used to extract and group aspects where the latent variable topic is introduced between the observed variables document and word In this chapter we demonstrate the use of common sense reasoning when computing word distributions that enable shifting from a syntactic word model to a semantic concept model While sentiment analysis research has become very popular in the past ten years most companies and researchers still approach it simply as a polarity detection problem In reality sentiment analysis is a suitcase problem that requires tackling many natural language processing NLP subtasks including microtext analysis sarcasm detection anaphora resolution subjectivity detection and aspect extraction In this chapter we focus on the last two subtasks as they are key for ensuring a minimum level of accuracy in the detection of polarity from social media The two basic issues associated with sentiment analysis on the Web in fact are that 1 a lot of factual or 
nonopinionated information needs to be filtered out and 2 opinions are most times about different aspects of the same product or service rather than on the whole item and reviewers tend to praise some and criticize others Subjectivity detection hence ensures that factual information is filtered out and only opinionated information is passed on to the polarity classifier and aspect extraction enables the correct distribution of polarity among the different features of the opinion target in stead of having one unique averaged polarity assigned to it In this chapter we offer some insights about each task and apply an ensemble of deep learning and linguistics to tackle both The opportunity to capture the opinion of the general public about social events political movements company strategies marketing campaigns and product preferences has raised increasing interest of both the scientific community because of the exciting open challenges and the business world because of the remarkable benefits for marketing and financial market prediction Today sentiment analysis research has its applications in several different scenarios There are a good number of companies both large and smallscale that focus on the analysis of opinions and sentiments as part of their mission BIBREF0 Opinion mining techniques can be used for the creation and automated upkeep of review and opinion aggregation websites in which opinions are continuously gathered from the Web and not restricted to just product reviews but also to broader topics such as political issues and brand perception Sentiment analysis also has a great potential as a subcomponent technology for other systems It can enhance the capabilities of customer relationship management and recommendation systems for example allowing users to find out which features customers are particularly interested in or to exclude items that have received overtly negative feedback from recommendation lists Similarly it can be used in social communication for troll filtering and to enhance antispam systems Business intelligence is also one of the main factors behind corporate interest in the field of sentiment analysis BIBREF1 Sentiment analysis is a suitcase research problem that requires tackling many NLP subtasks including semantic parsing BIBREF2 named entity recognition BIBREF3 sarcasm detection BIBREF4 subjectivity detection and aspect extraction In opinion mining different levels of analysis granularity have been proposed each one having its own advantages and drawbacks BIBREF5 BIBREF6 Aspectbased opinion mining BIBREF7 BIBREF8 focuses on the relations between aspects and document polarity An aspect also known as an opinion target is a concept in which the opinion is expressed in the given document For example in the sentence The screen of my phone is really nice and its resolution is superb for a phone review contains positive polarity ie the author likes the phone However more specifically the positive opinion is about its screen and resolution these concepts are thus called opinion targets or aspects of this opinion The task of identifying the aspects in a given opinionated text is called aspect extraction There are two types of aspects defined in aspectbased opinion mining explicit aspects and implicit aspects Explicit aspects are words in the opinionated document that explicitly denote the opinion target For instance in the above example the opinion targets screen and resolution are explicitly mentioned in the text In contrast an implicit aspect is a concept that 
represents the opinion target of an opinionated document but which is not specified explicitly in the text One can infer that the sentence This camera is sleek and very affordable implicitly contains a positive opinion of the aspects appearance and price of the entity camera These same aspects would be explicit in an equivalent sentence The appearance of this camera is sleek and its price is very affordable Most of the previous works in aspect term extraction have either used conditional random fields CRFs BIBREF9 BIBREF10 or linguistic patterns BIBREF7 BIBREF11 Both of these approaches have their own limitations CRF is a linear model so it needs a large number of features to work well linguistic patterns need to be crafted by hand and they crucially depend on the grammatical accuracy of the sentences In this chapter we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection The remainder of this chapter is organized as follows Section SECREF3 and SECREF4 propose some introductory explanation and some literature for the tasks of subjectivity detection and aspect extraction respectively Section SECREF5 illustrates the basic concepts of deep learning adopted in this work Section SECREF6 describes in detail the proposed algorithm Section SECREF7 shows evaluation results finally Section SECREF9 concludes the chapter Subjectivity detection is an important subtask of sentiment analysis that can prevent a sentiment classifier from considering irrelevant or potentially misleading text in online social platforms such as Twitter and Facebook Subjective extraction can reduce the amount of review data to only 60 INLINEFORM0 and still produce the same polarity results as full text classification BIBREF12 This allows analysts in government commercial and political domains who need to determine the response of people to different crisis events BIBREF12 BIBREF13 BIBREF14 Similarly online reviews need to be summarized in a manner that allows comparison of opinions so that a user can clearly see the advantages and weaknesses of each product merely with a single glance both in unimodal BIBREF15 and multimodal BIBREF16 BIBREF17 contexts Further we can do indepth opinion assessment such as finding reasons or aspects BIBREF18 in opinionbearing texts For example INLINEFORM1 which makes the film INLINEFORM2 Several works have explored sentiment composition through careful engineering of features or polarity shifting rules on syntactic structures However sentiment accuracies for classifying a sentence as positivenegativeneutral has not exceeded 60 INLINEFORM3 Early attempts used general subjectivity clues to generate training data from unannotated text BIBREF19 Next bagofwords BOW classifiers were introduced that represent a document as a multi set of its words disregarding grammar and word order These methods did not work well on short tweets Cooccurrence matrices also were unable to capture difference in antonyms such as goodbad that have similar distributions Subjectivity detection hence progressed from syntactic to semantic methods in BIBREF19 where the authors used extraction pattern to represent subjective expressions For example the pattern hijacking of INLINEFORM0 looks for the noun hijacking and the object of the preposition INLINEFORM1 Extracted features are used to train machinelearning classifiers such as SVM BIBREF20 and ELM BIBREF21 Subjectivity detection is also useful for constructing and maintaining sentiment lexicons as 
objective words or concepts need to be omitted from them BIBREF22. Since subjective sentences tend to be longer than neutral sentences, recursive neural networks were proposed, where the sentiment class at each node in the parse tree is captured using matrix multiplication over the parent nodes BIBREF23 BIBREF24. However, the number of possible parent composition functions is exponential; hence, BIBREF25 introduced the recursive neural tensor network, which uses a single tensor composition function to define multiple bilinear dependencies between words. In BIBREF26, the authors used a logistic regression predictor that defines a hyperplane in the word vector space, where a word vector's positive sentiment probability depends on where it lies with respect to this hyperplane. However, it was found that while incorporating words that are more subjective can generally yield better results, the performance gain from employing extra neutral words is less significant BIBREF27. Another class of probabilistic models, called Latent Dirichlet Allocation, assumes each document is a mixture of latent topics. Lastly, sentence-level subjectivity detection was integrated into document-level sentiment detection using graphs where each node is a sentence; the contextual constraints between sentences in a graph led to significant improvement in polarity classification BIBREF28. Similarly, in BIBREF29 the authors take advantage of the sequence encoding method for trees and treat them as sequence kernels for sentences. Templates are not suitable for semantic role labeling, because the relevant context might be very far away. Hence, deep neural networks have become popular for processing text. In word2vec, for example, a word's meaning is simply a signal that helps to classify larger entities such as documents: every word is mapped to a unique vector, represented by a column in a weight matrix, and the concatenation or sum of the vectors is then used as features for prediction of the next word in a sentence BIBREF30. Related words appear next to each other in an INLINEFORM0 dimensional vector space; vectorizing them allows us to measure their similarities and cluster them. For semantic role labeling we need to know the relative position of verbs, so the features can include prefix, suffix, distance from verbs in the sentence, etc. However, each feature has a corresponding vector representation in an INLINEFORM1 dimensional space learned from the training data. Recently, convolutional neural networks (CNNs) have been used for subjectivity detection; in particular, BIBREF31 used recurrent CNNs. These show high accuracy on certain datasets such as Twitter. When we are also concerned with a specific sentence within the context of the previous discussion, the order of the sentences preceding the one at hand results in a sequence of sentences, also known as a time series of sentences BIBREF31. However, their model suffers from overfitting; hence, in this work we consider deep convolutional neural networks where temporal information is modeled via dynamic Gaussian Bayesian networks. Aspect extraction from opinions was first studied by BIBREF7, who introduced the distinction between explicit and implicit aspects. However, the authors only dealt with explicit aspects and used a set of rules based on statistical observations. Hu and Liu's method was later improved by BIBREF32 and by BIBREF33. BIBREF32 assumed the product class is known in advance; their algorithm detects whether a noun or noun phrase is a product feature by computing the pointwise mutual information between the noun phrase and the product class, as sketched below.
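As a rough illustration of the PMI criterion just described, the sketch below scores a candidate noun phrase against the product class from co-occurrence counts. The count dictionaries, the add-one smoothing, and the acceptance threshold are assumptions made for this example; the original approach gathered its statistics differently (e.g., from web search hit counts).

import math

def pmi(phrase, product_class, cooc, phrase_counts, class_counts, total):
    # PMI(phrase, class) = log [ P(phrase, class) / (P(phrase) * P(class)) ],
    # estimated from raw counts with add-one smoothing (an assumed choice).
    p_joint = (cooc.get((phrase, product_class), 0) + 1) / (total + 1)
    p_phrase = (phrase_counts.get(phrase, 0) + 1) / (total + 1)
    p_class = (class_counts.get(product_class, 0) + 1) / (total + 1)
    return math.log(p_joint / (p_phrase * p_class))

def filter_aspect_candidates(candidates, product_class, cooc, phrase_counts,
                             class_counts, total, threshold=0.0):
    # Keep only the noun phrases whose association with the product class
    # exceeds the threshold; the threshold value here is illustrative.
    return [c for c in candidates
            if pmi(c, product_class, cooc, phrase_counts,
                   class_counts, total) > threshold]

Under such a criterion, a phrase like "battery life" in camera reviews would typically score well above unrelated noun phrases, which is what allows the method to separate product features from incidental nouns.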
BIBREF34 presented a method that uses language model to identify product features They assumed that product features are more frequent in product reviews than in a general natural language text However their method seems to have low precision since retrieved aspects are affected by noise Some methods treated the aspect term extraction as sequence labeling and used CRF for that Such methods have performed very well on the datasets even in crossdomain experiments BIBREF9 BIBREF10 Topic modeling has been widely used as a basis to perform extraction and grouping of aspects BIBREF35 BIBREF36 Two models were considered pLSA BIBREF37 and LDA BIBREF38 Both models introduce a latent variable topic between the observable variables document and word to analyze the semantic topic distribution of documents In topic models each document is represented as a random mixture over latent topics where each topic is characterized by a distribution over words Such methods have been gaining popularity in social media analysis like emerging political topic detection in Twitter BIBREF39 The LDA model defines a Dirichlet probabilistic generative process for documenttopic distribution in each document a latent aspect is chosen according to a multinomial distribution controlled by a Dirichlet prior INLINEFORM0 Then given an aspect a word is extracted according to another multinomial distribution controlled by another Dirichlet prior INLINEFORM1 Among existing works employing these models are the extraction of global aspects such as the brand of a product and local aspects such as the property of a product BIBREF40 the extraction of key phrases BIBREF41 the rating of multiaspects BIBREF42 and the summarization of aspects and sentiments BIBREF43 BIBREF44 employed the maximum entropy method to train a switch variable based on POS tags of words and used it to separate aspect and sentiment words BIBREF45 added user feedback to LDA as a responsevariable related to each document BIBREF46 proposed a semisupervised model DFLDA BIBREF47 also represents a semisupervised model which allows the user to set mustlink and cannotlink constraints A mustlink constraint means that two terms must be in the same topic while a cannotlink constraint means that two terms cannot be in the same topic BIBREF48 integrated commonsense in the calculation of word distributions in the LDA algorithm thus enabling the shift from syntax to semantics in aspectbased sentiment analysis BIBREF49 proposed two semisupervised models for product aspect extraction based on the use of seeding aspects In the category of supervised methods BIBREF50 employed seed words to guide topic models to learn topics of specific interest to a user while BIBREF42 and BIBREF51 employed seeding words to extract related product aspects from product reviews On the other hand recent approaches using deep CNNs BIBREF52 BIBREF53 showed significant performance improvement over the stateoftheart methods on a range of NLP tasks BIBREF52 fed word embeddings to a CNN to solve standard NLP problems such as named entity recognition NER partofspeech POS tagging and semantic role labeling In this section we briefly review the theoretical concepts necessary to comprehend the present work We begin with a description of maximum likelihood estimation of edges in dynamic Gaussian Bayesian networks where each node is a word in a sentence Next we show that weights in the CNN can be learned by minimizing a global error function that corresponds to an exponential distribution over a linear combination 
of input sequence of word features Notations Consider a Gaussian network GN with time delays which comprises a set of INLINEFORM0 nodes and observations gathered over INLINEFORM1 instances for all the nodes Nodes can take real values from a multivariate distribution determined by the parent set Let the dataset of samples be INLINEFORM2 where INLINEFORM3 represents the sample value of the INLINEFORM4 random variable in instance INLINEFORM5 Lastly let INLINEFORM6 be the set of parent variables regulating variable INLINEFORM7 In tasks where one is concerned with a specific sentence within the context of the previous discourse capturing the order of the sequences preceding the one at hand may be particularly crucial We take as given a sequence of sentences INLINEFORM0 each in turn being a sequence of words so that INLINEFORM1 where INLINEFORM2 is the length of sentence INLINEFORM3 Thus the probability of a word INLINEFORM4 follows the distribution DISPLAYFORM0 A Bayesian network is a graphical model that represents a joint multivariate probability distribution for a set of random variables BIBREF54 It is a directed acyclic graph INLINEFORM0 with a set of parameters INLINEFORM1 that represents the strengths of connections by conditional probabilities The BN decomposes the likelihood of node expressions into a product of conditional probabilities by assuming independence of nondescendant nodes given their parents DISPLAYFORM0 where INLINEFORM0 denotes the conditional probability of node expression INLINEFORM1 given its parent node expressions INLINEFORM2 and INLINEFORM3 denotes the maximum likelihoodML estimate of the conditional probabilities Figure FIGREF11 a illustrates the state space of a Gaussian Bayesian network GBN at time instant INLINEFORM0 where each node INLINEFORM1 is a word in the sentence INLINEFORM2 The connections represent causal dependencies over one or more time instants The observed state vector of variable INLINEFORM3 is denoted as INLINEFORM4 and the conditional probability of variable INLINEFORM5 given variable INLINEFORM6 is INLINEFORM7 The optimal Gaussian network INLINEFORM8 is obtained by maximizing the posterior probability of INLINEFORM9 given the data INLINEFORM10 From Bayes theorem the optimal Gaussian network INLINEFORM11 is given by DISPLAYFORM0 where INLINEFORM0 is the probability of the Gaussian network and INLINEFORM1 is the likelihood of the expression data given the Gaussian network Given the set of conditional distributions with parameters INLINEFORM0 the likelihood of the data is given by DISPLAYFORM0 To find the likelihood in EQREF14 and to obtain the optimal Gaussian network as in EQREF13 Gaussian BN assumes that the nodes are multivariate Gaussian That is expression of node INLINEFORM0 can be described with mean INLINEFORM1 and covariance matrix INLINEFORM2 of size INLINEFORM3 The joint probability of the network can be the product of a set of conditional probability distributions given by DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denotes the regression coefficient matrix INLINEFORM2 is the conditional variance of INLINEFORM3 given its parent set INLINEFORM4 INLINEFORM5 is the covariance between observations of INLINEFORM6 and the variables in INLINEFORM7 and INLINEFORM8 is the covariance matrix of INLINEFORM9 The acyclic condition of BN does not allow feedback among nodes and feedback is an essential characteristic of real world GN Therefore dynamic Bayesian networks have recently become popular in building GN with time delays mainly due to their 
ability to model causal interactions as well as feedback regulations BIBREF55 A firstorder dynamic BN is defined by a transition network of interactions between a pair of Gaussian networks connecting nodes at time instants INLINEFORM0 and INLINEFORM1 In time instant INLINEFORM2 the parents of nodes are those specified in the time instant INLINEFORM3 Similarly the Gaussian network of a INLINEFORM4 order dynamic system is represented by a Gaussian network comprising INLINEFORM5 consecutive time points and INLINEFORM6 nodes or a graph of INLINEFORM7 nodes In practice the sentence data is transformed to a BOW model where each sentence is a vector of frequencies for each word in the vocabulary Figure FIGREF11 b illustrates the state space of a firstorder Dynamic GBN models transition networks among words in sentences INLINEFORM8 and INLINEFORM9 in consecutive time points the lines correspond to firstorder edges among the words learned using BOW Hence a sequence of sentences results in a time series of word frequencies It can be seen that such a discourse model produces compelling discourse vector representations that are sensitive to the structure of the discourse and promise to capture subtle aspects of discourse comprehension especially when coupled to further semantic data and unsupervised pretraining The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 DISPLAYFORM0 We then apply a max pooling operation over the feature map and take the maximum value INLINEFORM0 as the feature corresponding to this particular kernel vector Similarly varying kernel vectors and window sizes are used to obtain multiple features BIBREF23 For each word INLINEFORM0 in the vocabulary an INLINEFORM1 dimensional vector representation is given in a look up table that is learned from the data BIBREF30 The vector representation of a sentence is hence a concatenation of vectors for individual words Similarly we can have look up tables for other features One might want to provide features other than words if these features are suspected to be helpful Now the convolution kernels are applied to word vectors instead of individual words We use these features to train higher layers of the CNN that can represent bigger groups of words in sentences We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 Multiple features may be learned in parallel in the same CNN layer The features learned in each layer are used to train the next layer DISPLAYFORM0 where indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons Training a CNN becomes difficult as the number of layers increases as the Hessian matrix of secondorder derivatives often does not exist Recently deep learning has been used to improve the scalability of a model that has inherent parallel computation This is because hierarchies of modules can provide a compact representation in the form of inputoutput pairs Each layer tries to minimize the error between the original state of the input nodes and the state of the input nodes predicted by the hidden neurons This results in a downward coupling between modules The more abstract representation at the output of a higher layer module is combined with the less abstract representation at the internal nodes from the module in the 
layer below In the next section we describe deep CNN that can have arbitrary number of layers A deep belief network DBN is a type of deep neural network that can be viewed as a composite of simple unsupervised models such as restricted Boltzmann machines RBMs where each RBMs hidden layer serves as the visible layer for the next RBM BIBREF56 RBM is a bipartite graph comprising two layers of neurons a visible and a hidden layer it is restricted such that the connections among neurons in the same layer are not allowed To compute the weights INLINEFORM0 of an RBM we assume that the probability distribution over the input vector INLINEFORM1 is given as DISPLAYFORM0 where INLINEFORM0 is a normalisation constant Computing the maximum likelihood is difficult as it involves solving the normalisation constant which is a sum of an exponential number of terms The standard approach is to approximate the average over the distribution with an average over a sample from INLINEFORM1 obtained by Markov chain Monte Carlo until convergence To train such a multilayer system we must compute the gradient of the total energy function INLINEFORM0 with respect to weights in all the layers To learn these weights and maximize the global energy function the approximate maximum likelihood contrastive divergence CD approach can be used This method employs each training sample to initialize the visible layer Next it uses the Gibbs sampling algorithm to update the hidden layer and then reconstruct the visible layer consecutively until convergence BIBREF57 As an example here we use a logistic regression model to learn the binary hidden neurons and each visible unit is assumed to be a sample from a normal distribution BIBREF58 The continuous state INLINEFORM0 of the hidden neuron INLINEFORM1 with bias INLINEFORM2 is a weighted sum over all continuous visible nodes INLINEFORM3 and is given by DISPLAYFORM0 where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 The binary state INLINEFORM3 of the hidden neuron can be defined by a sigmoid activation function DISPLAYFORM0 Similarly in the next iteration the binary state of each visible node is reconstructed and labeled as INLINEFORM0 Here we determine the value to the visible node INLINEFORM1 with bias INLINEFORM2 as a random sample from the normal distribution where the mean is a weighted sum over all binary hidden neurons and is given by DISPLAYFORM0 where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 The continuous state INLINEFORM3 is a random sample from INLINEFORM4 where INLINEFORM5 is the variance of all visible nodes Lastly the weights are updated as the difference between the original and reconstructed visible layer using DISPLAYFORM0 where INLINEFORM0 is the learning rate and INLINEFORM1 is the expected frequency with which visible unit INLINEFORM2 and hidden unit INLINEFORM3 are active together when the visible vectors are sampled from the training set and the hidden units are determined by EQREF21 Finally the energy of a DNN can be determined in the final layer using INLINEFORM4 To extend the deep belief networks to convolution deep belief network CDBN we simply partition the hidden layer into INLINEFORM0 groups Each of the INLINEFORM1 groups is associated with a INLINEFORM2 filter where INLINEFORM3 is the width of the kernel and INLINEFORM4 is the number of dimensions in the word vector Let us assume that the input layer has dimension INLINEFORM5 where INLINEFORM6 is the 
length of the sentence Then the convolution operation given by EQREF17 will result in a hidden layer of INLINEFORM7 groups each of dimension INLINEFORM8 These learned kernel weights are shared among all hidden units in a particular group The energy function is now a sum over the energy of individual blocks given by DISPLAYFORM0 The CNN sentence model preserve the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence BIBREF31 However several word dependencies may occur across sentences hence in this work we propose a Bayesian CNN model that uses dynamic Bayesian networks to model a sequence of sentences In this work we integrate a higherorder GBN for sentences into the first layer of the CNN The GBN layer of connections INLINEFORM0 is learned using maximum likelihood approach on the BOW model of the training data The input sequence of sentences INLINEFORM1 are parsed through this layer prior to training the CNN Only sentences or groups of sentences containing high ML motifs are then used to train the CNN Hence motifs are convolved with the input sentences to generate a new set of sentences for pretraining DISPLAYFORM0 where INLINEFORM0 is the number of high ML motifs and INLINEFORM1 is the training set of sentences in a particular class Fig FIGREF28 illustrates the state space of Bayesian CNN where the input layer is pretrained using a dynamic GBN with upto two time point delays shown for three sentences in a review on iPhone The dashed lines correspond to secondorder edges among the words learned using BOW Each hidden layer does convolution followed by pooling across the length of the sentence To preserve the order of words we adopt kernels of increasing sizes Since the number of possible words in the vocabulary is very large we consider only the top subjectivity clue words to learn the GBN layer Lastly Inorder to preserve the context of words in conceptual phrases such as touchscreen we consider additional nodes in the Bayesian network for phrases with subjectivity clues Further the word embeddings in the CNN are initialized using the logbilinear language model LBL where the INLINEFORM0 dimensional vector representation of each word INLINEFORM1 in EQREF10 is given by DISPLAYFORM0 where INLINEFORM0 are the INLINEFORM1 cooccurrence or context matrices computed from the data The time series of sentences is used to generate a subset of sentences containing high ML motifs using EQREF27 The frequency of a sentence in the new dataset will also correspond to the corresponding number of high ML motifs in the sentence In this way we are able to increase the weights of the corresponding causal features among words and concepts extracted using Gaussian Bayesian networks The new set of sentences is used to pretrain the deep neural network prior to training with the complete dataset Each sentence can be divided into chunks or phrases using POS taggers The phrases have hierarchical structures and combine in distinct ways to form sentences The INLINEFORM0 gram kernels learned in the first layer hence correspond to a chunk in the sentence In order to train the CNN for aspect extraction instead we used a special training algorithm suitable for sequential data proposed by BIBREF52 We will summarize it here mainly following BIBREF59 The algorithm trains the neural network by backpropagation in order to maximize the likelihood over training sentences Consider the network parameter INLINEFORM0 We say that 
INLINEFORM1 is the output score for the likelihood of an input INLINEFORM2 to have the tag INLINEFORM3 Then the probability to assign the label INLINEFORM4 to INLINEFORM5 is calculated as DISPLAYFORM0 Define the logadd operation as DISPLAYFORM0 then for a training example the loglikelihood becomes DISPLAYFORM0 In aspect term extraction the terms can be organized as chunks and are also often surrounded by opinion terms Hence it is important to consider sentence structure on a whole in order to obtain additional clues Let it be given that there are INLINEFORM0 tokens in a sentence and INLINEFORM1 is the tag sequence while INLINEFORM2 is the network score for the INLINEFORM3 th tag having INLINEFORM4 th tag We introduce INLINEFORM5 transition score from moving tag INLINEFORM6 to tag INLINEFORM7 Then the score tag for the sentence INLINEFORM8 to have the tag path INLINEFORM9 is defined by DISPLAYFORM0 This formula represents the tag path probability over all possible paths Now from EQREF32 we can write the loglikelihood DISPLAYFORM0 The number of tag paths has exponential growth However using dynamic programming techniques one can compute in polynomial time the score for all paths that end in a given tag BIBREF52 Let INLINEFORM0 denote all paths that end with the tag INLINEFORM1 at the token INLINEFORM2 Then using recursion we obtain DISPLAYFORM0 For the sake of brevity we shall not delve into details of the recursive procedure which can be found in BIBREF52 The next equation gives the logadd for all the paths to the token INLINEFORM0 DISPLAYFORM0 Using these equations we can maximize the likelihood of EQREF35 over all training pairs For inference we need to find the best tag path using the Viterbi algorithm eg we need to find the best tag path that minimizes the sentence score EQREF34 The features of an aspect term depend on its surrounding words Thus we used a window of 5 words around each word in a sentence ie INLINEFORM0 words We formed the local features of that window and considered them to be features of the middle word Then the feature vector was fed to a CNN The network contained one input layer two convolution layers two maxpool layers and a fully connected layer with softmax output The first convolution layer consisted of 100 feature maps with filter size 2 The second convolution layer had 50 feature maps with filter size 3 The stride in each convolution layer is 1 as we wanted to tag each word A maxpooling layer followed each convolution layer The pool size we use in the maxpool layers was 2 We used regularization with dropout on the penultimate layer with a constraint on L2norms of the weight vectors with 30 epochs The output of each convolution layer was computed using a nonlinear function in our case we used INLINEFORM0 As features we used word embeddings trained on two different corpora We also used some additional features and rules to boost the accuracy see Section UID49 The CNN produces local features around each word in a sentence and then combines these features into a global feature vector Since the kernel size for the two convolution layers was different the dimensionality INLINEFORM0 mentioned in Section SECREF16 was INLINEFORM1 and INLINEFORM2 respectively The input layer was INLINEFORM3 where 65 was the maximum number of words in a sentence and 300 the dimensionality of the word embeddings used per each word The process was performed for each word in a sentence Unlike traditional maxlikelihood leaning scheme we trained the system using propagation after convolving all 
tokens in the sentence Namely we stored the weights biases and features for each token after convolution and only backpropagated the error in order to correct them once all tokens were processed using the training scheme as explained in Section SECREF30 If a training instance INLINEFORM0 had INLINEFORM1 words then we represented the input vector for that instance as INLINEFORM2 Here INLINEFORM3 is a INLINEFORM4 dimensional feature vector for the word INLINEFORM5 We found that this network architecture produced good results on both of our benchmark datasets Adding extra layers or changing the pooling size and window size did not contribute to the accuracy much and instead only served to increase computational cost In this subsection we present the data used in our experiments BIBREF64 presented two different neural network models for creating word embeddings The models were loglinear in nature trained on large corpora One of them is a bagofwords based model called CBOW it uses word context in order to obtain the embeddings The other one is called skipgram model it predicts the word embeddings of surrounding words given the current word Those authors made a dataset called word2vec publicly available These 300dimensional vectors were trained on a 100billionword corpus from Google News using the CBOW architecture We trained the CBOW architecture proposed by BIBREF64 on a large Amazon product review dataset developed by BIBREF65 This dataset consists of 34686770 reviews 47 billion words of 2441053 Amazon products from June 1995 to March 2013 We kept the word embeddings 300dimensional httpsenticnetAmazonWEzip Due to the nature of the text used to train this model this includes opinionatedaffective information which is not present in ordinary texts such as the Google News corpus For training and evaluation of the proposed approach we used two corpora Aspectbased sentiment analysis dataset developed by BIBREF66 and SemEval 2014 dataset The dataset consists of training and test sets from two domains Laptop and Restaurant see Table TABREF52 The annotations in both corpora were encoded according to IOB2 a widely used coding scheme for representing sequences In this encoding the first word of each chunk starts with a BType tag IType is the continuation of the chunk and O is used to tag a word which is out of the chunk In our case we are interested to determine whether a word or chunk is an aspect so we only have BA IA and O tags for the words Here is an example of IOB2 tags alsoO excellentO operatingBA systemIA O sizeBA andO weightBA forO optimalO mobilityBA excellentO durabilityBA ofO theO batteryBA theO functionsO providedO byO theO trackpadBA isO unmatchedO byO anyO otherO brandO In this section we present the features the representation of the text and linguistic rules used in our experiments We used the following the features Word Embeddings We used the word embeddings described earlier as features for the network This way each word was encoded as 300dimensional vector which was fed to the network Part of speech tags Most of the aspect terms are either nouns or noun chunk This justifies the importance of POS features We used the POS tag of the word as its additional feature We used 6 basic parts of speech noun verb adjective adverb preposition conjunction encoded as a 6 dimensional binary vector We used Stanford Tagger as a POS tagger These two features vectors were concatenated and fed to CNN So for each word the final feature vector is 306 dimensional In some of our experiments we used a set of 
linguistic patterns LPs derived from sentic patterns LP BIBREF11 a linguistic framework based on SenticNet BIBREF22 SenticNet is a conceptlevel knowledge base for sentiment analysis built by means of sentic computing BIBREF67 a multidisciplinary approach to natural language processing and understanding at the crossroads between affective computing information extraction and commonsense reasoning which exploits both computer and human sciences to better interpret and process social information on the Web In particular we used the following linguistic rules Let a noun h be a subject of a word t which has an adverbial or adjective modifier present in a large sentiment lexicon SenticNet Then mark h as an aspect Except when the sentence has an auxiliary verb such as is was would should could etc we apply If the verb t is modified by an adjective or adverb or is in adverbial clause modifier relation with another token then mark h as an aspect Eg in The battery lasts little battery is the subject of lasts which is modified by an adjective modifier little so battery is marked as an aspect If t has a direct object a noun n not found in SenticNet then mark n an aspect as eg in I like the lens of this camera If a noun h is a complement of a couplar verb then mark h as an explicit aspect Eg in The camera is nice camera is marked as an aspect If a term marked as an aspect by the CNN or the other rules is in a nounnoun compound relationship with another word then instead form one aspect term composed of both of them Eg if in battery life battery or life is marked as an aspect then the whole expression is marked as an aspect The above rules 14 improve recall by discovering more aspect terms However to improve precision we apply some heuristics eg we remove stopwords such as of the a etc even if they were marked as aspect terms by the CNN or the other rules We used the Stanford parser to determine syntactic relations in the sentences We combined LPs with the CNN as follows both LPs and CNNbased classifier are run on the text then all terms marked by any of the two classifiers are reported as aspect terms except for those unmarked by the last rule Table TABREF63 shows the accuracy of our aspect term extraction framework in laptop and restaurant domains The framework gave better accuracy on restaurant domain reviews because of the lower variety of aspect available terms than in laptop domain However in both cases recall was lower than precision Table TABREF63 shows improvement in terms of both precision and recall when the POS feature is used Pretrained word embeddings performed better than randomized features each words vector initialized randomly see Table TABREF62 Amazon embeddings performed better than Google word2vec embeddings This supports our claim that the former contains opinionspecific information which helped it to outperform the accuracy of Google embeddings trained on more formal textthe Google news corpus Because of this in the sequel we only show the performance using Amazon embeddings which we denote simply as WE word embeddings In both domains CNN suffered from low recall ie it missed some valid aspect terms Linguistic analysis of the syntactic structure of the sentences substantially helped to overcome some drawbacks of machine learningbased analysis Our experiments showed good improvement in both precision and recall when LPs were used together with CNN see Table TABREF64 As to the LPs the removal of stopwords Rule 1 and Rule 3 were most beneficial Figure FIGREF66 shows a visualization 
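The way the linguistic patterns and the CNN are combined lends itself to a small sketch: the two sets of proposed aspect tokens are merged, stopwords are removed, and noun-noun compounds are expanded into a single aspect term. The helper names and input structures below are hypothetical; in practice the dependency relations would come from the Stanford parser mentioned above.

```python
STOPWORDS = {"of", "the", "a", "an"}

def combine_aspects(tokens, cnn_aspects, lp_aspects, compounds):
    # cnn_aspects, lp_aspects: sets of token indices marked as aspects
    # compounds: list of (i, j) index pairs in a noun-noun compound relation
    marked = cnn_aspects | lp_aspects
    marked = {i for i in marked if tokens[i].lower() not in STOPWORDS}
    # if either member of a noun-noun compound is marked, mark the whole compound
    for i, j in compounds:
        if i in marked or j in marked:
            marked.update({i, j})
    return sorted(marked)

# Example: "battery life is excellent" with only "battery" proposed by the CNN.
print(combine_aspects(["battery", "life", "is", "excellent"],
                      cnn_aspects={0}, lp_aspects=set(), compounds=[(0, 1)]))
# -> [0, 1], i.e. the whole phrase "battery life" is reported as the aspect
```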
for the Table TABREF64 Table TABREF65 and Figure FIGREF61 shows the comparison between the proposed method and the state of the art on the Semeval dataset It is noted that about 3655 aspect terms present in the laptop domain corpus are phrase and restaurant corpus consists of 2456 aspect terms The performance of detecting aspect phrases are lower than single word aspect tokens in both domains This shows that the sequential tagging is indeed a tough task to do Lack of sufficient training data for aspect phrases is also one of the reasons to get lower accuracy in this case In particular we got 7920 and 8355 Fscore to detect aspect phrases in laptop and restaurant domain respectively We observed some cases where only 1 term in an aspect phrase is detected as aspect term In those cases Rule 4 of the LPs helped to correctly detect the aspect phrases We also carried out experiments on the aspect dataset originally developed by BIBREF66 This is to date the largest comprehensive aspectbased sentiment analysis dataset The best accuracy on this dataset was obtained when word embedding features were used together with the POS features This shows that while the word embedding features are most useful the POS feature also plays a major role in aspect extraction As on the SemEval dataset LPs together with CNN increased the overall accuracy However LPs have performed much better on this dataset than on the SemEval dataset This supports the observation made previously BIBREF66 that on this dataset LPs are more useful One of the possible reasons for this is that most of the sentences in this dataset are grammatically correct and contain only one aspect term Here we combined LPs and a CNN to achieve even better results than the approach of by BIBREF66 based only on LPs Our experimental results showed that this ensemble algorithm CNNLP can better understand the semantics of the text than BIBREF66 s pure LPbased algorithm and thus extracts more salient aspect terms Table TABREF69 and Figure FIGREF68 shows the performance and comparisons of different frameworks Figure FIGREF70 compares the proposed method with the state of the art We believe that there are two key reasons for our framework to outperform stateoftheart approaches First a deep CNN which is nonlinear in nature better fits the data than linear models such as CRF Second the pretrained word embedding features help our framework to outperform stateoftheart methods that do not use word embeddings The main advantage of our framework is that it does not need any feature engineering This minimizes development cost and time We use the MPQA corpus BIBREF20 a collection of 535 English news articles from a variety of sources manually annotated with subjectivity flag From the total of 9700 sentences in this corpus 55 INLINEFORM0 of the sentences are labeled as subjective while the rest are objective We also compare with the Movie Review MR benchmark dataset BIBREF28 that contains 5000 subjective movie review snippets from Rotten Tomatoes website and another 5000 objective sentences from plot summaries available from the Internet Movies Database All sentences are at least ten words long and drawn from reviews or plot summaries of movies released post 2001 The data preprocessing included removing top 50 stop words and punctuation marks from the sentences Next we used a POS tagger to determine the partofspeech for each word in a sentence Subjectivity clues dataset BIBREF19 contains a list of over 8000 clues identified manually as well as automatically using both 
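A rough sketch of the preprocessing step just described, dropping the 50 most frequent stop words and punctuation before POS tagging, might look as follows; since the exact stopword list is not given here, the 50 most frequent corpus tokens are used as a stand-in assumption.

```python
import string
from collections import Counter

def preprocess(sentences):
    # sentences: list of raw sentence strings from MPQA / MR
    tokenised = [s.lower().split() for s in sentences]
    counts = Counter(tok for sent in tokenised for tok in sent)
    top50 = {w for w, _ in counts.most_common(50)}   # assumed stopword list
    cleaned = []
    for sent in tokenised:
        kept = [t.strip(string.punctuation) for t in sent if t not in top50]
        cleaned.append([t for t in kept if t])       # drop pure-punctuation tokens
    return cleaned
```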
annotated and unannotated data Each clue is a word and the corresponding part of speech The frequency of each clue was computed in both subjective and objective sentences of the MPQA corpus Here we consider the top 50 clue words with highest frequency of occurrence in the subjective sentences We also extracted 25 top concepts containing the top clue words using the method described in BIBREF11 The CNN is collectively pretrained with both subjective and objective sentences that contain high ML word and concept motifs The word vectors are initialized using the LBL model and a context window of size 5 and 30 features Each sentence is wrapped to a window of 50 words to reduce the number of parameters and hence the overfitting of the model A CNN with three hidden layers of 100 neurons and kernels of size INLINEFORM0 is used The output layer corresponds to two neurons for each class of sentiments We used 10 fold cross validation to determine the accuracy of classifying new sentences using the trained CNN classifier A comparison is done with classifying the time series data using baseline classifiers such as Naive Bayes SVM NBSVM BIBREF60 Multichannel CNN CNNMC BIBREF61 Subjectivity Word Sense Disambiguation SWSD BIBREF62 and UnsupervisedWSD UWSD BIBREF63 Table TABREF41 shows that BCDBN outperforms previous methods by INLINEFORM0 in accuracy on both datasets Almost INLINEFORM1 improvement is observed over NBSVM on the movie review dataset In addition we only consider word vectors of 30 features instead of the 300 features used by CNNMC and hence are 10 times faster Subjectivity detection can prevent the sentiment classifier from considering irrelevant or potentially misleading text This is particularly useful in multiperspective question answering summarization systems that need to summarize different opinions and perspectives and present multiple answers to the user based on opinions derived from different sources It is also useful to analysts in government commercial and political domains who need to determine the response of the people to different crisis events After filtering of subjective sentences aspect mining can be used to provide clearer visibility into the emotions of people by connecting different polarities to the corresponding target attribute In this chapter we tackled the two basic tasks of sentiment analysis in social media subjectivity detection and aspect extraction We used an ensemble of deep learning and linguistics to collect opinionated information and hence perform finegrained aspectbased sentiment analysis In particular we proposed a Bayesian deep convolutional belief network to classify a sequence of sentences as either subjective or objective and used a convolutional neural network for aspect extraction Coupled with some linguistic rules this ensemble approach gave a significant improvement in performance over stateoftheart techniques and paved the way for a more multifaceted ie covering more NLP subtasks and multidisciplinary ie integrating techniques from linguistics and other disciplines approach to the complex problem of sentiment analysis In the future we will try to visualize the hierarchies of features learned via deep learning We can also consider fusion with other modalities such as YouTube videos This work was funded by Complexity Institute Nanyang Technological University Sentiment Quantification of UserGenerated Content 110170 Semantic Sentiment Analysis of Twitter Data 110167 Twitter Microblog Sentiment Analysis 265
Generalisation in Named Entity Recognition: A Quantitative Analysis

Named entity recognition and classification NERC short NER the task of recognising and assigning a class to mentions of proper names named entities NEs in text has attracted many years of research BIBREF0 BIBREF1 analyses BIBREF2 starting from the first MUC challenge in 1995 BIBREF3 Recognising entities is key to many applications including text summarisation BIBREF4 search BIBREF5 the semantic web BIBREF6 topic modelling BIBREF7 and machine translation BIBREF8 BIBREF9 As NER is being applied to increasingly diverse and challenging text genres BIBREF10 BIBREF11 BIBREF12 this has led to a noisier sparser feature space which in turn requires regularisation BIBREF13 and the avoidance of overfitting This has been the case even for large corpora all of the same genre and with the same entity classification scheme such as ACE BIBREF14 Recall in particular has been a persistent problem as named entities often seem to have unusual surface forms eg unusual character sequences for the given language eg Szeged in an Englishlanguage document or words that individually are typically not NEs unless they are combined together eg the White House Indeed the move from ACE and MUC to broader kinds of corpora has presented existing NER systems and resources with a great deal of difficulty BIBREF15 which some researchers have tried to address through domain adaptation specifically with entity recognition in mind BIBREF16 BIBREF17 BIBREF18 BIBREF19 BIBREF20 However more recent performance comparisons of NER methods over different corpora showed that older tools tend to simply fail to adapt even when given a fair amount of indomain data and resources BIBREF21 BIBREF11 Simultaneously the value of NER in nonnewswire data BIBREF21 BIBREF22 BIBREF23 BIBREF24 BIBREF25 has rocketed for example social media now provides us with a sample of all human discourse unmolested by editors publishing guidelines and the like and all in digital format leading to for example whole new fields of research opening in computational social science BIBREF26 BIBREF27 BIBREF28 The prevailing assumption has been that this lower NER performance is due to domain differences arising from using newswire NW as training data as well as from the irregular noisy nature of new media eg BIBREF21 Existing studies BIBREF11 further suggest that named entity diversity discrepancy between named entities in the training set and the test set entity drift over time in particular and diverse context are the likely reasons behind the significantly lower NER performance on social media corpora as compared to newswire No prior studies however have investigated these hypotheses quantitatively For example it is not yet established whether this performance drop is really due to a higher proportion of unseen NEs in social media or whether it is instead due to NEs being situated in different kinds of linguistic context Accordingly the contributions of this paper lie in investigating the following open research questions In particular the paper carries out a comparative analysis of the performance of several different approaches to statistical NER over multiple text genres with varying NE and lexical diversity In line with prior analyses of NER performance BIBREF2 BIBREF11 we carry out corpus analysis and introduce briefly the NER methods used for experimentation Unlike prior efforts however our main objectives are to uncover the impact of NE diversity and context diversity on
performance measured primarily by F1 score and also to study the relationship between OOV NEs and features and F1 See Section Experiments for details To ensure representativeness and comprehensiveness our experimental findings are based on key benchmark NER corpora spanning multiple genres time periods and corpus annotation methodologies and guidelines As detailed in Section Datasets the corpora studied are OntoNotes BIBREF29 ACE BIBREF30 MUC 7 BIBREF31 the Ritter NER corpus BIBREF21 the MSM 2013 corpus BIBREF32 and the UMBC Twitter corpus BIBREF33 To eliminate potential bias from the choice of statistical NER approach experiments are carried out with three differentlyprincipled NER approaches namely Stanford NER BIBREF34 SENNA BIBREF35 and CRFSuite BIBREF36 see Section NER Models and Features for details Since the goal of this study is to compare NER performance on corpora from diverse domains and genres seven benchmark NER corpora are included spanning newswire broadcast conversation Web content and social media see Table 1 for details These datasets were chosen such that they have been annotated with the same or very similar entity classes in particular names of people locations and organisations Thus corpora including only domainspecific entities eg biomedical corpora were excluded The choice of corpora was also motivated by their chronological age we wanted to ensure a good temporal spread in order to study possible effects of entity drift over time A note is required about terminology This paper refers to text genre and also text domain These are two dimensions by which a document or corpus can be described Genre here accounts the general characteristics of the text measurable with things like register tone reading ease sentence length vocabulary and so on Domain describes the dominant subject matter of text which might give specialised vocabulary or specific unusal word senses For example broadcast news is a genre describing the manner of use of language whereas financial text or popular culture are domains describing the topic One notable exception to this terminology is social media which tends to be a blend of myriad domains and genres with huge variation in both these dimensions BIBREF38 BIBREF39 for simplicity we also refer to this as a genre here In chronological order the first corpus included here is MUC 7 which is the last of the MUC challenges BIBREF31 This is an important corpus since the Message Understanding Conference MUC was the first one to introduce the NER task in 1995 BIBREF3 with focus on recognising persons locations and organisations in newswire text A subsequent evaluation campaign was the CoNLL 2003 NER shared task BIBREF40 which created gold standard data for newswire in Spanish Dutch English and German The corpus of this evaluation effort is now one of the most popular gold standards for NER with new NER approaches and methods often reporting performance on that Later evaluation campaigns began addressing NER for genres other than newswire specifically ACE BIBREF30 and OntoNotes BIBREF29 Both of those contain subcorpora in several genres namely newswire broadcast news broadcast conversation weblogs and conversational telephone speech ACE in addition contains a subcorpus with usenet newsgroups Like CoNLL 2003 the OntoNotes corpus is also a popular benchmark dataset for NER The languages covered are English Arabic and Chinese A further difference between the ACE and OntoNotes corpora on one hand and CoNLL and MUC on the other is that they contain annotations 
not only for NER but also for other tasks such as coreference resolution relation and event extraction and word sense disambiguation In this paper however we restrict ourselves purely to the English NER annotations for consistency across datasets The ACE corpus contains HEAD as well as EXTENT annotations for NE spans For our experiments we use the EXTENT tags With the emergence of social media studying NER performance on this genre gained momentum So far there have been no big evaluation efforts such as ACE and OntoNotes resulting in substantial amounts of gold standard data Instead benchmark corpora were created as part of smaller challenges or individual projects The first such corpus is the UMBC corpus for Twitter NER BIBREF33 where researchers used crowdsourcing to obtain annotations for persons locations and organisations A further Twitter NER corpus was created by BIBREF21 which in contrast to other corpora contains more finegrained classes defined by the Freebase schema BIBREF41 Next the Making Sense of Microposts initiative BIBREF32 MSM provides single annotated data for named entity recognition on Twitter for persons locations organisations and miscellaneous MSM initiatives from 2014 onwards in addition feature a named entity linking task but since we only focus on NER here we use the 2013 corpus These corpora are diverse not only in terms of genres and time periods covered but also in terms of NE classes and their definitions In particular the ACE and OntoNotes corpora try to model entity metonymy by introducing facilities and geopolitical entities GPEs Since the rest of the benchmark datasets do not make this distinction metonymous entities are mapped to a more common entity class see below In order to ensure consistency across corpora only Person PER Location LOC and Organisation ORG are used in our experiments and other NE classes are mapped to O no NE For the Ritter corpus the 10 entity classes are collapsed to three as in BIBREF21 For the ACE and OntoNotes corpora the following mapping is used PERSON rightarrow PER LOCATION FACILITY GPE rightarrow LOC ORGANIZATION rightarrow ORG all other classes rightarrow O Tokens are annotated with BIO sequence tags indicating that they are the beginning B or inside I of NE mentions or outside of NE mentions O For the Ritter and ACE 2005 corpora separate training and test corpora are not publicly available so we randomly sample 13 for testing and use the rest for training The resulting training and testing data sizes measured in number of NEs are listed in Table 2 Separate models are then trained on the training parts of each corpus and evaluated on the development if available and test parts of the same corpus If development parts are available as they are for CoNLL CoNLL Test A and MUC MUC 7 Dev they are not merged with the training corpora for testing as it was permitted to do in the context of those evaluation challenges t P R and F1 of NERC with different models evaluated on different testing corpora trained on corpora normalised by size Table 1 shows which genres the different corpora belong to the number of NEs and the proportions of NE classes per corpus Sizes of NER corpora have increased over time from MUC to OntoNotes Further the class distribution varies between corpora while the CoNLL corpus is very balanced and contains about equal numbers of PER LOC and ORG NEs other corpora are not The least balanced corpus is the MSM 2013 Test corpus which contains 98 LOC NEs but 1110 PER NEs This makes it difficult to compare NER 
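The class harmonisation and BIO scheme described above are simple to sketch; the mapping below follows the text exactly, while the function name and example are illustrative only.

```python
CLASS_MAP = {
    "PERSON": "PER",
    "LOCATION": "LOC", "FACILITY": "LOC", "GPE": "LOC",
    "ORGANIZATION": "ORG",
}

def map_bio_tag(tag):
    # tag is e.g. "B-GPE", "I-PERSON" or "O"
    if tag == "O":
        return "O"
    prefix, cls = tag.split("-", 1)
    mapped = CLASS_MAP.get(cls)
    return f"{prefix}-{mapped}" if mapped else "O"   # all other classes -> O

print([map_bio_tag(t) for t in ["B-PERSON", "I-PERSON", "O", "B-GPE", "B-WORK_OF_ART"]])
# -> ['B-PER', 'I-PER', 'O', 'B-LOC', 'O']
```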
performance here since performance partly depends on training data size Since comparing NER performance as such is not the goal of this paper we will illustrate the impact of training data size by using learning curves in the next section illustrate NERC performance on trained corpora normalised by size in Table UID9 and then only use the original training data size for subsequent experiments In order to compare corpus diversity across genres we measure NE and tokentype diversity following eg BIBREF2 Note that types are the unique tokens so the ratio can be understood as ratio of total tokens to unique ones Table 4 shows the ratios between the number of NEs and the number of unique NEs per corpus while Table 5 reports the tokentype ratios The lower those ratios are the more diverse a corpus is While tokentype ratios also include tokens which are NEs they are a good measure of broader linguistic diversity Aside from these metrics there are other factors which contribute to corpus diversity including how big a corpus is and how well sampled it is eg if a corpus is only about one story it should not be surprising to see a high tokentype ratio Therefore by experimenting on multiple corpora from different genres and created through different methodologies we aim to encompass these other aspects of corpus diversity Since the original NE and tokentype ratios do not account for corpus size Tables 5 and 4 present also the normalised ratios For those a number of tokens equivalent to those in the corpus eg 7037 for UMBC Table 5 or respectively a number of NEs equivalent to those in the corpus 506 for UMBC are selected Table 4 An easy choice of sampling method would be to sample tokens and NEs randomly However this would not reflect the composition of corpora appropriately Corpora consist of several documents tweets or blog entries which are likely to repeat the words or NEs since they are about one story The difference between bigger and smaller corpora is then that bigger corpora consist of more of those documents tweets blog entries interviews etc Therefore when we downsample we take the first n tokens for the tokentype ratios or the first n NEs for the NEsUnique NEs ratios Looking at the normalised diversity metrics the lowest NEUnique NE ratios 15 in bold Table 4 are observed on the Twitter and CoNLL Test corpora Seeing this for Twitter is not surprising since one would expect noise in social media text eg spelling variations or mistakes to also have an impact on how often the same NEs are seen Observing this in the latter though is less intuitive and suggests that the CoNLL corpora are well balanced in terms of stories Low NEUnique ratios 17 can also be observed for ACE WL ACE UN and OntoNotes TC Similar to social media text content from weblogs usenet dicussions and telephone conversations also contains a larger amount of noise compared to the traditionallystudied newswire genre so this is not a surprising result Corpora bearing high NEUnique NE ratios 25 are ACE CTS OntoNotes MZ and OntoNotes BN These results are also not surprising The telephone conversations in ACE CTS are all about the same story and newswire and broadcast news tend to contain longer stories reducing variety in any fixedsize set and are more regular due to editing The tokentype ratios reflect similar trends Table 5 Low tokentype ratios 28 in bold are observed for the Twitter corpora Ritter and UMBC as well as for the CoNLL Test corpus Tokentype ratios are also low 32 for CoNLL Train and ACE WL Interestingly ACE UN and MSM 
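Both diversity measures above are total-to-unique ratios, with size normalisation done by taking the first n items rather than a random sample, as motivated in the text. A minimal sketch, with hypothetical input format:

```python
def ratio(items, n=None):
    # items: list of NE surface forms (NE / unique-NE ratio)
    #        or list of tokens (token / type ratio)
    if n is not None:
        items = items[:n]   # first n tokens or first n NEs for the normalised variant
    return len(items) / len(set(items)) if items else 0.0

tokens = ["the", "battery", "is", "good", "the", "battery", "lasts"]
print(ratio(tokens))        # unnormalised token/type ratio
print(ratio(tokens, n=5))   # ratio normalised to the first 5 tokens
```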
Train and Test do not have low tokentype ratios although they have low NEUnique ratios That is many diverse persons locations and organisations are mentioned in those corpora but similar context vocabulary is used Tokentype ratios are high 44 for MUC7 Dev ACE BC ACE CTS ACE UN and OntoNotes TC Telephone conversations TC having high tokentype ratios can be attributed to the high amount filler words eg uh you know NE corpora are generally expected to have regular language use for ACE at least in this instance Furthermore it is worth pointing out that especially for the larger corpora eg OntoNotes NW size normalisation makes a big difference The normalised NEUnique NE ratios drop by almost a half compared to the unnormalised ratios and normalised TokenType ratios drop by up to 85 This strengthens our argument for size normalisation and also poses the question of low NERC performance for diverse genres being mostly due to the lack of large training corpora This is examined in Section RQ2 NER performance in Different Genres Lastly Table 6 reports tag density percentage of tokens tagged as part of a NE which is another useful metric of corpus diversity that can be interpreted as the information density of a corpus What can be observed here is that the NW corpora have the highest tag density and generally tend to have higher tag density than corpora of other genres that is newswire bears a lot of entities Corpora with especially low tag density 006 in bold are the TC corpora Ritter OntoNotes WB ACE UN ACE BN and ACE BC As already mentioned conversational corpora to which ACE BC also belong tend to have many filler words thus it is not surprising that they have a low tag density There are only minor differences between the tag density and the normalised tag density since corpus size as such does not impact tag density To avoid systemspecific bias in our experiments three widelyused supervised statistical approaches to NER are included Stanford NER SENNA and CRFSuite These systems each have contrasting notable attributes Stanford NER BIBREF34 is the most popular of the three deployed widely in both research and commerce The system has been developed in terms of both generalising the underlying technology and also specific additions for certain languages The majority of openlyavailable additions to Stanford NER in terms of models gazetteers prefixsuffix handling and so on have been created for newswirestyle text Named entity recognition and classification is modelled as a sequence labelling task with firstorder conditional random fields CRFs BIBREF43 SENNA BIBREF35 is a more recent system for named entity extraction and other NLP tasks Using word representations and deep learning with deep convolutional neural networks the general principle for SENNA is to avoid taskspecific engineering while also doing well on multiple benchmarks The approach taken to fit these desiderata is to use representations induced from large unlabelled datasets including LM2 introduced in the paper itself and Brown clusters BIBREF44 BIBREF45 The outcome is a flexible system that is readily adaptable given training data Although the system is more flexible in general it relies on learning language models from unlabelled data which might take a long time to gather and retrain For the setup in BIBREF35 language models are trained for seven weeks on the English Wikipedia Reuters RCV1 BIBREF46 and parts of the Wall Street Journal and results are reported over the CoNLL 2003 NER dataset Reuters RCV1 is chosen as unlabelled data 
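Tag density as used above is simply the fraction of tokens whose BIO tag marks them as part of an NE; a tiny sketch, assuming BIO-tagged input:

```python
def tag_density(bio_tags):
    # bio_tags: list of tags such as "B-PER", "I-PER", "O" for one corpus
    if not bio_tags:
        return 0.0
    in_entity = sum(1 for t in bio_tags if t != "O")
    return in_entity / len(bio_tags)

print(tag_density(["B-PER", "I-PER", "O", "O", "B-LOC", "O"]))  # -> 0.5
```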
because the English CoNLL 2003 corpus is created from the Reuters RCV1 corpus For this paper we use the original language models distributed with SENNA and evaluate SENNA with the DeepNL framework BIBREF47 As such it is to some degree also biased towards the CoNLL 2003 benchmark data Finally we use the classical NER approach from CRFsuite BIBREF36 which also uses firstorder CRFs This frames NER as a structured sequence prediction task using features derived directly from the training text Unlike the other systems no external knowledge eg gazetteers and unsupervised representations are used This provides a strong basic supervised system and unlike Stanford NER and SENNA has not been tuned for any particular domain giving potential to reveal more challenging domains without any intrinsic bias We use the feature extractors natively distributed with the NER frameworks For Stanford NER we use the feature set chris2009 without distributional similarity which has been tuned for the CoNLL 2003 data This feature was tuned to handle OOV words through word shape ie capitalisation of constituent characters The goal is to reduce feature sparsity the basic problem behind OOV named entities by reducing the complexity of word shapes for long words while retaining word shape resolution for shorter words In addition word clusters neighbouring ngrams label sequences and quasiNewton minima search are included SENNA uses word embedding features and gazetteer features for the training configuration see httpsgithubcomattardideepnlbenchmarks Finally for CRFSuite we use the provided feature extractor without POS or chunking features which leaves unigram and bigram word features of the mention and in a window of 2 to the left and the right of the mention character shape prefixes and suffixes of tokens These systems are compared against a simple surface form memorisation tagger The memorisation baseline picks the most frequent NE label for each token sequence as observed in the training corpus There are two kinds of ambiguity one is overlapping sequences eg if both New York City and New York are memorised as a location In that case the longestmatching sequence is labelled with the corresponding NE class The second class ambiguity occurs when the same textual label refers to different NE classes eg Google could either refer to the name of a company in which case it would be labelled as ORG or to the companys search engine which would be labelled as O no NE t P R and F1 of NERC with different models trained on original corpora t F1 per NE type with different models trained on original corpora Our first research question is how NERC performance differs for corpora between approaches In order to answer this Precision P Recall R and F1 metrics are reported on sizenormalised corpora Table UID9 and original corpora Tables RQ1 NER performance with Different Approaches and RQ1 NER performance with Different Approaches The reason for size normalisation is to make results comparable across corpora For size normalisation the training corpora are downsampled to include the same number of NEs as the smallest corpus UMBC For that sentences are selected from the beginning of the train part of the corpora so that they include the same number of NEs as UMBC Other ways of downsampling the corpora would be to select the first n sentences or the first n tokens where n is the number of sentences in the smallest corpus The reason that the number of NEs which represent the number of positive training examples is chosen for downsampling the 
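The memorisation baseline described above can be sketched compactly: remember every training NE token sequence with its most frequent class, then greedily label the longest remembered match at each test position. Function names and the match-length cap are assumptions, not the original code.

```python
from collections import Counter, defaultdict

def build_memory(training_entities):
    # training_entities: iterable of (token_tuple, ne_class), e.g. (("New", "York"), "LOC")
    counts = defaultdict(Counter)
    for tokens, cls in training_entities:
        counts[tuple(tokens)][cls] += 1
    # resolve class ambiguity by keeping the most frequent label per surface form
    return {seq: ctr.most_common(1)[0][0] for seq, ctr in counts.items()}

def memorisation_tag(tokens, memory, max_len=6):
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        # prefer the longest remembered sequence starting at position i
        for length in range(min(max_len, len(tokens) - i), 0, -1):
            seq = tuple(tokens[i:i + length])
            if seq in memory:
                cls = memory[seq]
                tags[i] = "B-" + cls
                for k in range(i + 1, i + length):
                    tags[k] = "I-" + cls
                i += length
                break
        else:
            i += 1
    return tags

memory = build_memory([(("New", "York", "City"), "LOC"), (("New", "York"), "LOC")])
print(memorisation_tag(["in", "New", "York", "City", "today"], memory))
# -> ['O', 'B-LOC', 'I-LOC', 'I-LOC', 'O']
```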
corpora is that the number of positive training examples have a much bigger impact on learning than the number of negative training examples For instance BIBREF48 among others study topic classification performance for small corpora and sample from the Reuters corpus They find that adding more negative training data gives little to no improvement whereas adding positive examples drastically improves performance Table UID9 shows results with size normalised precision P recall R and F1Score F1 The five lowest P R and F1 values per method CRFSuite Stanford NER SENNA are in bold to highlight underperformers Results for all corpora are summed with macro average Comparing the different methods the highest F1 results are achieved with SENNA followed by Stanford NER and CRFSuite SENNA has a balanced P and R which can be explained by the use of word embeddings as features which help with the unseen word problem For Stanford NER as well as CRFSuite which do not make use of embeddings recall is about half of precision These findings are in line with other work reporting the usefulness of word embeddings and deep learning for a variety of NLP tasks and domains BIBREF49 BIBREF50 BIBREF51 With respect to individual corpora the ones where SENNA outperforms other methods by a large margin 13 points in F1 are CoNLL Test A ACE CTS and OntoNotes TC The first success can be attributed to being from the same the domain SENNA was originally tuned for The second is more unexpected and could be due to those corpora containing a disproportional amount of PER and LOC NEs which are easier to tag correctly compared to ORG NEs as can be seen in Table RQ1 NER performance with Different Approaches where F1 of NERC methods is reported on the original training data Our analysis of CRFSuite here is that it is less tuned for NW corpora and might therefore have a more balanced performance across genres does not hold Results with CRFSuite for every corpus are worse than the results for that corpus with Stanford NER which is also CRFbased To summarise our findings are noitemsep F1 is highest with SENNA followed by Stanford NER and CRFSuite SENNA outperforms other methods by a large margin eg 13 points in F1 for CoNLL Test A ACE CTS and OntoNotes TC Our hypothesis that CRFSuite is less tuned for NW corpora and will therefore have a more balanced performance across genres does not hold as results for CRFSuite for every corpus are worse than with Stanford NER Our second research question is whether existing NER approaches generalise well over corpora in different genres To do this we study again Precision P Recall R and F1 metrics on sizenormalised corpora Table UID9 on original corpora Tables RQ1 NER performance with Different Approaches and RQ1 NER performance with Different Approaches and we further test performance per genre in a separate table Table 3 F1 scores over sizenormalised corpora vary widely Table UID9 For example the SENNA scores range from 935 F1 ACE UN to 7148 CoNLL Test A Lowest results are consistently observed for the ACE subcorpora UMBC and OntoNotes BC and WB The ACE corpora are large and so may be more prone to nonuniformities emerging during downsampling they also have special rules for some kinds of organisation which can skew results as described in Section UID9 The highest results are on the CoNLL Test A corpus OntoNotes BN and MUC 7 Dev This moderately supports our hypothesis that NER systems perform better on NW than on other genres probably due to extra fitting from many researchers using them as 
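The size normalisation just discussed can be sketched as follows: sentences are taken from the start of each training corpus until the NE count reaches that of the smallest corpus. The sentence format, a list of token and BIO-tag pairs, is an assumption for illustration.

```python
def count_nes(sentence):
    # sentence: list of (token, bio_tag) pairs; each "B-" tag starts one NE
    return sum(1 for _, tag in sentence if tag.startswith("B-"))

def downsample_to_ne_count(sentences, target_nes):
    selected, total = [], 0
    for sent in sentences:          # take sentences from the beginning of the corpus
        if total >= target_nes:
            break
        selected.append(sent)
        total += count_nes(sent)
    return selected
```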
benchmarks for tuning their approaches Looking at the Twitter TWI corpora present the most challenge due to increased diversity the trends are unstable Although results for UMBC are among the lowest results for MSM 2013 and Ritter are in the same range or even higher than those on NW datasets This begs the question whether low results for Twitter corpora reported previously were due to the lack of sufficient ingenre training data Comparing results on normalised to nonnormalised data Twitter results are lower than those for most OntoNotes corpora and CoNLL test corpora mostly due to low recall Other difficult corpora having low performance are ACE UN and WEB corpora We further explicitly examine results on size normalised corpora grouped by corpus type shown in Table 3 It becomes clear that on average newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN WEB and TWI are harder This confirms our hypothesis that social media and Web corpora are challenging for NERC The CoNLL results on the other hand are the highest across all corpora irrespective of the NERC method What is very interesting to see is that they are much higher than the results on the biggest training corpus OntoNotes NW For instance SENNA has an F1 of 7804 on OntoNotes compared to an F1 of 9239 and 8644 for CoNLL Test A and Test B respectively So even though OntoNotes NW is more than twice the size of CoNLL in terms of NEs see Table 4 NERC performance is much higher on CoNLL NERC performance with respect to training corpus size is represented in Figure 1 The latter figure confirms that although there is some correlation between corpus size and F1 the variance between results on comparably sized corpora is big This strengthens our argument that there is a need for experimental studies such as those reported below to find out what apart from corpus size impacts NERC performance Another set of results presented in Table RQ1 NER performance with Different Approaches are those of the simple NERC memorisation baseline It can be observed that corpora with a low F1 for NERC methods such as UMBC and ACE UN also have a low memorisation performance Memorisation is discussed in more depth in Section RQ5 OutOfDomain NER Performance and Memorisation When NERC results are compared to the corpus diversity statistics ie NEUnique NE ratios Table 4 tokentype ratios Table 5 and tag density Table 6 the strongest predictor for F1 is tag density as can be evidenced by the R correlation values between the ratios and F1 scores with the Stanford NER system shown in the respective tables There is a positive correlation between high F1 and high tag density R of 057 and R of 062 with normalised tag density a weak positive correlation for NEunique ratios R of 020 and R of 015 for normalised ratio whereas for tokentype ratios no such clear correlation can be observed R of 025 and R of 007 for normalised ratio However tag density is also not an absolute predictor for NERC performance While NW corpora have both high NERC performance and high tag density this high density is not necessarily an indicator of high performance For example systems might not find high tag density corpora of other genres necessarily so easy One factor that can explain the difference in genre performance between eg newswire and social media is entity drift the change in observed entity terms over time In this case it is evident from the differing surface forms and contexts for a given entity class For example the concept of location that NER systems try to learn might 
be frequently represented in English newswire from 1991 with terms like Iraq or Kuwait but more with Atlanta Bosnia and Kabul in the same language and genre from 1996 Informally drift on Twitter is often characterised as both highfrequency and highmagnitude that is the changes are both rapid and correspond to a large amount of surface form occurrences eg BIBREF12 BIBREF52 We examined the impact of drift in newswire and Twitter corpora taking datasets based in different timeframes The goal is to gauge how much diversity is due to new entities appearing over time To do this we used just the surface lexicalisations of entities as the entity representation The overlap of surface forms was measured across different corpora of the same genre and language We used an additional corpus based on recent data that from the WNUT 2015 challenge BIBREF25 This is measured in terms of occurrences rather than distinct surface forms so that the magnitude of the drift is shown instead of having skew in results from the the noisy long tail Results are given in Table 7 for newswire and Table 8 for Twitter corpora It is evident that the withinclass commonalities in surface forms are much higher in newswire than in Twitter That is to say observations of entity texts in one newswire corpus are more helpful in labelling other newswire corpora than if the same technique is used to label other twitter corpora This indicates that drift is lower in newswire than in tweets Certainly the proportion of entity mentions in most recent corpora the rightmostcolumns are consistently low compared to entity forms available in earlier data These reflect the raised OOV and drift rates found in previous work BIBREF12 BIBREF53 Another explanation is that there is higher noise in variation and that the drift is not longitudinal but rather general This is partially addressed by RQ3 which we will address next in Section RQ3 Impact of NE Diversity To summarise our findings are noitemsep Overall F1 scores vary widely across corpora Trends can be marked in some genres On average newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN WEB and TWI are the hardest corpora for NER methods to reach good performance on Normalising corpora by size results in more noisy data such as TWI and WEB data achieving similar results to NW corpora Increasing the amount of available indomain training data will likely result in improved NERC performance There is a strong positive correlation between high F1 and high tag density a weak positive correlation for NEunique ratios and no clear correlation between tokentype ratios and F1 Temporal NE drift is lower in newswire than in tweets The next section will take a closer look at the impact of seen and unseen NEs on NER performance Unseen NEs are those with surface forms present only in the test but not training data whereas seen NEs are those also encountered in the training data As discussed previously the ratio between those two measures is an indicator of corpus NE diversity Table 9 shows how the number of unseen NEs per test corpus relates to the total number of NEs per corpus The proportion of unseen forms varies widely by corpus ranging from 0351 ACE NW to 0931 UMBC As expected there is a correlation between corpus size and percentage of unseen NEs ie smaller corpora such as MUC and UMBC tend to contain a larger proportion of unseen NEs than bigger corpora such as ACE NW In addition similar to the tokentype ratios listed in Table 5 we observe that TWI and WEB corpora have a higher 
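The drift measurement described above, overlap of entity surface forms weighted by occurrences rather than by distinct forms, can be sketched like this; inputs are simplified to lists of mention strings.

```python
from collections import Counter

def occurrence_overlap(source_entities, target_entities):
    # fraction of target-corpus entity mentions whose surface form also
    # appears as an entity in the source corpus
    source_forms = set(source_entities)
    target_counts = Counter(target_entities)
    covered = sum(c for form, c in target_counts.items() if form in source_forms)
    total = sum(target_counts.values())
    return covered / total if total else 0.0

print(occurrence_overlap(["Iraq", "Kuwait", "Iraq"], ["Iraq", "Kabul", "Iraq", "Bosnia"]))
# -> 0.5, i.e. half of the target mentions are covered by the source corpus
```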
proportion of unseen entities As can be seen from Table RQ1 NER performance with Different Approaches corpora with a low percentage of unseen NEs eg CoNLL Test A and OntoNotes NW tend to have high NERC performance whereas corpora with high percentage of unseen NEs eg UMBC tend to have low NERC performance This suggests that systems struggle to recognise and classify unseen NEs correctly To check this seenunseen performance split next we examine NERC performance for unseen and seen NEs separately results are given in Table 10 The All column group represents an averaged performance result What becomes clear from the macro averages is that F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches This is mostly due to recall on unseen NEs being lower than that on seen NEs and suggests some memorisation and poor generalisation in existing systems In particular Stanford NER and CRFSuite have almost 50 lower recall on unseen NEs compared to seen NEs One outlier is ACE UN for which the average seen F1 is 101 and the average unseen F1 is 152 though both are miniscule and the different negligible Of the three approaches SENNA exhibits the narrowest F1 difference between seen and unseen NEs In fact it performs below Stanford NER for seen NEs on many corpora This may be because SENNA has but a few features based on word embeddings which reduces feature sparsity intuitively the simplicity of the representation is likely to help with unseen NEs at the cost of slightly reduced performance on seen NEs through slower fitting Although SENNA appears to be better at generalising than Stanford NER and our CRFSuite baseline the difference between its performance on seen NEs and unseen NEs is still noticeable This is 2177 for SENNA macro average whereas it is 2941 for CRFSuite and 3568 for Stanford NER The fact that performance over unseen entities is significantly lower than on seen NEs partly explains what we observed in the previous section ie that corpora with a high proportion of unseen entities such as the ACE WL corpus are harder to label than corpora of a similar size from other genres such as the ACE BC corpus eg systems reach F1 of sim 30 compared to sim 50 Table RQ1 NER performance with Different Approaches However even though performance on seen NEs is higher than on unseen there is also a difference between seen NEs in corpora of different sizes and genres For instance performance on seen NEs in ACE WL is 7086 averaged over the three different approaches whereas performance on seen NEs in the lessdiverse ACE BC corpus is higher at 7642 the less diverse data is on average easier to tag Interestingly average F1 on seen NEs in the Twitter corpora MSM and Ritter is around 80 whereas average F1 on the ACE corpora which are of similar size is lower at around 70 To summarise our findings are noitemsep F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches which is mostly due to recall on unseen NEs being lower than that on seen NEs Performance on seen NEs is significantly and consistently higher than that of unseen NEs in different corpora with the lower scores mostly attributable to lower recall However there are still significant differences at labelling seen NEs in different corpora which means that if NEs are seen or unseen does not account for all of the difference of F1 between corpora of different genres Having examined the impact of seenunseen NEs on NERC performance in RQ3 and touched upon surface form drift in RQ2 we now turn our 
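A minimal sketch of the seen/unseen analysis: a test NE counts as seen if its surface form also occurs as an NE in the training data, and recall can then be computed separately per group. Exact-string matching and the helper names are simplifying assumptions.

```python
def split_seen_unseen(train_entities, test_entities):
    # both arguments: lists of entity surface strings, one element per mention
    train_forms = set(train_entities)
    seen = [e for e in test_entities if e in train_forms]
    unseen = [e for e in test_entities if e not in train_forms]
    return seen, unseen

def group_recall(gold_group, correctly_predicted):
    # correctly_predicted: set of gold mentions the system recovered
    if not gold_group:
        return 0.0
    return sum(1 for e in gold_group if e in correctly_predicted) / len(gold_group)

seen, unseen = split_seen_unseen(["Google", "London"], ["Google", "Szeged", "London"])
print(len(unseen) / (len(seen) + len(unseen)))   # proportion of unseen NEs -> 0.33...
```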
attention towards establishing the impact of seen features ie features appearing in the test set that are observed also in the training set While feature sparsity can help to explain low F1 it is not a good predictor of performance across methods sparse features can be good if mixed with highfrequency ones For instance Stanford NER often outperforms CRFSuite see Table RQ1 NER performance with Different Approaches despite having a lower proportion of seen features ie those that occur both in test data and during training Also some approaches such as SENNA use a small number of features and base their features almost entirely on the NEs and not on their context Subsequently we want to measure F1 for unseens and seen NEs as in Section RQ3 Impact of NE Diversity but also examine how the proportion of seen features impacts on the result We define seen features as those observed in the test data and also the training data In turn unseen features are those observed in the test data but not in the training data That is they have not been previously encountered by the system at the time of labeling Unseen features are different from unseen words in that they are the difference in representation not surface form For example the entity Xoxarle may be an unseen entity not found in training data This entity could reasonably have shapeXxxxxxx and lastlettere as part of its feature representation If the training data contains entities Kenneth and Simone each of this will have generated these two features respectively Thus these example features will not be unseen features in this case despite coming from an unseen entity Conversely continuing this example if the training data contains no feature firstletterX which applies to the unseen entity in question then this will be an unseen feature We therefore measure the proportion of unseen features per unseen and seen proportion of different corpora An analysis of this with Stanford NER is shown in Figure 2 Each data point represents a corpus The blue squares are data points for seen NEs and the red circles are data points for unseen NEs The figure shows a negative correlation between F1 and percentage of unseen features ie the lower the percentage of unseen features the higher the F1 Seen and unseen performance and features separate into two groups with only two outlier points The figure shows that novel previously unseen NEs have more unseen features and that systems score a lower F1 on them This suggests that despite the presence of feature extractors for tackling unseen NEs the features generated often do not overlap with those from seen NEs However one would expect individual features to give different generalisation power for other sets of entities and for systems use these features in different ways That is machine learning approaches to the NER task do not seem to learn clearcut decision boundaries based on a small set of features This is reflected in the softness of the correlation Finally the proportion of seen features is higher for seen NEs The two outlier points are ACE UN low F1 for seen NEs despite low percentage of unseen features and UMBC high F1 for seen NEs despite high percentage of unseen features An error analysis shows that the ACE UN corpus suffers from the problem that the seen NEs are ambiguous meaning even if they have been seen in the training corpus a majority of the time they have been observed with a different NE label For the UMBC corpus the opposite is true seen NEs are unambiguous This kind of metonymy is a known and 
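The feature-level analysis above can be illustrated with a toy version of the shape-style features from the Xoxarle example; real systems use far richer templates, so the numbers here are only illustrative.

```python
def shape(word):
    return "".join("X" if c.isupper() else "x" if c.islower() else c for c in word)

def features(word):
    # three toy templates mirroring the example in the text
    return {f"shape={shape(word)}", f"firstletter={word[0]}", f"lastletter={word[-1]}"}

def unseen_feature_rate(train_words, test_words):
    train_feats = set().union(*(features(w) for w in train_words))
    test_feats = set().union(*(features(w) for w in test_words))
    unseen = test_feats - train_feats
    return len(unseen) / len(test_feats) if test_feats else 0.0

print(features("Xoxarle"))
# -> {'shape=Xxxxxxx', 'firstletter=X', 'lastletter=e'} (order may vary)
print(unseen_feature_rate(["Kenneth", "Simone"], ["Xoxarle"]))
# only firstletter=X is unseen -> 1/3
```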
challenging issue in NER and the results on these corpora highlight the impact it still has on modern systems For all approaches the proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs as it should be However within the seen and unseen testing instances there is no clear trend indicating whether having more observed features overall increases F1 performance One trend that is observable is that the smaller the tokentype ratio is Table 5 the bigger the variance between the smallest and biggest n for each corpus or in other words the smaller the tokentype ratio is the more diverse the features To summarise our findings are noitemsep Unseen NEs have more unseen features and systems score a lower F1 on them Outliers are due to lowhigh ambiguity of seen NEs The proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs Within the seen and unseen testing instances there is no clear trend indicating whether having more observed features overall increases F1 performance The smaller the tokentype ratio is the more diverse the features This section explores baseline outofdomain NERC performance without domain adaptation what percentage of NEs are seen if there is a difference between the training and the testing domains and how the difference in performance on unseen and seen NEs compares to indomain performance As demonstrated by the above experiments and in line with related work NERC performance varies across domains while also being influenced by the size of the available indomain training data Prior work on transfer learning and domain adaptation eg BIBREF16 has aimed at increasing performance in domains where only small amounts of training data are available This is achieved by adding outofdomain data from domains where larger amounts of training data exist For domain adaptation to be successful however the seed domain needs to be similar to the target domain ie if there is no or very little overlap in terms of contexts of the training and testing instances the model does not learn any additional helpful weights As a confounding factor Twitter and other social media generally consist of many thousands or millions of microdomains with each author BIBREF54 community BIBREF55 and even conversation BIBREF56 having its own style which makes it hard to adapt to as a single monolithic genre accordingly adding outofdomain NER data gives bad results in this situation BIBREF21 And even if recognised perfectly entities that occur just once cause problems beyond NER eg in coreference BIBREF57 In particular BIBREF58 has reported improving F1 by around 6 through adaptation from the CoNLL to the ACE dataset However transfer learning becomes more difficult if the target domain is very noisy or as mentioned already too different from the seed domain For example BIBREF59 unsuccessfully tried to adapt the CoNLL 2003 corpus to a Twitter corpus spanning several topics They found that handannotating a Twitter corpus consisting of 24000 tokens performs better on new Twitter data than their transfer learning efforts with the CoNLL 2003 corpus The seed domain for the experiments here is newswire where we use the classifier trained on the biggest NW corpus investigated in this study ie OntoNotes NW That classifier is then applied to all other corpora The rationale is to test how suitable such a big corpus would be for improving Twitter NER for which only small training corpora are available Results for
outofdomain performance are reported in Table 11 The highest F1 performance is on the OntoNotes BC corpus with similar results to the indomain task This is unsurprising as it belongs to a similar domain as the training corpus broadcast conversation the data was collected in the same time period and it was annotated using the same guidelines In contrast outofdomain results are much lower than indomain results for the CoNLL corpora even though they belong to the same genre as OntoNotes NW Memorisation recall performance on CoNLL TestA and TestB with OntoNotes NW test suggest that this is partly due to the relatively low overlap in NEs between the two datasets This could be attributed to the CoNLL corpus having been collected in a different time period to the OntoNotes corpus when other entities were popular in the news an example of drift BIBREF37 Conversely Stanford NER does better on these corpora than it does on other news data eg ACE NW This indicates that Stanford NER is capable of some degree of generalisation and can detect novel entity surface forms however recall is still lower than precision here achieving roughly the same scores across these three from 4411 to 4496 showing difficulty in picking up novel entities in novel settings In addition there are differences in annotation guidelines between the two datasets If the CoNLL annotation guidelines were more inclusive than the Ontonotes ones then even a memorisation evaluation over the same dataset would yield this result This is in fact the case OntoNotes divides entities into more classes not all of which can be readily mapped to PERLOCORG For example OntoNotes includes PRODUCT EVENT and WORK OF ART classes which are not represented in the CoNLL data It also includes the NORP class which blends nationalities religious and political groups This has some overlap with ORG but also includes terms such as muslims and Danes which are too broad for the ACErelated definition of ORGANIZATION Full details can be found in the OntoNotes 50 release notes and the brief CoNLL 2003 annotation categories Notice how the CoNLL guidelines are much more terse being generally nonprose but also manage to cram in fairly comprehensive lists of subkinds of entities in each case This is likely to make the CoNLL classes include a diverse range of entities with the many suggestions acting as generative material for the annotator and therefore providing a broader range of annotations from which to generalise from ie slightly easier to tag The lowest F1 of 0 is achieved on ACE BN An examination of that corpus reveals the NEs contained in that corpus are all lower case whereas those in OntoNotes NW have initial capital letters Results on unseen NEs for the outofdomain setting are in Table 12 The last sections observation of NERC performance being lower for unseen NEs also generally holds true in this outofdomain setting The macro average over F1 for the indomain setting is 7674 for seen NEs vs 5376 for unseen NEs whereas for the outofdomain setting the F1 is 5610 for seen NEs and 4773 for unseen NEs Corpora with a particularly big F1 difference between seen and unseen NEs 20 averaged over all NERC methods are ACE NW ACE BC ACE UN OntoNotes BN and OntoNotes MZ For some corpora CoNLL Test A and B MSM and Ritter outofdomain F1 macro average over all methods of unseen NEs is better than for seen NEs We suspect that this is due to the outofdomain evaluation setting encouraging better generalisation as well as the regularity in entity context observed in the fairly 
limited CoNLL news data for example this corpus contains a large proportion of cricket score reports and many cricketer names occurring in linguistically similar contexts Others have also noted that the CoNLL datasets are lowdiversity compared to OntoNotes in the context of named entity recognition BIBREF60 In each of the exceptions except MSM the difference is relatively small We note that the MSM test corpus is one of the smallest datasets used in the evaluation also based on a noisier genre than most others and so regard this discrepancy as an outlier Corpora for which outofdomain F1 is better than indomain F1 for at least one of the NERC methods are MUC7 Test ACE WL ACE UN OntoNotes WB OntoNotes TC and UMBC Most of those corpora are small with combined training and testing bearing fewer than 1000 NEs MUC7 Test ACE UN UMBC In such cases it appears beneficial to have a larger amount of training data even if it is from a different domain andor time period The remaining 3 corpora contain weblogs ACE WL ACE WB and online Usenet discussions ACE UN Those three are diverse corpora as can be observed by the relatively low NEsUnique NEs ratios Table 4 However NEUnique NEs ratios are not an absolute predictor for better outofdomain than indomain performance there are corpora with lower NEsUnique NEs ratios than ACE WB which have better indomain than outofdomain performance As for the other Twitter corpora MSM 2013 and Ritter performance is very low especially for the memorisation system This reflects that as well as surface form variation the context or other information represented by features shifts significantly more in Twitter than across different samples of newswire and that the generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this natural unconstrained kind of text In fact it is interesting to see that the memorisation baseline is so effective with many genres including broadcast news weblog and newswire This indicates that there is low variation in the topics discussed by these sources only a few named entities are mentioned by each When named entities are seen as microtopics each indicating a grounded and small topic of interest this reflects the nature of news having low topic variation focusing on a few specific issues eg location referred to tend to be big persons tend to be politically or financially significant and organisations rich or governmental BIBREF61 In contrast social media users also discuss local locations like restaurants organisations such as music band and sports clubs and are content to discuss people that are not necessarily mentioned in Wikipedia The low overlap and memorisation scores on tweets when taking entity lexica based on newswire are therefore symptomatic of the lack of variation in newswire text which has a limited authorship demographic BIBREF62 and often has to comply to editorial guidelines The other genre that was particularly difficult for the systems was ACE Usenet This is a form of usergenerated content not intended for publication but rather discussion among communities In this sense it is social media and so it is not surprising that system performance on ACE UN resembles performance on social media more than other genres Crucially the computationallycheap memorisation method actually acts as a reasonable predictor of the performance of other methods This suggests that high entity diversity predicts difficulty for current NER systems As we know that social media tends to 
have high entity diversity certainly higher than other genres examined this offers an explanation for why NER systems perform so poorly when taken outside the relatively conservative newswire domain Indeed if memorisation offers a consistent prediction of performance then it is reasonable to say that memorisation and memorisationlike behaviour account for a large proportion of NER system performance To conclude regarding memorisation and outofdomain performance there are multiple issues to consider is the corpus a subcorpus of the same corpus as the training corpus does it belong to the same genre is it collected in the same time period and was it created with similar annotation guidelines Yet it is very difficult to explain highlow outofdomain performance compared to indomain performance with those factors A consistent trend is that if outofdomain memorisation is better than indomain memorisation outofdomain NERC performance with supervised learning is better than indomain NERC performance with supervised learning too This reinforces discussions in previous sections an overlap in NEs is a good predictor for NERC performance This is useful when a suitable training corpus has to be identified for a new domain It can be timeconsuming to engineer features or study and compare machine learning methods for different domains while memorisation performance can be checked quickly Indeed memorisation consistently predicts NER performance The prediction applies both within and across domains This has implications for the focus of future work in NER the ability to generalise well enough to recognise unseen entities is a significant and stillopen problem To summarise our findings are as follows What time period an out of domain corpus is collected in plays an important role in NER performance The context or other information represented by features shifts significantly more in Twitter than across different samples of newswire The generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this varied kind of text Memorisation consistently predicts NER performance both inside and outside genres or domains This paper investigated the ability of modern NER systems to generalise effectively over a variety of genres Firstly by analysing different corpora we demonstrated that datasets differ widely in many regards in terms of size balance of entity classes proportion of NEs and how often NEs and tokens are repeated The most balanced corpus in terms of NE classes is the CoNLL corpus which incidentally is also the most widely used NERC corpus both for method tuning of offtheshelf NERC systems eg Stanford NER SENNA as well as for comparative evaluation Corpora traditionally viewed as noisy ie the Twitter and Web corpora were found to have a low repetition of NEs and tokens More surprisingly however so does the CoNLL corpus which indicates that it is well balanced in terms of stories Newswire corpora have a large proportion of NEs as percentage of all tokens which indicates high information density Web Twitter and telephone conversation corpora on the other hand have low information density Our second set of findings relates to the NERC approaches studied Overall SENNA achieves consistently the highest performance across most corpora and thus has the best approach to generalising from training to testing data This can mostly be attributed to SENNAs use of word embeddings trained with deep convolutional neural nets The default parameters of SENNA achieve
a balanced precision and recall while for Stanford NER and CRFSuite precision is almost twice as high as recall Our experiments also confirmed the correlation between NERC performance and training corpus size although size alone is not an absolute predictor In particular the biggest NEannotated corpus amongst those studied is OntoNotes NW almost twice the size of CoNLL in terms of number of NEs Nevertheless the average F1 for CoNLL is the highest of all corpora and in particular SENNA has 11 points higher F1 on CoNLL than on OntoNotes NW Studying NERC on sizenormalised corpora it becomes clear that there is also a big difference in performance on corpora from the same genre When normalising training data by size diverse corpora such as Web and social media still yield lower F1 than newswire corpora This indicates that annotating more training examples for diverse genres would likely lead to a dramatic increase in F1 What is found to be a good predictor of F1 is a memorisation baseline which picks the most frequent NE label for each token sequence in the test corpus as observed in the training corpus This supported our hypothesis that entity diversity plays an important role being negatively correlated with F1 Studying proportions of unseen entity surface forms experiments showed corpora with a large proportion of unseen NEs tend to yield lower F1 due to much lower performance on unseen than seen NEs about 17 points lower averaged over all NERC methods and corpora This finally explains why the performance is highest for the benchmark CoNLL newswire corpus it contains the lowest proportion of unseen NEs It also explains the difference in performance between NERC on other corpora Out of all the possible indicators for high NER F1 studied this is found to be the most reliable one This directly supports our hypothesis that generalising for unseen named entities is both difficult and important Also studied is the proportion of unseen features per unseen and seen NE portions of different corpora However this is found to not be very helpful The proportion of seen features is higher for seen NEs as it should be However within the seen and unseen NE splits there is no clear trend indicating if having more seen features helps We also showed that handannotating more training examples is a straightforward and reliable way of improving NERC performance However this is costly which is why it can be useful to study if using different larger corpora for training might be helpful Indeed substituting indomain training corpora with other training corpora for the same genre created at the same time improves performance and studying how such corpora can be combined with transfer learning or domain adaptation strategies might improve performance even further However for most corpora there is a significant drop in performance for outofdomain training What is again found to be reliable is to check the memorisation baseline if results for the outofdomain memorisation baseline are higher than for indomain memorisation then using the outofdomain corpus for training is likely to be helpful Across a broad range of corpora and genres characterised in different ways we have examined how named entities are embedded and presented While there is great variation in the range and class of entities found it is consistent that the more varied texts are harder to do named entity recognition in This connection with variation occurs to such an extent that in fact performance when memorising lexical forms stably predicts system
accuracy The result of this is that systems are not sufficiently effective at generalising beyond the entity surface forms and contexts found in training data To close this gap and advance NER systems and cope with the modern reality of streamed NER as opposed to the prior generation of batchlearning based systems with static evaluation sets being used as research benchmarks future work needs to address named entity generalisation and outofvocabulary lexical forms This work was partially supported by the UK EPSRC Grant No EPK0178961 uComp and by the European Union under Grant Agreements No 611233 PHEME The authors wish to thank the CSL reviewers for their helpful and constructive feedback </s>
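The memorisation baseline discussed above is simple enough to sketch directly. The following is a minimal illustration rather than code from the study itself; the function names, the (surface form, label) input format and the toy example are assumptions made for the sketch.

```python
# Minimal sketch of the memorisation baseline described above: each test NE
# surface form is tagged with the label it most frequently received in the
# training data, and unseen surface forms receive no label.
# Names and the toy data are illustrative, not from the original study.
from collections import Counter, defaultdict

def train_memorisation(train_entities):
    """train_entities: iterable of (surface_form, label) pairs, e.g. ("New York", "LOC")."""
    counts = defaultdict(Counter)
    for surface, label in train_entities:
        counts[surface][label] += 1
    # most frequent label per seen surface form
    return {surface: c.most_common(1)[0][0] for surface, c in counts.items()}

def tag_memorisation(model, test_surface_forms):
    """Returns the memorised label for seen surface forms and None for unseen ones."""
    return [model.get(surface) for surface in test_surface_forms]

# Example: a quick check of how much of a test corpus memorisation alone can cover
train = [("New York", "LOC"), ("New York", "ORG"), ("New York", "LOC"), ("EU", "ORG")]
model = train_memorisation(train)
print(tag_memorisation(model, ["New York", "Leicestershire"]))  # ['LOC', None]
```

Because such a baseline needs only entity-annotated surface forms, it can be computed quickly to estimate whether a candidate out-of-domain training corpus is likely to help, in the spirit of the memorisation check argued for above.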
<s>[INST] wav2vec: Unsupervised Pre-training for Speech Recognition [/INST] Current state of the art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 Recently pretraining of neural networks has emerged as an effective technique for settings where labeled data is scarce The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data are available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited This is particularly interesting for tasks where substantial effort is required to obtain labeled data such as speech recognition In computer vision representations for ImageNet BIBREF2 and COCO BIBREF3 have proven to be useful to initialize models for tasks such as image captioning BIBREF4 or pose estimation BIBREF5 Unsupervised pretraining for computer vision has also shown promise BIBREF6 In natural language processing NLP unsupervised pretraining of language models BIBREF7 BIBREF8 BIBREF9 improved many tasks such as text classification phrase structure parsing and machine translation BIBREF10 BIBREF11 In speech processing pretraining has focused on emotion recognition BIBREF12 speaker identification BIBREF13 phoneme discrimination BIBREF14 BIBREF15 as well as transferring ASR representations from one language to another BIBREF16 There has been work on unsupervised learning for speech but the resulting representations have not been applied to improve supervised speech recognition BIBREF17 BIBREF18 BIBREF19 BIBREF20 BIBREF21 In this paper we apply unsupervised pretraining to improve supervised speech recognition This enables exploiting unlabeled audio data which is much easier to collect than labeled data Our model is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system The objective is a contrastive loss that requires distinguishing a true future audio sample from negatives BIBREF22 BIBREF23 BIBREF15 Different to previous work BIBREF15 we move beyond framewise phoneme classification and apply the learned representations to improve strong supervised ASR systems wav2vec relies on a fully convolutional architecture which can be easily parallelized over time on modern hardware compared to recurrent autoregressive models used in previous work SECREF2 Our experimental results on the WSJ benchmark demonstrate that pretrained representations estimated on about 1000 hours of unlabeled speech can substantially improve a characterbased ASR system and outperform the best characterbased result in the literature Deep Speech 2 On the TIMIT task pretraining enables us to match the best reported result in the literature In a simulated lowresource setup with only eight hours of transcribed audio data wav2vec reduces WER by up to 32 compared to a baseline model that relies on labeled data only SECREF3 SECREF4 Given an audio signal as input we optimize our model SECREF3 to predict future samples from a given signal context A common problem with these approaches is the requirement to accurately model the data distribution INLINEFORM0 which is challenging We avoid this problem by first encoding raw speech samples INLINEFORM1 into a feature representation INLINEFORM2 at a lower temporal frequency and then implicitly model a density function INLINEFORM3 similar to BIBREF15 Our model takes raw audio signal as input and then applies two networks The encoder
network embeds the audio signal in latent space and the context network combines multiple timesteps of the encoder to obtain contextualized representations Figure FIGREF2 Both networks are then used to compute the objective function SECREF4 Given raw audio samples INLINEFORM0 we apply the encoder network INLINEFORM1 which we parameterize as a fivelayer convolutional network similar to BIBREF15 Alternatively one could use other architectures such as the trainable frontend of BIBREF24 amongst others The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms Next we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 The context network has seven layers and each layer has kernel size three and stride one The total receptive field of the context network is about 180ms The layers of both networks consist of a causal convolution with 512 channels a group normalization layer and a ReLU nonlinearity We normalize both across the feature and temporal dimension for each sample which is equivalent to group normalization with a single normalization group BIBREF25 We found it important to choose a normalization scheme that is invariant to the scaling and the offset of the input data This choice resulted in representations that generalize well across datasets We train the model to distinguish a sample INLINEFORM0 that is k steps in the future from distractor samples INLINEFORM1 drawn from a proposal distribution INLINEFORM2 by minimizing the contrastive loss for each step INLINEFORM3 DISPLAYFORM0 where we denote the sigmoid INLINEFORM0 and where INLINEFORM1 is the probability of INLINEFORM2 being the true sample We consider a stepspecific affine transformation INLINEFORM3 for each step INLINEFORM4 that is applied to INLINEFORM5 BIBREF15 We optimize the loss INLINEFORM6 summing EQREF5 over different step sizes In practice we approximate the expectation by sampling ten negatives examples by uniformly choosing distractors from each audio sequence ie INLINEFORM7 where INLINEFORM8 is the sequence length and we set INLINEFORM9 to the number of negatives After training we input the representations produced by the context network INLINEFORM0 to the acoustic model instead of logmel filterbank features We consider the following corpora For phoneme recognition on TIMIT BIBREF26 we use the standard train dev and test split where the training data contains just over three hours of audio data Wall Street Journal WSJ Woodland et al 1994 comprises about 81 hours of transcribed audio data We train on si284 validate on nov93dev and test on nov92 Librispeech BIBREF27 contains a total of 960 hours of clean and noisy speech for training For pretraining we use either the full 81 hours of the WSJ corpus an 80 hour subset of clean Librispeech the full 960 hour Librispeech training set or a combination of all of them To train the baseline acoustic model we compute 80 logmel filterbank coefficients for a 25ms sliding window with stride 10ms Final models are evaluated in terms of both word error rate WER and letter error rate LER We use the wav2letter toolkit for training and evaluation of acoustic models BIBREF28 For the TIMIT task we follow the characterbased wav2letter 
setup of BIBREF24 which uses seven consecutive blocks of convolutions kernel size 5 with 1000 channels followed by a PReLU nonlinearity and a dropout rate of 07 The final representation is projected to a 39dimensional phoneme probability The model is trained using the Auto Segmentation Criterion ASG Collobert et al 2016 using SGD with momentum Our baseline for the WSJ benchmark is the wav2letter setup described in BIBREF29 which is a 17 layer model with gated convolutions BIBREF30 The model predicts probabilities for 31 graphemes including the standard English alphabet the apostrophe and period two repetition characters eg the word ann is transcribed as an1 and a silence token used as word boundary All acoustic models are trained on 8 Nvidia V100 GPUs using the distributed training implementations of fairseq and wav2letter When training acoustic models on WSJ we use plain SGD with learning rate 56 as well as gradient clipping BIBREF29 and train for 1000 epochs with a total batch size of 64 audio sequences We use early stopping and choose models based on validation WER after evaluating checkpoints with a 4gram language model For TIMIT we use learning rate 012 momentum of 09 and train for 1000 epochs on 8 GPUs with a batch size of 16 audio sequences For decoding the emissions from the acoustic model we use a lexicon as well as a separate language model trained on the WSJ language modeling data only We consider a 4gram KenLM language model BIBREF31 a wordbased convolutional language model BIBREF29 and a character based convolutional language model BIBREF32 We decode the word sequence INLINEFORM0 from the output of the context network INLINEFORM1 or logmel filterbanks using the beam search decoder of BIBREF29 by maximizing DISPLAYFORM0 where INLINEFORM0 is the acoustic model INLINEFORM1 is the language model INLINEFORM2 are the characters of INLINEFORM3 Hyperparameters INLINEFORM4 INLINEFORM5 and INLINEFORM6 are weights for the language model the word penalty and the silence penalty For decoding WSJ we tune the hyperparameters INLINEFORM0 INLINEFORM1 and INLINEFORM2 using a random search Finally we decode the emissions from the acoustic model with the best parameter setting for INLINEFORM3 INLINEFORM4 and INLINEFORM5 and a beam size of 4000 and beam score threshold of 250 The pretraining models are implemented in PyTorch in the fairseq toolkit BIBREF0 We optimize them with Adam BIBREF33 and a cosine learning rate schedule BIBREF34 annealed over 40K update steps for both WSJ and the clean Librispeech training datasets We start with a learning rate of 1e7 and then gradually warm it up for 500 updates up to 0005 and then decay it following the cosine curve down to 1e6 We train for 400K steps for full Librispeech To compute the objective we sample ten negatives and we use INLINEFORM0 tasks We train on 8 GPUs and put a variable number of audio sequences on each GPU up to a predefined limit of 15M frames per GPU Sequences are grouped by length and we crop them to a maximum size of 150K frames each or the length of the shortest sequence in the batch whichever is smaller Cropping removes speech signal from either the beginning or end of the sequence and we randomly decide the cropping offsets for each sample we resample every epoch This is a form of data augmentation but also ensures equal length of all sequences on a GPU and removes on average 25 of the training data After cropping the total effective batch size across GPUs is about 556 seconds of speech signal for a variable number of audio sequences
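To make the encoder network, context network and contrastive objective described above more concrete, the following is a minimal PyTorch sketch. It is not the released fairseq implementation: the encoder kernel sizes and strides are illustrative values chosen to be consistent with the 30ms receptive field and 10ms hop at 16kHz quoted above (the exact numbers are elided in this text), and the class and helper names are invented for the example.

```python
# Minimal sketch of a wav2vec-style encoder, context network and contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch, kernel, stride):
    # causal convolution + single-group group norm + ReLU, as described above
    return nn.Sequential(
        nn.ConstantPad1d((kernel - 1, 0), 0.0),   # left-pad so the conv is causal
        nn.Conv1d(in_ch, out_ch, kernel, stride=stride),
        nn.GroupNorm(1, out_ch),                  # normalise over feature and time per sample
        nn.ReLU(),
    )


class Wav2VecSketch(nn.Module):
    def __init__(self, dim=512, prediction_steps=12, n_negatives=10):
        super().__init__()
        # five-layer encoder; illustrative kernels/strides (total stride 160 = 10ms at 16kHz)
        enc_cfg = [(10, 5), (8, 4), (4, 2), (4, 2), (4, 2)]
        layers, in_ch = [], 1
        for k, s in enc_cfg:
            layers.append(conv_block(in_ch, dim, k, s))
            in_ch = dim
        self.encoder = nn.Sequential(*layers)
        # seven-layer context network, kernel size three and stride one
        self.context = nn.Sequential(*[conv_block(dim, dim, 3, 1) for _ in range(7)])
        # step-specific affine transformations h_k applied to the context vectors
        self.step_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(prediction_steps)])
        self.n_negatives = n_negatives

    def forward(self, wav):                  # wav: (batch, samples)
        z = self.encoder(wav.unsqueeze(1))   # (batch, dim, T) latent representations
        c = self.context(z)                  # (batch, dim, T) contextualised representations
        return z, c

    def contrastive_loss(self, z, c):
        B, D, T = z.shape
        loss = 0.0
        for k, proj in enumerate(self.step_proj, start=1):
            if T - k <= 0:
                continue
            pred = proj(c[..., :T - k].transpose(1, 2))   # h_k(c_i) for i = 0..T-k-1
            pos = z[..., k:].transpose(1, 2)              # true samples z_{i+k}
            loss = loss - F.logsigmoid((pred * pos).sum(-1)).mean()
            # negatives drawn uniformly from the same sequence, i.e. p_n = 1/T
            neg_idx = torch.randint(0, T, (B, self.n_negatives, T - k), device=z.device)
            negs = torch.gather(
                z.unsqueeze(1).expand(B, self.n_negatives, D, T), 3,
                neg_idx.unsqueeze(2).expand(B, self.n_negatives, D, T - k))
            neg_logits = (pred.unsqueeze(1) * negs.transpose(2, 3)).sum(-1)
            # weight the negative term by the number of negatives (the lambda above)
            loss = loss - self.n_negatives * F.logsigmoid(-neg_logits).mean()
        return loss
```

After pre-training, the contextualised representations c would replace the log-mel filterbank features as input to the acoustic model, as described above.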
Different to BIBREF15 we evaluate the pretrained representations directly on downstream speech recognition tasks We measure speech recognition performance on the WSJ benchmark and simulate various low resource setups SECREF12 We also evaluate on the TIMIT phoneme recognition task SECREF13 and ablate various modeling choices SECREF14 We consider pretraining on the audio data without labels of WSJ part of clean Librispeech about 80h and full Librispeech as well as a combination of all datasets SECREF7 For the pretraining experiments we feed the output of the context network to the acoustic model instead of logmel filterbank features Table shows that pretraining on more data leads to better accuracy on the WSJ benchmark Pretrained representations can substantially improve performance over our characterbased baseline which is trained on logmel filterbank features This shows that pretraining on unlabeled audio data can improve over the best characterbased approach Deep Speech 2 BIBREF1 by 03 WER on nov92 Our best pretraining model performs as well as the phonemebased model of BIBREF35 BIBREF36 is a phonemebased approach that pretrains on the transcribed Librispeech data and then finetunes on WSJ In comparison our method requires only unlabeled audio data and BIBREF36 also rely on a stronger baseline model than our setup What is the impact of pretrained representations with less transcribed data In order to get a better understanding of this we train acoustic models with different amounts of labeled training data and measure accuracy with and without pretrained representations logmel filterbanks The pretrained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4gram language model Figure shows that pretraining reduces WER by 32 on nov93dev when only about eight hours of transcribed data is available Pretraining only on the audio data of WSJ WSJ performs worse compared to the much larger Librispeech Libri This further confirms that pretraining on more data is crucial to good performance On the TIMIT task we use a 7layer wav2letter model with high dropout SECREF3 Synnaeve et al 2016 Table shows that we can match the state of the art when we pretrain on Librispeech and WSJ audio data Accuracy steadily increases with more data for pretraining and the best accuracy is achieved when we use the largest amount of data for pretraining In this section we analyze some of the design choices we made for wav2vec We pretrain on the 80 hour subset of clean Librispeech and evaluate on TIMIT Table shows that increasing the number of negative samples only helps up to ten samples Thereafter performance plateaus while training time increases We suspect that this is because the training signal from the positive samples decreases as the number of negative samples increases In this experiment everything is kept equal except for the number of negative samples Next we analyze the effect of data augmentation through cropping audio sequences SECREF11 When creating batches we crop sequences to a predefined maximum length Table shows that a crop size of 150K frames results in the best performance Not restricting the maximum length None gives an average sequence length of about 207K frames and results in the worst accuracy This is most likely because the setting provides the least amount of data augmentation Table shows that predicting more than 12 steps ahead in the future does not result in better performance and increasing the number of steps increases training
time We introduce the first application of unsupervised pretraining to speech recognition with a fully convolutional model Our approach achieves 278 WER on the test set of WSJ a result that outperforms the next best known characterbased speech recognition model in the literature BIBREF1 while using three orders of magnitude less transcribed training data We show that more data for pretraining improves performance and that this approach not only improves resourcepoor setups but also settings where all WSJ training data is used In future work we will investigate different architectures and finetuning which is likely to further improve performance We thank the Speech team at FAIR especially Jacob Kahn Vineel Pratap and Qiantong Xu for help with wav2letter experiments and Tatiana Likhomanenko for providing convolutional language models for our experiments </s>
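As a small worked example of the pre-training schedule described in the experimental setup above (warm-up from 1e-7 over 500 updates to a peak of 0.005, then cosine annealing towards 1e-6 over 40K updates), the sketch below reads the unpunctuated figures 1e7, 0005 and 1e6 as 1e-7, 0.005 and 1e-6; that reading, the function name and the use of a standalone helper rather than a framework scheduler are assumptions made for illustration.

```python
# Minimal sketch of a warm-up + cosine learning-rate schedule with the quoted values.
import math

def wav2vec_lr(step, total_steps=40_000, warmup_steps=500,
               lr_init=1e-7, lr_peak=5e-3, lr_final=1e-6):
    if step < warmup_steps:
        # linear warm-up from the initial to the peak learning rate
        return lr_init + (lr_peak - lr_init) * step / warmup_steps
    # cosine decay from the peak down to the final learning rate
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_final + 0.5 * (lr_peak - lr_final) * (1 + math.cos(math.pi * progress))

print(wav2vec_lr(0), wav2vec_lr(500), wav2vec_lr(40_000))  # ~1e-7, 0.005, 1e-6
```

In practice the same shape can be obtained with standard cosine schedulers; the helper is only meant to make the quoted numbers concrete.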
<s>[INST] Cross-lingual, Character-Level Neural Morphological Tagging [/INST] Stateoftheart morphological taggers require thousands of annotated sentences to train For the majority of the worlds languages however sufficient largescale annotation is not available and obtaining it would often be infeasible Accordingly an important road forward in lowresource NLP is the development of methods that allow for the training of highquality tools from smaller amounts of data In this work we focus on transfer learning we train a recurrent neural tagger for a lowresource language jointly with a tagger for a related highresource language Forcing the models to share characterlevel features among the languages allows large gains in accuracy when tagging the lowresource languages while maintaining or even improving accuracy on the highresource language Recurrent neural networks constitute the state of the art for a myriad of tasks in NLP eg multilingual partofspeech tagging BIBREF0 syntactic parsing BIBREF1 BIBREF2 morphological paradigm completion BIBREF3 BIBREF4 and language modeling BIBREF5 BIBREF6 recently such models have also improved morphological tagging BIBREF7 BIBREF8 In addition to increased performance over classical approaches neural networks also offer a second advantage they admit a clean paradigm for multitask learning If the learned representations for all of the tasks are embedded jointly into a shared vector space the various tasks reap benefits from each other and often performance improves for all BIBREF9 We exploit this idea for languagetolanguage transfer to develop an approach for crosslingual morphological tagging We experiment on 18 languages taken from four different language families Using the Universal Dependencies treebanks we emulate a lowresource setting for our experiments eg we attempt to train a morphological tagger for Catalan using primarily data from a related language like Spanish Our results demonstrate the successful transfer of morphological knowledge from the highresource languages to the lowresource languages without relying on an externally acquired bilingual lexicon or bitext We consider both the single and multisource transfer case and explore how similar two languages must be in order to enable highquality transfer of morphological taggers Many languages in the world exhibit rich inflectional morphology the form of individual words mutates to reflect the syntactic function For example the Spanish verb soñar will appear as sueño in the first person present singular but soñáis in the second person present plural depending on the bundle of syntactosemantic attributes associated with the given form in a sentential context For concreteness we list a more complete table of Spanish verbal inflections in tabparadigm Note that some languages eg the Northeastern Caucasian language Archi display a veritable cornucopia of potential forms with the size of the verbal paradigm exceeding 10000 BIBREF10 Standard NLP annotation eg the scheme in sylakglassmanEtAl2015ACLIJCNLP marks forms in terms of universal keyattribute pairs eg the first person present singular is represented as [pos=V per=1 num=sg tns=pres] This bundle of keyattribute pairs is typically termed a morphological tag and we may view the goal of morphological tagging as labelling each word in its sentential context with the appropriate tag BIBREF11 BIBREF12 As the partofspeech POS is a component of the tag we may view morphological
tagging as a strict generalization of POS tagging where we have significantly refined the set of available tags All of the experiments in this paper make use of the universal morphological tag set available in the Universal Dependencies UD BIBREF13 As an example we have provided a Russian sentence with its UD tagging in figrussiansentence Our formulation of transfer learning builds on work in multitask learning BIBREF15 BIBREF9 We treat each individual language as a task and train a joint model for all the tasks We first discuss the current state of the art in morphological tagging a characterlevel recurrent neural network After that we explore three augmentations to the architecture that allow for the transfer learning scenario All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space but involve different mechanisms by which the model may learn languagespecific features Characterlevel neural networks currently constitute the state of the art in morphological tagging BIBREF8 We draw on previous work in defining a conditional distribution over taggings $t$ for a sentence $w$ of length $|w| = N$ as $p_\theta(t \mid w) = \prod_{i=1}^{N} p_\theta(t_i \mid w)$ Eq 12 which may be seen as a zeroth order conditional random field CRF BIBREF16 with parameter vector $\theta$ Importantly this factorization of the distribution $p_\theta(t \mid w)$ also allows for efficient exact decoding and marginal inference in $\mathcal{O}(N)$ time but at the cost of not admitting any explicit interactions in the output structure ie between adjacent tags We parameterize the distribution over tags at each time step as $p_\theta(t_i \mid w) = \text{softmax}\left(W e_i + b\right)$ Eq 15 where $W \in \mathbb{R}^{|\mathcal{T}| \times n}$ is an embedding matrix $b \in \mathbb{R}^{|\mathcal{T}|}$ is a bias vector and positional embeddings $e_i$ are taken from a concatenation of the output of two long shortterm memory recurrent neural networks LSTMs BIBREF18 folded forward and backward respectively over a sequence of input vectors This constitutes a bidirectional LSTM BIBREF19 We define the positional embedding vector as follows $e_i = \left[\text{LSTM}(v_{1:i});\ \text{LSTM}(v_{i+1:N})\right]$ Eq 17 where each $v_i \in \mathbb{R}^n$ is itself a word embedding Note that the function $\text{LSTM}$ returns the final hidden state vector of the network This architecture is the context bidirectional recurrent neural network of planksogaardgoldberg2016P162 Finally we derive each word embedding vector $v_i$ from a characterlevel bidirectional LSTM embedder Namely we define each word embedding as the concatenation $v_i = \left[\text{LSTM}(\langle c_{i1}, \ldots, c_{iM_i}\rangle);\ \text{LSTM}(\langle c_{iM_i}, \ldots, c_{i1}\rangle)\right]$ Eq 18 In other words we run a bidirectional LSTM over the character stream This bidirectional LSTM is the sequence bidirectional recurrent neural network of planksogaardgoldberg2016P162 Note a concatenation of the sequence of character symbols $\langle c_{i1}, \ldots, c_{iM_i}\rangle$ results in the word string $w_i$ Each of the $M_i$ characters $c_{ik}$ is a member of the set $\Sigma$ We take $\Sigma$ to be the union of the sets of characters in the languages considered We direct the reader to heigold2017 for a more indepth discussion of this and various additional architectures for the computation of $v_i$ the architecture we have presented in eqembedderv is competitive with the best performing setting in Heigold et als study Crosslingual morphological tagging may be formulated as a multitask learning problem We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the highresource tagger and the lowresource one The first loss function we consider is the following $\mathcal{L}_{\textit{multi}}(\theta) = \sum_{(t, w) \in \mathcal{D}_s} \log p_\theta(t \mid w, \ell_s)$ 5