Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning

Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phone call. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity: for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels.

Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can be expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential: for example, a banking dialog system would require that a user is logged in before they can retrieve account information.

This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs), which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied.

This paper is organized as follows. Section "Model description" describes the model, and Section "Related work" compares the model to related work. Section "Supervised learning evaluation I" applies HCNs to the bAbI dialog dataset (BIBREF0). Section "Supervised learning evaluation II" then applies the method to real customer support domains at our company. Section "Reinforcement learning illustration" illustrates how HCNs can be optimized with reinforcement learning, and Section "Conclusion" concludes.

At a high level, the four components of a Hybrid Code Network are: a recurrent neural network; domain-specific software; domain-specific action templates; and a conventional entity extraction module for identifying entity mentions in text. Both the RNN and the developer code maintain state. Each action template can be a textual communicative action or an API call. The HCN model is summarized in Figure 1.

The cycle begins when the user provides an utterance as text (step 1). The utterance is featurized in several ways. First, a bag-of-words vector is formed (step 2). Second, an utterance embedding is formed, using a pre-built utterance embedding model (step 3). Third, an entity extraction module identifies entity mentions (step 4), for example identifying "Jennifer Jones" as a <name> entity. The text and entity mentions are then passed to "entity tracking" code provided by the developer (step 5), which grounds and maintains entities, for example mapping the text "Jennifer Jones" to a specific row in a database. This code can optionally return an "action mask", indicating actions which are permitted at the current timestep, as a bit vector. For example, if a target phone number has not yet been identified, the API action to place a phone call may be masked. It can also optionally return "context features", which are features the developer thinks will be useful for distinguishing among actions, such as which entities are currently present and which are absent.
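As a concrete illustration of the featurization in steps 2-3, here is a minimal sketch using a toy vocabulary and random vectors as a stand-in for the pre-built word embedding model; both the vocabulary and the vectors are assumptions for illustration, not the system's actual resources:

```python
import numpy as np

VOCAB = ["book", "table", "italian", "cheap", "paris"]    # toy vocabulary (illustrative)
rng = np.random.default_rng(0)
WORD_VECS = {w: rng.standard_normal(300) for w in VOCAB}  # stand-in for a word2vec model

def bag_of_words(utterance):
    """Step 2: binary bag-of-words vector over the vocabulary."""
    tokens = set(utterance.lower().split())
    return np.array([float(w in tokens) for w in VOCAB])

def utterance_embedding(utterance):
    """Step 3: average the embeddings of in-vocabulary words."""
    vecs = [WORD_VECS[w] for w in utterance.lower().split() if w in WORD_VECS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

x_bow = bag_of_words("book a table with italian food")
x_emb = utterance_embedding("book a table with italian food")
```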
The feature components from steps 1-5 are concatenated to form a feature vector (step 6). This vector is passed to an RNN, such as a long short-term memory (LSTM) (BIBREF1) or gated recurrent unit (GRU) (BIBREF2). The RNN computes a hidden state vector, which is retained for the next timestep (step 8) and passed to a dense layer with a softmax activation, with output dimension equal to the number of distinct system action templates (step 9). Thus the output of step 9 is a distribution over action templates. Next, the action mask is applied as an element-wise multiplication, and the result is normalized back to a probability distribution (step 10); this forces non-permitted actions to take on probability zero. From the resulting distribution (step 11), an action is selected (step 12). When RL is active, exploration is required, so in this case an action is sampled from the distribution; when RL is not active, the best action should be chosen, and so the action with the highest probability is always selected. The selected action is next passed to "entity output" developer code that can substitute in entities (step 13) and produce a fully-formed action, for example mapping the template "<city>, right?" to "Seattle, right?". In step 14, control branches depending on the type of the action: if it is an API action, the corresponding API call in the developer code is invoked (step 15), for example to render rich content to the user. APIs can act as sensors and return features relevant to the dialog, so these can be added to the feature vector in the next timestep (step 16). If the action is text, it is rendered to the user (step 17), and the cycle then repeats. The action taken is provided as a feature to the RNN in the next timestep (step 18).
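A minimal sketch of the mask-and-renormalize step and the two selection rules just described (sampling during RL, argmax otherwise); the function and variable names are illustrative:

```python
import numpy as np

def select_action(action_probs, action_mask, rl_active, rng=np.random.default_rng()):
    """Steps 10-12: zero out non-permitted actions, renormalize, then select."""
    masked = action_probs * action_mask           # element-wise multiplication (step 10)
    masked = masked / masked.sum()                # renormalize to a distribution
    if rl_active:
        return rng.choice(len(masked), p=masked)  # sample for exploration
    return int(np.argmax(masked))                 # otherwise take the most probable action

probs = np.array([0.5, 0.3, 0.2])
mask = np.array([1.0, 0.0, 1.0])   # second action not permitted at this timestep
action = select_action(probs, mask, rl_active=False)  # -> 0
```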
Broadly, there are two lines of work applying machine learning to dialog control. The first decomposes a dialog system into a pipeline, typically including language understanding, dialog state tracking, action selection (policy), and language generation (BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11). Specifically related to HCNs, past work has implemented the policy as feed-forward neural networks (BIBREF12), trained with supervised learning followed by reinforcement learning (BIBREF13). In these works, the policy has not been recurrent, i.e., the policy depends on the state tracker to summarize the observable dialog history into state features, which requires design and specialized labeling. By contrast, HCNs use an RNN which automatically infers a representation of state. For learning efficiency, HCNs use an external lightweight process for tracking entity values, but the policy is not strictly dependent on it: as an illustration, in Section "Supervised learning evaluation II" below, we demonstrate an HCN-based dialog system which has no external state tracker. If there is context which is not apparent in the text in the dialog, such as database status, this can be encoded as a context feature to the RNN.

The second, more recent line of work applies recurrent neural networks (RNNs) to learn "end-to-end" models, which map from an observable dialog history directly to a sequence of output words (BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26). These systems can be applied to task-oriented domains by adding special "API call" actions, enumerating database output as a sequence of tokens (BIBREF0), then learning an RNN using Memory Networks (BIBREF27), gated memory networks (BIBREF28), query reduction networks (BIBREF29), or copy-augmented networks (BIBREF30). In each of these architectures, the RNN learns to manipulate entity values, for example by saving them in a memory. Output is produced by generating a sequence of tokens (or ranking all possible surface forms), which can also draw from this memory. HCNs also use an RNN to accumulate dialog state and choose actions. However, HCNs differ in that they use developer-provided action templates, which can contain entity references, such as "<city>, right?". This design reduces learning complexity and also enables the software to limit which actions are available via an action mask, at the expense of developer effort. To further reduce learning complexity in a practical system, entities are tracked separately, outside the RNN, which also allows them to be substituted into action templates. Also, past end-to-end recurrent models have been trained using supervised learning, whereas we show how HCNs can also be trained with reinforcement learning.

In this section, we compare HCNs to existing approaches on the public "bAbI dialog" dataset (BIBREF0). This dataset includes two end-to-end dialog learning tasks in the restaurant domain, called Task5 and Task6. Task5 consists of synthetic, simulated dialog data with highly regular user behavior and a constrained vocabulary. Dialogs include a database access action which retrieves relevant restaurants from a database, with results included in the dialog transcript. We test on the "OOV" variant of Task5, which includes entity values not observed in the training set. Task6 draws on human-computer dialog data from the second dialog state tracking challenge (DSTC2), where usability subjects (crowd-workers) interacted with several variants of a spoken dialog system (BIBREF31). Since the database from DSTC2 was not provided, database calls have been inferred from the data and inserted into the dialog transcript. Example dialogs are provided in the Appendix, Sections "bAbI Task5 example dialog" and "bAbI Task6 example dialog".

To apply HCNs, we wrote simple domain-specific software, as follows. First, for entity extraction (step 4 in Figure 1), we used a simple string match, with a pre-defined list of entity names, i.e., the list of restaurants available in the database. Second, in the context update (step 5), we wrote simple logic for tracking entities (sketched in the code below): when an entity is recognized in the user input, it is retained by the software, overwriting any previously stored value. For example, if the price "cheap" is recognized in the first turn, it is retained as price=cheap. If "expensive" is then recognized in the third turn, it overwrites "cheap", so the code now holds price=expensive. Third, system actions were templatized: for example, system actions of the form "prezzo is a nice restaurant in the west of town in the moderate price range" all map to the template "<name> is a nice restaurant in the <location> of town in the <price> price range". This results in 16 templates for Task5 and 58 for Task6. Fourth, when database results are received into the entity state, they are sorted by rating. Finally, an action mask was created which encoded common-sense dependencies. These are implemented as simple if-then rules based on the presence of entity values: for example, only allow an API call if its pre-conditions are met; only offer a restaurant if database results have already been received; do not ask for an entity if it is already known; etc.
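The following is a minimal sketch of the entity-tracking overwrite logic and template substitution just described, assuming a hypothetical entity extractor that returns (type, value) pairs; the extractor output format and template names are illustrative, not the paper's code:

```python
# Hypothetical entity state for the restaurant domain (illustrative).
entity_state = {}

def track_entities(mentions):
    """Step 5: the most recent value of each entity type overwrites any stored value."""
    for entity_type, value in mentions:
        entity_state[entity_type] = value

def fill_template(template, state):
    """Step 13: substitute tracked entity values into an action template."""
    text = template
    for entity_type, value in state.items():
        text = text.replace("<" + entity_type + ">", value)
    return text

track_entities([("price", "cheap")])
track_entities([("price", "expensive")])   # overwrites "cheap"
print(fill_template("<name> is a nice restaurant in the <price> price range",
                    {**entity_state, "name": "prezzo"}))
```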
For Task6, we noticed that the system can say that no restaurants match the current query without consulting the database (for an example dialog, see Section "bAbI Task6 example dialog" in the Appendix). In a practical system this information would be retrieved from the database and not encoded in the RNN, so we mined the training data and built a table of search queries known to yield no results. We also added context features that indicated the state of the database, for example, whether there were any restaurants matching the current query. The complete set of context features is given in Appendix Section "Task5 and Task6 context features". Altogether, this code consisted of about 250 lines of Python.

We then trained an HCN on the training set, employing the domain-specific software described above. We selected an LSTM for the recurrent layer (BIBREF1), with the AdaDelta optimizer (BIBREF32). We used the development set to tune the number of hidden units (128) and the number of epochs (12). Utterance embeddings were formed by averaging word embeddings, using a publicly available 300-dimensional word embedding model trained using word2vec on web data (BIBREF33). The word embeddings were static and not updated during LSTM training. In training, each dialog formed one minibatch, and updates were done on full rollouts, i.e., non-truncated back-propagation through time. The training loss was categorical cross-entropy. Further low-level implementation details are in the Appendix, Section "Model implementation details".

We ran experiments with four variants of our model: with and without the utterance embeddings, and with and without the action mask (Figure 1, steps 3 and 6, respectively). Following past work, we report average turn accuracy: for each turn in each dialog, we present the true history of user and system actions to the network and obtain the network's prediction as a string of characters; the turn is correct if the string matches the reference exactly, and incorrect otherwise. We also report dialog accuracy, which indicates whether all turns in a dialog are correct. We compare to four past end-to-end approaches (BIBREF0, BIBREF28, BIBREF30, BIBREF29). We emphasize that past approaches have applied purely sequence-to-sequence models or, as a baseline, purely programmed rules (BIBREF0); by contrast, Hybrid Code Networks are a hybrid of hand-coded rules and learned models.

Results are shown in Table 1. Since Task5 is synthetic data generated using rules, it is possible to obtain perfect accuracy using rules (line 1). The addition of domain knowledge greatly simplifies the learning task and enables HCNs to also attain perfect accuracy. On Task6, rules alone fare poorly, whereas HCNs outperform past learned models.

We next examined learning curves, training with increasing numbers of dialogs. To guard against bias in the ordering of the training set, we averaged over 5 runs, randomly permuting the order of the training dialogs in each run. Results are in Figure 2. In Task5, the action mask and utterance embeddings substantially reduce the number of training dialogs required (note that the horizontal axis scale is logarithmic). For Task6, the benefits of the utterance embeddings are less clear. An error analysis showed several systematic differences between the training and testing sets; indeed, DSTC2 intentionally used different dialog policies for the training and test sets, whereas our goal is to mimic the policy in the training set. Nonetheless, these tasks are the best public benchmark we are aware of, and HCNs exceed the performance of existing sequence-to-sequence models. In addition, they match the performance of past models using an order of magnitude less data (200 vs. 1618 dialogs), which is crucial in practical settings, where collecting realistic dialogs for a new domain can be expensive.
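For reference, a minimal sketch of the turn- and dialog-accuracy metrics used in this section, assuming predictions and references are given as per-dialog lists of action strings (the input format is illustrative):

```python
def turn_and_dialog_accuracy(dialogs):
    """dialogs: list of (predicted_actions, reference_actions) pairs, one per dialog."""
    turns = correct_turns = correct_dialogs = 0
    for predicted, reference in dialogs:
        matches = [p == r for p, r in zip(predicted, reference)]  # exact string match per turn
        turns += len(matches)
        correct_turns += sum(matches)
        correct_dialogs += all(matches)  # a dialog is correct only if every turn is correct
    return correct_turns / turns, correct_dialogs / len(dialogs)

turn_acc, dialog_acc = turn_and_dialog_accuracy([
    (["hello", "api_call italian paris"], ["hello", "api_call italian paris"]),
    (["hello", "what price range?"],      ["hello", "which price range?"]),
])  # -> (0.75, 0.5)
```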
We now turn to comparing with purely hand-crafted approaches. To do this, we obtained logs from our company's text-based customer support dialog system, which uses a sophisticated rule-based dialog manager. Data from this system is attractive for evaluation because it is used by real customers, not usability subjects, and because its rule-based dialog manager was developed by customer support professionals at our company, not the authors. This data is not publicly available, but we are unaware of suitable human-computer dialog data in the public domain which uses rules.

Customers start using the dialog system by entering a brief description of their problem, such as "I need to update my operating system". They are then routed to one of several hundred domains, where each domain attempts to resolve a particular problem. In this study, we collected human-computer transcripts for the high-traffic domains "reset password" and "cannot access account".

We labeled the dialog data as follows. First, we enumerated the unique system actions observed in the data. Then, for each dialog, starting from the beginning, we examined each system action and determined whether it was "correct". Here, correct means that it was the most appropriate action among the set of existing system actions, given the history of that dialog. If multiple actions were arguably appropriate, we broke ties in favor of the existing rule-based dialog manager. Example dialogs are provided in the Appendix, Sections "Forgot password example dialog" and "Account access example dialog".

If a system action was labeled as correct, we left it as-is and continued to the next system action. If the system action was not correct, we replaced it with the correct system action and discarded the rest of the dialog, since we do not know how the user would have replied to this new system action. The resulting dataset contained a mixture of complete and partial dialogs, containing only correct system actions. We partitioned this set into training and test dialogs. Basic statistics of the data are shown in Table 2.

In this domain, no entities were relevant to the control flow, and there was no obvious mask logic, since any question could follow any question. Therefore, we wrote no domain-specific software for this instance of the HCN, and relied purely on the recurrent neural network to drive the conversation. The architecture and training of the RNN were the same as in Section "Supervised learning evaluation I", except that here we did not have enough data for a validation set, so we instead trained until we either achieved 100% accuracy on the training set or reached 200 epochs.

To evaluate, we observe that conventional measures like average dialog accuracy unfairly penalize the system used to collect the dialogs (in our case, the rule-based system). If the system used for collection makes an error at turn t, the labeled dialog only includes the sub-dialog up to turn t, and the system being evaluated offline is only evaluated on that sub-dialog. In other words, in our case, reporting dialog accuracy would favor the HCN, because it would be evaluated on fewer turns than the rule-based system. We therefore use a comparative measure that examines which method produces longer continuous sequences of correct system actions, starting from the beginning of the dialog.
Specifically, we report $\Delta P = \frac{C_{\text{HCN-win}} - C_{\text{rule-win}}}{C_{\text{all}}}$, where $C_{\text{HCN-win}}$ is the number of test dialogs where the rule-based approach output a wrong action before the HCN; $C_{\text{rule-win}}$ is the number of test dialogs where the HCN output a wrong action before the rule-based approach; and $C_{\text{all}}$ is the number of dialogs in the test set. When $\Delta P > 0$, there are more dialogs in which HCNs produce longer continuous sequences of correct actions starting from the beginning of the dialog. We ran all experiments 5 times, each time shuffling the order of the training set. Results are in Figure 3. HCNs exceed the performance of the existing rule-based system after about 30 dialogs.

In these domains, we have a further source of knowledge: the rule-based dialog managers themselves can be used to generate example "sunny-day" dialogs, where the user provides purely expected inputs. From each rule-based controller, synthetic dialogs were sampled to cover each expected user response at least once, and added to the set of labeled real dialogs. This resulted in 75 dialogs for the "Forgot password" domain and 325 for the "Can't access account" domain. Training was repeated as described above. Results are also included in Figure 3, with the suffix "sampled". In the "Can't access account" domain, the sampled dialogs yield a large improvement, probably because the flow chart for this domain is large, so the sampled dialogs increase coverage. The gain in the "Forgot password" domain is present but smaller.

In summary, HCNs can out-perform production-grade rule-based systems with a reasonable number of labeled dialogs, and adding synthetic "sunny-day" dialogs improves performance further. Moreover, unlike existing pipelined approaches to dialog management that rely on an explicit state tracker, this HCN used no explicit state tracker, highlighting an advantage of the model.

In the previous sections, supervised learning (SL) was applied to train the LSTM to mimic dialogs provided by the system developer. Once a system operates at scale, interacting with a large number of users, it is desirable for the system to continue to learn autonomously, using reinforcement learning (RL). With RL, each turn receives a measurement of goodness called a reward; the agent explores different sequences of actions in different situations, and makes adjustments so as to maximize the expected discounted sum of rewards, which is called the return, denoted G. For optimization, we selected a policy gradient approach (BIBREF34), which has been successfully applied to dialog systems (BIBREF35), robotics (BIBREF36), and the board game Go (BIBREF37).

In policy-gradient-based RL, a model $\pi$ is parameterized by $\mathbf{w}$ and outputs a distribution from which actions are sampled at each timestep. At the end of a trajectory (in our case, a dialog), the return G for that trajectory is computed, along with the gradients of the probabilities of the actions taken with respect to the model weights. The weights are then adjusted by taking a gradient step proportional to the return:

$\mathbf{w} \leftarrow \mathbf{w} + \alpha \left( \sum_t \nabla_{\mathbf{w}} \log \pi(a_t | \mathbf{h}_t; \mathbf{w}) \right) (G - b)$   (Eq. 14)

where $\alpha$ is a learning rate; $a_t$ is the action taken at timestep t; $\mathbf{h}_t$ is the dialog history at time t; G is the return of the dialog; $\nabla_{\mathbf{x}} F$ denotes the Jacobian of F with respect to $\mathbf{x}$; b is a baseline described below; and $\pi(a | \mathbf{h}; \mathbf{w})$ is the LSTM, i.e., a stochastic policy which outputs a distribution over actions given a dialog history, parameterized by weights $\mathbf{w}$. The baseline b is an estimate of the average return of the current policy, estimated on the last 100 dialogs using weighted importance sampling.
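A minimal numpy sketch of this update, using a linear softmax policy as a stand-in for the LSTM (the policy form, sizes, and baseline value are illustrative assumptions; the real model computes the same gradient through the recurrent network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_update(W, dialog, G, baseline, lr=0.1):
    """One step of Eq. 14 for pi(a|x) = softmax(W x).
    dialog: list of (feature_vector, action_index) pairs, one per system turn."""
    grad = np.zeros_like(W)
    for x, a in dialog:
        p = softmax(W @ x)
        dlogp = -np.outer(p, x)   # d log pi(a|x) / dW: -p_k * x for every row k...
        dlogp[a] += x             # ...plus x on the chosen action's row
        grad += dlogp
    return W + lr * (G - baseline) * grad

W = np.zeros((4, 8))  # 4 actions, 8 features (toy sizes)
turns = [(np.random.randn(8), 2), (np.random.randn(8), 0)]
W = reinforce_update(W, turns, G=0.95 ** (2 - 1), baseline=0.5)
```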
Intuitively, "better" dialogs receive a positive gradient step, making the actions selected more likely, and "worse" dialogs receive a negative gradient step, making the actions selected less likely.

SL and RL correspond to different methods of updating weights, so both can be applied to the same network. However, there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set. If not, we re-run SL gradient steps on the training set until the model reproduces the training set. Note that this approach allows new training dialogs to be added at any time during RL optimization.

We illustrate RL optimization on a simulated dialog task in the name-dialing domain. In this system, a contact's name may have synonyms ("Michael" may also be called "Mike"), and a contact may have more than one phone number, such as "work" or "mobile", which may in turn have synonyms, like "cell" for "mobile". This domain has a database of names and phone numbers taken from the Microsoft personnel directory, 5 entity types (firstname, nickname, lastname, phonenumber, and phonetype), and 14 actions, including 2 API call actions. Simple entity logic was coded, which retains the most recent copy of recognized entities. A simple action mask suppresses impossible actions, such as placing a phone call before a phone number has been retrieved from the database. Example dialogs are provided in Appendix Section "Name dialing example dialogs".

To perform optimization, we created a simulated user. At the start of a dialog, the simulated user randomly selected a name and phone type, including names and phone types not covered by the dialog system. When speaking, the simulated user can use the canonical name or a nickname; usually answers questions but can ignore the system; can provide additional information not requested; and can give up. The simulated user was parameterized by around 10 probabilities, set by hand. We defined the reward as 1 for successfully completing the task, and 0 otherwise. A discount of 0.95 was used to incentivize the system to complete dialogs faster rather than slower, yielding a return of 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where T is the number of system turns in the dialog. Finally, we created a set of 21 labeled dialogs, to be used for supervised learning.

For the RNN in the HCN, we again used an LSTM with AdaDelta, this time with 32 hidden units. RL policy updates are made after each dialog. Since a simulated user was employed, we did not have real user utterances, and instead relied on context features, omitting the bag-of-words and utterance embedding features.

We first evaluate RL by randomly initializing an LSTM and beginning RL optimization. After 10 RL updates, we freeze the policy and run 500 dialogs with the user simulation to measure task completion. We repeat all of this for 100 runs and report average performance. In addition, we also report results obtained by initializing the LSTM using supervised learning on a training set consisting of 1, 2, 5, or 10 dialogs sampled randomly from the training set, then running RL as described above. Results are in Figure 4. Although RL alone can find a good policy, pre-training with just a handful of labeled dialogs improves learning speed dramatically. Additional experiments (not shown, for space) found that ablating the action mask slowed training, agreeing with BIBREF6.
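The interleaved SL/RL procedure described above can be sketched as follows; `run_dialog`, `rl_step`, `sl_step`, and `reconstructs` are hypothetical helpers standing in for the paper's implementation:

```python
def interleaved_rl(policy, train_dialogs, run_dialog, rl_step, sl_step, reconstructs, n_dialogs):
    """After each policy-gradient step, re-run supervised steps until the policy
    again reproduces the labeled training set."""
    for _ in range(n_dialogs):
        trajectory, G = run_dialog(policy)   # one dialog against the (simulated) user
        rl_step(policy, trajectory, G)       # policy-gradient update (Eq. 14)
        while not reconstructs(policy, train_dialogs):
            sl_step(policy, train_dialogs)   # supervised steps on the training set
    return policy
```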
Finally, we conducted a further experiment where we sampled 10 training dialogs, then added one to the training set just before RL dialog 0, 100, 200, ..., 900. Results are shown in Figure 4. This shows that SL dialogs can be introduced as RL is in progress, i.e., that it is possible to interleave RL and SL. This is an attractive property for practical systems: if a dialog error is spotted by a developer while RL is in progress, it is natural to add a training dialog to the training set.

This paper has introduced Hybrid Code Networks for end-to-end learning of task-oriented dialog systems. HCNs support a separation of concerns, where procedural knowledge and constraints can be expressed in software, and the control flow is learned. Compared to existing end-to-end approaches, HCNs afford more developer control and require less training data, at the expense of a small amount of developer effort.

Results in this paper have explored three different dialog domains. On a public benchmark in the restaurants domain, HCNs exceeded the performance of purely learned models. Results in two troubleshooting domains exceeded the performance of a commercially deployed rule-based system. Finally, in a name-dialing domain, results from dialog simulation show that HCNs can also be optimized with a mixture of reinforcement and supervised learning.

In future work, we plan to extend HCNs by incorporating lines of existing work, such as integrating the entity extraction step into the neural network (BIBREF38), adding richer utterance embeddings (BIBREF39), and supporting text generation (BIBREF14). We will also explore using HCNs with automatic speech recognition (ASR) input, for example by forming features from n-grams of the ASR n-best results (BIBREF40). Of course, we also plan to deploy the model in a live dialog system. More broadly, HCNs are a general model for stateful control, and we would be interested to explore applications beyond dialog systems, for example in NLP medical settings or human-robot NL interaction tasks, where providing domain constraints is important for safety, and in resource-poor settings, where providing domain knowledge can amplify limited data.

The RNN was specified using Keras version 0.3.3, with back-end computation in Theano version 0.8.0.dev0 (BIBREF42, BIBREF41). The Keras model specification is given below (reconstructed from the flattened listing; imports and the normalization helper are added to make it self-contained). The input variable obs includes all features from Figure 1, step 6, except for the previous action (step 18) and the action mask (step 6, topmost vector). Given obs_size, action_size, n_hidden:

```python
from keras.models import Graph
from keras.layers.core import TimeDistributedDense, Activation
from keras.layers.recurrent import LSTM
from keras.optimizers import Adadelta

def normalize(x):
    # Custom activation: renormalize the masked action distribution (step 10).
    return x / x.sum(axis=-1, keepdims=True)

g = Graph()
g.add_input(name='obs', input_shape=(None, obs_size))
g.add_input(name='prev_action', input_shape=(None, action_size))
g.add_input(name='avail_actions', input_shape=(None, action_size))
g.add_node(LSTM(n_hidden, return_sequences=True, activation='tanh'),
           name='h1', inputs=['obs', 'prev_action', 'avail_actions'])
g.add_node(TimeDistributedDense(action_size, activation='softmax'),
           name='h2', input='h1')
g.add_node(Activation(activation=normalize),
           name='action', inputs=['h2', 'avail_actions'],
           merge_mode='mul', create_output=True)
g.compile(optimizer=Adadelta(clipnorm=1.),
          sample_weight_modes={'action': 'temporal'},
          loss={'action': 'categorical_crossentropy'})
```

Model sizes are given in Table 3.
Example dialogs are given below for each of the 5 dialog systems. For space and readability, the entity tags that appear in the user and system sides of the dialogs have been removed; for example, "Call <name>Joan</name>" is shown as "Call Joan". User turns are marked U and system turns S.

bAbI Task5 example dialog:

U: good morning
S: hello what can i help you with today
U: i'd like to book a table with italian food
S: i'm on it
U: (silence)
S: where should it be
U: in paris
S: how many people would be in your party
U: for six people please
S: which price range are looking for
U: in a cheap price range please
S: ok let me look into some options for you
U: (silence)
S: api_call italian paris six cheap
(database results returned)
U: (silence)
S: what do you think of this option: resto_madrid_cheap_spanish_8stars
U: no this does not work for me
S: sure let me find an other option for you
U: (silence)
S: what do you think of this option: resto_madrid_cheap_spanish_6stars
U: do you have something else
S: sure let me find an other option for you
U: (silence)
S: what do you think of this option: resto_madrid_cheap_spanish_1stars
U: it's perfect
S: great let me do the reservation
U: may i have the phone number of the restaurant
S: here it is resto_madrid_cheap_spanish_1stars_phone
U: thanks
S: is there anything i can help you with
U: no thank you
S: you're welcome

bAbI Task6 example dialog. In the system turn marked (*), the system provided venue information without first making an API call:

S: Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?
U: im looking for canapes food in any price range
S: I am sorry but there is no canapes restaurant that matches your request (*)
U: how about european
S: api_call european R_location R_price
(database results returned)
U: (silence)
S: michaelhouse_cafe is a great restaurant serving european food and it is in the expensive price range
U: food an
S: What part of town do you have in mind?
U: any any part of town
S: michaelhouse_cafe serves european food
U: may i have the address and phone number please
S: The phone number of michaelhouse_cafe is michaelhouse_cafe_phone
U: thank you good bye
S: you are welcome

Task5 and Task6 context features: the context features used in Section "Supervised learning evaluation I" are in Table 4.

Forgot password example dialog (some system actions have been shortened for space):

U: How do I recover my password
S: Okay, you don't need to remember your password, we can reset it. Would you like to try that?
U: Yes
S: To reset your password, please see this URL. Was that helpful?
U: I don't have a flashdrive
S: Let's connect you to a person who can help you

Account access example dialog (some system actions have been shortened for space):

U: cannot sign in to my email
S: Please check caps lock is off, and that your email is spelled correctly. Are you able to access your account now?
U: no
S: If you still can't sign in, please go to the following page and check that the email server is running ok. Is everything up and running?
U: yes
S: You need to reset your password. Here's how. Did this solve your problem?
U: still cannot get in
S: Did you receive the security code?
U: yes
S: Check you entered the right numbers. Did this solve your problem?
U: no
S: Have you seen a message that your profile is temporarily blocked?
U: just your message of update on sept 15
S: Did you turn on two-step verification recently?
U: whats that
S: Sorry, I can't help. Rephrase your question, or click below to talk to a person.

Name dialing example dialogs:

S: How can I help you?
U: Call Joe
S: There's more than one person named Joe. Can you say their full name?
U: Call Joe Adamson
S: SavePhonetypeavail
S: Calling Joe Adamson, work
S: PlaceCall

S: How can I help you?
U: Call Sally on her home
S: Sorry, I don't have a home number for Sally Smith. I only have a work phone. Do you want to call that number?
U: no
S: Oh, sorry about that. Goodbye.
Leveraging Recurrent Neural Networks for Multimodal Recognition of Social Norm Violation in Dialog

Social norms are informal understandings that govern human behavior. They serve as the basis for our beliefs and expectations about others, and are instantiated in human-human conversation through verbal and nonverbal behaviors (BIBREF0, BIBREF1). There is a considerable body of work on modeling socially normative behavior in intelligent agent-based systems (BIBREF2, BIBREF3), aiming to facilitate lifelike conversations with human users. Violating such social norms, and impoliteness in conversation, on the other hand, have also been demonstrated to positively affect certain aspects of social interaction. For instance, BIBREF4 suggests that impoliteness may challenge rapport in strangers, but is also an indicator of a built relationship among friends. The literature on social psychology (BIBREF5) shows that the task of managing an interpersonal bond like rapport requires the management of face, which in turn relies on behavioral expectations, which are allied with social norms early in a relationship and become more interpersonally determined as the relationship proceeds. BIBREF6 advanced these arguments by proposing that, with increasing knowledge of one another, more general norms may be purposely violated in order to accommodate each other's behavioral expectations. Moreover, they proposed that such social norm violation in fact reinforces the sense of in-group connectedness. Finally, in BIBREF7, the authors discovered the effect of temporally co-occurring smile and social norm violation as a signal of high interpersonal rapport. Thus, we believe that recognizing the phenomenon of social norm violation in dialog can contribute important insights into understanding the interpersonal dynamics that unfold between interlocutors.

Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus (BIBREF8, BIBREF9). For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community, by leveraging cross-entropy, i.e., the deviation between word sequences predicted by the language model and their usage by the user. However, their models were trained on written language instead of a natural face-to-face dialog corpus. Another kind of social norm violation was examined by BIBREF10, who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm with an SVM classifier. However, no contextual information was considered in this work. BIBREF11 studied the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal, and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with the sparsity of the social norm violation phenomenon by undersampling the negative-class instances to make a balanced dataset.

Motivated by theoretical rationale and prior empirical findings concerning the relationship between social norm violation and interpersonal dynamics, in the current work we take a step towards addressing the above limitations, and our contributions are twofold:
(1) we quantitatively evaluate the contribution of long-term temporal contextual information to detecting violations of social norms; and (2) we incorporate this understanding into our computational model for automatically recognizing social norm violation, by leveraging the power of recurrent neural networks for modeling long-term temporal dependencies.

Reciprocal peer tutoring data was collected from 12 American English-speaking dyads (6 friends and 6 strangers; 6 boys and 6 girls), with a mean age of 13 years, who interacted for 5 hourly sessions over as many weeks (a total of 60 sessions and 5400 minutes of data), tutoring one another in algebra. Each session began with a period of getting to know one another, after which the first tutoring period started, followed by another small social interlude, a second tutoring period with role reversal between the tutor and tutee, and then the final social time.

We assessed our automatic recognition of social norm violation against this corpus, annotated for those strategies. Inter-rater reliability (IRR) for social norm violation, computed via Krippendorff's alpha, was 0.75. IRR for visual behavior was 0.89 for eye gaze, 0.75 for smile count (how many smiles occur), 0.64 for smile duration, and 0.99 for head nod. Table 1 shows statistics of our corpus. Below we discuss the definition of social norm violation.

Ground truth: social norm violations are behaviors or actions that go against generally socially acceptable and stereotypical behaviors. In a first pass, we coded whether a clause was a social norm violation. In a second pass, if it was, we differentiated: (1) breaking the conversational rules of the experiment (e.g., off-task talk during a tutoring session, insulting the experimenter or the experiment, etc.); (2) face-threatening acts (e.g., criticizing, teasing, or insulting, etc.); (3) referring to one's own or the other person's social norm violations, or to general social norm violations (e.g., referring to the need to get back to focusing on work, or to the other person being verbally annoying, etc.). Social norms are culturally specific, so we judged a social norm violation by the impact it had on the listener (e.g., shock, specific reference to the behavior as a violation, etc.).

In this section, our objective was to build a computational model for detecting social norm violation. Towards this end, we first took each clause, the smallest unit that can express a complete proposition, as the prediction unit. Next, inspired by the thorough analysis in BIBREF11, we extracted verbal and visual features of the speaker that were highly correlated with social norm violation clauses, with the rare threshold set to 20. Verbal features included LIWC features (BIBREF12), which helped categorize the words used during social norm violation; bigrams; part-of-speech bigrams; and word/part-of-speech pairs from the speaker's clauses. Visual features included head nod, smile, and eye gaze information of the speaker. In total, there were 3782 features per clause.

We treated a dialog D as a sequence of clauses $c_0, \ldots, c_T$, where T was the number of clauses in D. Each clause $c_i$ was a tuple $([w^i_0, \ldots, w^i_m], e_i)$, where $w^i_0, \ldots, w^i_m$ were the m words in the clause $c_i$, and $e_i$ was the corresponding meta information, such as the relationship of the dyad and nonverbal behavior during the generation of the clause. The hand-crafted feature vector of size 3782 was denoted $f_i$, and could be viewed as a mapping function from clauses to feature vectors. Meanwhile, each clause was associated with a binary label $y_i$ indicating the ground truth of whether $c_i$ is a violation of a social norm. Eventually, the goal was to model $p(y_t \mid c_0, \ldots, c_t)$, the conditional distribution over whether the latest clause was a violation of a social norm, given the entire history of the dialog.
We first trained an L2-regularized logistic regression model using the proposed verbal and visual features $f_i$ as inputs (leftmost in Figure 1). This model serves as our baseline. Past empirical results suggest two possible hypotheses for improving model performance: (1) improvement in the clause-level representation; (2) inclusion of contextual information in the prediction. Therefore, we designed Local/Global-Context models to test these hypotheses. The Local-Context recurrent neural network (RNN) models the context inside a clause at the word level, by sequentially encoding the word embeddings (of size 300) in a clause $c_i$ using a Long Short-Term Memory (LSTM) cell of size 300. The mechanism of the LSTM is defined as:
$\left[ \begin{matrix} i_t \\ f_t \\ o_t \\ j_t \end{matrix} \right] = \left[ \begin{matrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{matrix} \right] \left( W \, [h_{t-1}, x_t] \right)$

$c_t = f_t \odot c_{t-1} + i_t \odot j_t$

$h_t = o_t \odot \tanh(c_t)$
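For concreteness, a minimal numpy sketch of the cell defined by these equations: a single weight matrix W is applied to the concatenated previous hidden state and input, and the four gate pre-activations are split row-wise. The sizes and the absence of bias terms follow the simplified notation above, not any particular library:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W):
    """One step of the LSTM equations above. W has shape (4*d, d + input_dim)."""
    d = h_prev.shape[0]
    pre = W @ np.concatenate([h_prev, x_t])  # W [h_{t-1}, x_t]
    i_t = sigmoid(pre[0:d])                  # input gate
    f_t = sigmoid(pre[d:2*d])                # forget gate
    o_t = sigmoid(pre[2*d:3*d])              # output gate
    j_t = np.tanh(pre[3*d:4*d])              # candidate values
    c_t = f_t * c_prev + i_t * j_t           # c_t = f_t (.) c_{t-1} + i_t (.) j_t
    h_t = o_t * np.tanh(c_t)                 # h_t = o_t (.) tanh(c_t)
    return h_t, c_t

d, n = 4, 3
h, c = np.zeros(d), np.zeros(d)
W = np.random.randn(4 * d, d + n) * 0.1
h, c = lstm_cell(np.random.randn(n), h, c, W)
```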
We treated the last hidden LSTM output $h^i_m$ as the clause embedding, and concatenated it with the corresponding meta information vector $e_i$. The combined vector was linearly transformed and then fed into a softmax function.

Next, our Global-Context RNN investigated the influence of clause-level context in detecting social norm violation, using LSTM cells to model long-term temporal dependencies. For a fair comparison, we used the same hand-crafted features $f_i$ used in the logistic regression model as the representation of clause $c_i$. As shown in Figure 1, we first obtained a linear embedding of size 150, $emb_i = W_e f_i + b_i$, of $f_i$. Then $emb_i$ was used as the input to an LSTM of size 600. The hidden output $h_i$ at each time step was fed into a multilayer perceptron (MLP) with 1 hidden layer of size 100. We applied 50% dropout regularization (BIBREF13) at the input/output of the LSTM and at the MLP hidden layer for better generalization. Finally, the model was optimized with respect to the cross-entropy loss. A further challenge was the length of each dialog: the average number of clauses in a training dialog was 817.8, which made it computationally intractable to back-propagate through the entire sequence. Therefore, truncated backpropagation through time (TBPTT) (BIBREF14) was used, by unrolling the network for 20 steps; the final LSTM state of each batch was fed into the next batch as its initial state.

We observed that the Global-Context RNN with 2 LSTM layers outperformed the other models, as shown in Table 2. First, comparing the logistic regression model with our best model, the result indicates the strong predictive power of long-term temporal contextual information for the task of detecting social norm violation in dialog. On the other hand, the Local-Context RNN model did not achieve a significant improvement in overall performance relative to logistic regression, which means that the clause representation learned through the training process is less competitive than hand-crafted features inspired by linguistic knowledge. One potential reason for this result could be an insufficient amount of training data for learning a generic clause representation.

In this work, we began by indicating our interest in quantitatively measuring the contribution of long-term temporal contextual information to detecting social norm violation in discourse. We then leveraged the power of recurrent neural networks for modeling long-term temporal dependencies. Inspired by hand-crafted multimodal features derived from qualitative and quantitative analysis in earlier empirical studies, we developed a Global-Context RNN model to detect social norm violation in human dialog. This model will play a prime role in building socially-aware agents that are capable of understanding the interpersonal dynamics that unfold in interaction, which is in turn essential to better adapting to the interpersonal relationship felt by their users. Thus, to serve this goal, our future work will build a generative model of social norm violation, which will make an agent act towards more realistic human behavior understanding, reasoning, and generation. We have begun to model those aspects of human-human interaction that are not only helpful for human-agent collaboration, but also sustain aspects of what we cherish most in being human.
Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge

Semantic parsing is the task of mapping a phrase in natural language onto a formal query in some fixed schema, which can then be executed against a knowledge base (KB) (BIBREF0, BIBREF1). For example, the phrase "Who is the president of the United States?" might be mapped onto the query $\lambda x.\ \textsc{government/president\_of}(x, \textsc{USA})$, which, when executed against Freebase (BIBREF2), returns $\textsc{Barack Obama}$. By mapping phrases to executable statements, semantic parsers can leverage large, curated sources of knowledge to answer questions (BIBREF3).

This benefit comes with an inherent limitation, however: semantic parsers can only produce executable statements within their manually-produced schema. There is no query against Freebase that can answer questions like "Who are the Democratic front-runners in the US election?", as Freebase does not encode information about front-runners. Semantic parsers trained for Freebase fail on these kinds of questions.

To overcome this limitation, recent work has proposed methods for open vocabulary semantic parsing, which replace a formal KB with a probabilistic database learned from a text corpus. In these methods, language is mapped onto queries with predicates derived directly from the text itself (BIBREF4, BIBREF5). For instance, the question above might be mapped to $\lambda x.\ \textit{president\_of}(x, \textsc{USA})$. This query is not executable against any KB, however, and so open vocabulary semantic parsers must learn execution models for the predicates found in the text. They do this with a distributional approach similar to word embedding methods, giving them broad coverage, but lacking access to the large, curated KBs available to traditional semantic parsers.

Prior work in semantic parsing, then, has either had direct access to the information in a knowledge base, or broad coverage over all of natural language using the information in a large corpus, but not both. In this work, we show how to combine these two approaches by incorporating KB information into open vocabulary semantic parsing models. Our key insight is that formal KB queries can be converted into features that can be added to the learned execution models of open vocabulary semantic parsers. This conversion allows open vocabulary models to use the KB fact $\textsc{government/president\_of}(\textsc{BarackObama}, \textsc{USA})$ when scoring $\textit{president\_of}(\textsc{BarackObama}, \textsc{USA})$, without requiring the model to map the language onto a single formal statement. Crucially, this featurization also allows the model to use these KB facts even when they only provide partial information about the language being modeled. For example, knowing that an entity is a $\textsc{politician}$ is very helpful information for deciding whether that entity is a front-runner. Our approach, outlined in Figure 1, effectively learns the meaning of a word as a distributional vector plus a weighted combination of Freebase queries, a considerably more expressive representation than those used by prior work.

While this combination is the main contribution of our work, we also present some small improvements that allow open vocabulary semantic parsing models to make better use of KB information when it is available: improving the logical forms generated by the semantic parser, and employing a simple technique from related work for generating candidate entities from the KB. We demonstrate our approach on the task of answering open-domain fill-in-the-blank natural language questions.
By giving open vocabulary semantic parsers direct access to KB information, we improve mean average precision on this task by over 120%.

In this section, we briefly describe the current state-of-the-art model for open vocabulary semantic parsing, introduced by Krishnamurthy and Mitchell (krishnamurthy2015semparseopenvocabulary). Instead of mapping text to Freebase queries, as done by a traditional semantic parser, their method parses text to a surface logical form with predicates derived directly from the words in the text (see Figure 1). Next, a distribution over denotations for each predicate is learned using a matrix factorization approach similar to that of Riedel et al. (riedel2013mfuniversalschema). This distribution is concisely represented using a probabilistic database, which also enables efficient probabilistic execution of logical form queries.

The matrix factorization has two sets of parameters: each category or relation has a learned k-dimensional embedding $\theta$, and each entity or entity pair has a learned k-dimensional embedding $\phi$. The probability assigned to a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is given by:

$p(c(e)) = \sigma(\theta_c^T \phi_e)$

$p(r(e_1, e_2)) = \sigma(\theta_r^T \phi_{(e_1, e_2)})$

The probability of a predicate instance is the sigmoided inner product of the corresponding predicate and entity embeddings. Predicates with nearby embeddings will have similar distributions over the entities in their denotation. The parameters $\theta$ and $\phi$ are learned using a query ranking objective that optimizes them to rank entities observed in the denotation of a logical form above unobserved entities. Given the trained predicate and entity parameters, the system is capable of efficiently computing the marginal probability that an entity is an element of a logical form's denotation, using approximate inference algorithms for probabilistic databases.

The model presented in this section is purely distributional, with predicate and entity models that draw only on co-occurrence information found in a corpus. In the following sections, we show how to augment this model with information contained in large, curated KBs such as Freebase.

Our key insight is that the executable queries used by traditional semantic parsers can be converted into features that provide KB information to the execution models of open vocabulary semantic parsers. Here we show how this is done. Traditional semantic parsers map words onto distributions over executable queries, select one to execute, and return sets of entities or entity pairs from a KB as a result. Instead of executing a single query, we can simply execute all possible queries and use an entity's (or entity pair's) membership in each result set as a feature in our predicate models.

There are two problems with this approach: (1) the set of all possible queries is intractably large, so we need a mechanism, similar to a semantic parser's lexicon, to select a small set of queries for each word; and (2) executing hundreds or thousands of queries at runtime for each predicate and entity is not computationally tractable. To solve these problems, we use a graph-based technique called subgraph feature extraction (SFE) (BIBREF6). SFE is a technique for generating feature matrices over node pairs in graphs with labeled edges. When the graph corresponds to a formal KB such as Freebase, the features generated by SFE are isomorphic to statements in the KB schema (BIBREF7). This means that we can use SFE to generate a feature vector for each entity (or entity pair) which succinctly captures the set of all statements in whose denotations the entity (or entity pair) appears. Using this feature vector as part of the semantic parser's entity models solves problem (2) above, and performing feature selection for each predicate solves problem (1).

Some example features extracted by SFE are shown in Figure 2. For entity pairs, these features include the sequences of edges (or paths) connecting the nodes corresponding to the entity pair. For entities, these features include the set of paths connected to the node, optionally including the node at the end of the path. Note the correspondence between these features and Freebase queries: the path $\langle \textsc{designed} \rightarrow \textsc{located\_in} \rangle$ can be executed as a query against Freebase, returning a set of (architect, location) entity pairs, where the architect designed a structure in the location. ($\textsc{Palladio}$, $\textsc{Italy}$) is one such entity pair, so this pair has a feature value of 1 for this query.

The feature vectors produced by SFE contain tens of millions of possible formal statements. Out of these tens of millions of formal statements, only a handful represent relevant Freebase queries for any particular predicate.
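To make the path features concrete, the following is a toy sketch of extracting one- and two-hop edge-sequence features for an entity pair from a small labeled graph; the graph, the breadth of the search, and the feature encoding are simplified stand-ins for SFE, not its actual implementation:

```python
# Toy KB graph: node -> list of (edge_label, neighbor) pairs (illustrative).
GRAPH = {
    "Palladio": [("designed", "VillaCapra")],
    "VillaCapra": [("located_in", "Italy")],
}

def path_features(source, target, graph, max_hops=2):
    """Edge-label sequences along paths from source to target, up to max_hops."""
    features, frontier = set(), [(source, [])]
    for _ in range(max_hops):
        next_frontier = []
        for node, labels in frontier:
            for edge, neighbor in graph.get(node, []):
                path = labels + [edge]
                if neighbor == target:
                    features.add("<" + " -> ".join(path) + ">")
                next_frontier.append((neighbor, path))
        frontier = next_frontier
    return features

print(path_features("Palladio", "Italy", GRAPH))  # {'<designed -> located_in>'}
```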
We therefore select a small number of statements to consider for each learned predicate in the open vocabulary semantic parser. We select features by first summing the entity and entity pair feature vectors seen with each predicate in the training data. For example, the phrase "Italian architect Andrea Palladio" is considered a positive training example for the predicate instances $\textit{architect}(\textsc{Palladio})$ and $\textit{architect\_N/N}(\textsc{Italy}, \textsc{Palladio})$. We add the feature vectors for $\textsc{Palladio}$ and ($\textsc{Italy}$, $\textsc{Palladio}$) to the feature counts for the predicates $\textit{architect}$ and $\textit{architect\_N/N}$, respectively. This gives a set of counts $\textsc{count}(\pi)$, $\textsc{count}(f)$, and $\textsc{count}(\pi \wedge f)$, for each predicate $\pi$ and feature $f$. The features are then ranked by PMI for each predicate, by computing $\frac{\textsc{count}(\pi \wedge f)}{\textsc{count}(\pi) \times \textsc{count}(f)}$. After removing low-frequency features, we pick the $k$ features with the highest PMI values for each predicate to use in our model.

Here we present our approach to incorporating KB information into open vocabulary semantic parsers. Having described how we use SFE to generate features corresponding to statements in a formal schema, adding these features to the models described in Section "Subgraph feature extraction" is straightforward. We saw in Section "Subgraph feature extraction" that open vocabulary semantic parsers learn distributional vectors for each category, relation, entity, and entity pair. We augment these vectors with the feature vectors described in Section "Converting Freebase queries to features": each category and relation receives a weight $\omega$ for each selected Freebase query, and each entity and entity pair has an associated feature vector $\psi$. The truth probability of a category instance $c(e)$ or relation instance $r(e_1, e_2)$ is thus given by:
$p(c(e)) = \sigma(\theta_c^T \phi_e + \omega_c^T \psi_c(e))$

$p(r(e_1, e_2)) = \sigma(\theta_r^T \phi_{(e_1, e_2)} + \omega_r^T \psi_r(e_1, e_2))$
In these equations theta and phi are learned predicate and entity embeddings as described in Section Subgraph feature extraction The second term in the sum represents our new features and their learned weights psi ce and psi re1 e2 are SFE feature vectors for each entity and entity pair a different set of features is chosen for each predicate c and r as described in Section Making full use of KB information omega c and omega r are learned weights for these features In our model there are now three sets of parameters to be learned 1 theta lowdimensional distributional vectors trained for each predicate 2 phi lowdimensional distributional vectors trained for each entity and entity pair and 3 omega weights associated with the selected formal SFE features for each predicate All of these parameters are optimized jointly using the same method described in Section Subgraph feature extraction Note here that each SFE feature corresponds to a query over the formal schema defining a set of entities or entity pairs The associated feature weight measures the likelihood that an entity in this set is also in the denotation of the surface predicate Our models include many such features for each surface predicate effectively mapping each surface predicate onto a weighted combination of Freebase queries In addition to improving predicate models as just described adding KB information to open vocabulary semantic parsers suggests two other simple improvements 1 using more specific logical forms and 2 generating candidate entities from the KB Krishnamurthy and Mitchell krishnamurthy2015semparseopenvocabulary generate logical forms from natural language statements by computing a syntactic CCG parse then applying a collection of rules to produce logical forms However their logical form analyses do not model nounmediated relations well For example given the phrase Italian architect Andrea Palladio their systems logical form would include the relation textit NNtextsc Italy
textsc Palladio Here the textit NN predicate represents a generic noun modifier relation however this relation is too vague for the predicate model to accurately learn its denotation A similar problem occurs with prepositions and possessives eg it is similarly hard to learn the denotation of the predicate textit of Our system improves the analysis of nounmediated relations by simply including the noun in the predicate name In the architect example above our system produces the relation textit architectNN It does this by concatenating all intervening noun modifiers between two entity mentions and including them in the predicate name for example Illinois attorney general Lisa Madigan produces the predicate textit attorneygeneralNN We similarly improve the analyses of prepositions and possessives to include the head noun For example Barack Obama president of the US produces the predicate instance textit presidentoftextsc Barack Obama textsc US and Rome Italys capital produces the predicate textit prime scapital This process generates more specific predicates that more closely align with the KB facts that we make available to the predicate models A key benefit of our predicate models is that they are able to assign scores to entity pairs that were never seen in the training data Distributional models have no learned vectors for these entity pairs and therefore assume pre1e2 0 for unseen entity pairs e1e2 This limits the recall of these models when applied to question answering as entity pairs will not have been observed for many correct but rare entity answers In contrast because our models have access to a large KB the formal component of the model can always give a score to any entity pair in the KB This allows our model to considerably improve question answering performance on rare entities It would be computationally intractable to consider all Freebase entities as answers to queries and so we use a simple candidate entity generation technique to consider only a small set of likely entities for a given query We first find all entities in the query and consider as candidates any entity that has either been seen at training time with a query entity or is directly connected to a query entity in Freebase This candidate entity generation is common practice for recent question answering models over Freebase BIBREF8 though for the reasons stated above it has not been used previously in open vocabulary semantic parsing models We evaluate our openvocabulary semantic parser on a fillintheblank natural language query task Each test example is a natural language phrase containing at least two Freebase entities one of which is held out The system must propose a ranked list of Freebase entities to fill in the blank left by the held out entity and the predicted entities are then judged manually for correctness We compare our proposed models which combine distributional and formal elements with a purely distributional baseline from prior work All of the data and code used in these experiments is available at httpgithubcomallenaiopenvocabsemparse Much recent work on semantic parsing has been evaluated using the WebQuestions dataset BIBREF3 This dataset is not suitable for evaluating our model because it was filtered to only questions that are mappable to Freebase queries In contrast our focus is on language that is not directly mappable to Freebase We thus use the dataset introduced by Krishnamurthy and Mitchell krishnamurthy2015semparseopenvocabulary which consists of the ClueWeb09 web corpus along with 
For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms (BIBREF10). This produced 2.1m predicate instances involving 142k entity pairs and 184k entities. After removing infrequently-seen predicates (seen fewer than 6 times), there were 25k categories and 4.2k relations. We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data, from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. For a final evaluation, we generated another similar test set from a different held-out section of ClueWeb, in the same fashion as done by Krishnamurthy and Mitchell. This final test set contains 307 queries. We compare three models in our experiments: (1) the distributional model of Krishnamurthy and Mitchell, described in Section "Subgraph feature extraction", which is the current state-of-the-art method for open vocabulary semantic parsing; (2) a formal model (new to this work), where the distributional parameters $\theta$ and $\phi$ in Section "Combined predicate models" are fixed at zero; and (3) the combined model described in Section "Combined predicate models" (also new to this work). In each of these models, we used vectors of size 300 for all embeddings. Except where noted, all experiments use our modified logical forms (Section "Evaluation") and our entity proposal mechanism (Section "Related work"). We do not compare against any traditional semantic parsers, as more than half of the questions in our dataset are not answerable by Freebase queries, and so are out of scope for those parsers (BIBREF5). Given a fill-in-the-blank query such as "Italian architect ___", each system produces a ranked list of 100 candidate entities. To compare the output of the systems, we follow a pooled evaluation protocol commonly used in relation extraction and information retrieval (BIBREF11, BIBREF12). We take the top 30 predictions from each system and manually annotate whether they are correct, and use those annotations to compute the average precision (AP) and reciprocal rank (RR) of each system on the query. Average precision is defined as $\frac{1}{m}\sum_{k=1}^{m} \mathrm{Prec}(k) \times \mathrm{Correct}(k)$, where $\mathrm{Prec}(k)$ is the precision at rank $k$, $\mathrm{Correct}(k)$ is an indicator function for whether the $k$-th answer is correct, and $m$ is the number of returned answers (up to 100 in this evaluation). AP is equivalent to calculating the area under a precision-recall curve. Reciprocal rank is computed by first finding the rank $r$ of the first correct prediction made by a system; reciprocal rank is then $\frac{1}{r}$, ranging from 1 (if the first prediction is correct) to 0 (if there is no correct answer returned). In the tables below, we report mean average precision (MAP) and mean reciprocal rank (MRR), averaged over all of the queries in the test set. We also report a weighted version of MAP, where the AP of each query is scaled by the number of annotated correct answers to the query (shown as W-MAP in the tables for space considerations). We first show the effect of the new logical forms introduced in Section "Evaluation". As can be seen in Table 1, with our improved logical forms, all models are better able to capture the semantics of language. This improvement is most pronounced in the formal models, which have more capacity to get specific features from Freebase with the new logical forms. As our logical forms give all models better performance, the remaining experiments we present all use these logical forms.
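As a concrete rendering of the AP and RR definitions above, the following sketch computes both metrics from a ranked list of manual correctness judgments (1 = correct, 0 = incorrect). It follows the formulas in the text rather than any released evaluation code.

def average_precision(judgments):
    # judgments: 0/1 flags for the m returned answers, in rank order.
    m = len(judgments)
    if m == 0:
        return 0.0
    correct_so_far, total = 0, 0.0
    for k, correct in enumerate(judgments, start=1):
        if correct:
            correct_so_far += 1
            total += correct_so_far / k  # Prec(k) * Correct(k)
    return total / m                     # averaged over the m returned answers

def reciprocal_rank(judgments):
    for k, correct in enumerate(judgments, start=1):
        if correct:
            return 1.0 / k               # rank of the first correct prediction
    return 0.0                           # no correct answer returned

MAP, MRR, and the weighted MAP then follow by averaging (or weighting) these per-query values over the test set.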
We next show the improvement gained by using the simple candidate entity generation outlined in Section "Related work". By simply appending the list of connected entities in Freebase to the end of the rankings returned by the distributional model, MAP improves by 40% (see Table 2). The connectedness of an entity pair in Freebase is very informative, especially for rare entities that are not seen together during training. Table 3 shows a comparison between the semantic parsing models on the development set. As can be seen, the combined model significantly improves performance over prior work, giving a relative gain in weighted MAP of 29%. Table 4 shows that these improvements are consistent on the final test set as well. The performance improvement seen by the combined model is actually larger on this set, with gains on our metrics ranging from 50% to 87%. On both of these datasets, the difference in MAP between the combined model and the distributional model is statistically significant by a paired permutation test (p < 0.05). The differences between the combined model and the formal model, and between the formal model and the distributional model, are not statistically significant, as each method has certain kinds of queries that it performs well on. Only the combined model is able to consistently outperform the distributional model on all kinds of queries. Our model tends to outperform the distributional model on queries containing predicates with exact or partial correlates in Freebase. For example, our model obtains nearly perfect average precision on the queries "French newspaper" and "Israeli prime minister", both of which can be exactly expressed in Freebase. The top features for $\textit{newspaper}(x)$ all indicate that $x$ has type $\textsc{newspaper}$ in Freebase, and the top features for $\textit{newspaper\_N/N}(x, y)$ indicate that $y$ is a newspaper, and that $x$ is either the circulation area or the language of $y$. The model also performs well on queries with partial Freebase correlates, such as "Microsoft head honcho", "The United States' closest ally", and "Patriots linebacker", although with somewhat lower average precision. The high-weight features in these cases tend to provide useful hints even though there is no direct correlate; for example, the model learns that "honchos" are people, and that they tend to be CEOs and film producers. There are also some areas where our model can be improved. First, in some cases the edge sequence features used by the model are not expressive enough to identify the correct relation in Freebase. An example of this problem is the "linebacker" query above, where the features for $\textit{linebacker\_N/N}$ can capture which athletes play for which teams, but not the positions of those athletes. Second, our model can underperform on predicates with no close mapping to Freebase. An example where this problem occurs is the query "is a NASA mission". Third, there remains room to further improve the logical forms produced by the semantic parser, specifically for multiword expressions. One problem occurs with multiword noun modifiers: e.g., "Vice president Al Gore" is mapped to $\textit{vice}(\textsc{Al Gore}) \wedge \textit{president}(\textsc{Al Gore})$. Another problem is that there is no backoff with multiword relations. For example, the predicate $\textit{head\_honcho\_N/N}$ was never seen in the training data, so it is replaced with $\textit{unknown}$; however, it would be better to replace it with $\textit{honcho\_N/N}$, which was seen in the training data (a sketch of such a backoff follows). Finally, although using connected entities in Freebase as additional candidates during inference is helpful, it often over- or under-generates candidates.
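The backoff just suggested is straightforward to sketch. This is a hypothetical illustration of the proposed fix, not something implemented in the paper, and the underscore-separated predicate naming is an assumption.

def backoff_predicate(predicate, training_vocabulary):
    # Drop the leftmost modifier of a multiword predicate until a form seen in
    # training is found, e.g. head_honcho_N/N -> honcho_N/N.
    words = predicate.split("_")
    while len(words) > 2 and "_".join(words) not in training_vocabulary:
        words = words[1:]  # strip one leading modifier
    candidate = "_".join(words)
    return candidate if candidate in training_vocabulary else "unknown"

print(backoff_predicate("head_honcho_N/N", {"honcho_N/N"}))  # honcho_N/N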
A more tailored, per-query search process could improve performance. There is an extensive literature on building semantic parsers to answer questions against a KB (BIBREF1, BIBREF3, BIBREF13, BIBREF14). Some of this work has used surface (or ungrounded) logical forms as an intermediate representation, similar to our work (BIBREF15, BIBREF16, BIBREF8, BIBREF17). The main difference between our work and these techniques is that they map surface logical forms to a single executable Freebase query, while we learn execution models for the surface logical forms directly, using a weighted combination of Freebase queries as part of the model. None of these prior works can assign meaning to language that is not directly representable in the KB schema. Choi, Kwiatkowski, and Zettlemoyer (choi2015semanticparsingpartialontologies) presented an information extraction system that performs a semantic parse of open-domain text, recognizing when a predicate cannot be mapped to Freebase. However, while they recognize when a predicate is not mappable to Freebase, they do not attempt to learn execution models for those predicates, nor can they answer questions using those predicates. Yao and Van Durme (yao2014infoextractionfreebaseqa) and Dong et al. (dong2015freebaseqamccnn) proposed question answering models that use similar features to those used in this work. However, they did not produce semantic parses of language, instead using methods that are non-compositional and do not permit complex queries. Finally, learning probabilistic databases in an open vocabulary semantic parser has a strong connection with KB completion. In addition to SFE (BIBREF6), our work draws on work on embedding the entities and relations in a KB (BIBREF12, BIBREF18, BIBREF19, BIBREF20, BIBREF21), as well as work on graph-based methods for reasoning with KBs (BIBREF22, BIBREF23, BIBREF24, BIBREF25). Our combination of embedding methods with graph-based methods in this paper is suggestive of how one could combine the two in methods for KB completion; initial work exploring this direction has already been done by Toutanova and Chen (toutanova2015observedvslatentkbc). Prior work in semantic parsing has either leveraged large knowledge bases to answer questions, or used distributional techniques to gain broad coverage over all of natural language. In this paper, we have shown how to gain both of these benefits, by converting the queries generated by traditional semantic parsers into features which are then used in open vocabulary semantic parsing models. We presented a technique to do this conversion in a way that is scalable, using graph-based feature extraction methods. Our combined model achieved relative gains of over 50% in mean average precision and mean reciprocal rank versus a purely distributional approach. We also introduced a better mapping from surface text to logical forms, and a simple method for using a KB to find candidate entities during inference. Taken together, the methods introduced in this paper improved mean average precision on our task from 0.163 to 0.370, a 127% relative improvement over prior work. This work suggests a new direction for semantic parsing research. Existing semantic parsers map language to a single KB query, an approach that successfully leverages a KB's predicate instances, but is fundamentally limited by its schema. In contrast, our approach maps language to a weighted combination of queries plus a distributional component; this approach is capable of representing a much broader class of concepts while still using the KB
when it is helpful. Furthermore, it is capable of using the KB even when the meaning of the language cannot be exactly represented by a KB predicate, which is a common occurrence. We believe that this kind of approach could significantly expand the applicability of semantic parsing techniques to more complex domains, where the assumptions of traditional techniques are too limiting. We are actively exploring applying these techniques to science question answering (BIBREF26), for example, where existing KBs provide only partial coverage of the questions. </s>
<s>[INST] Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling [/INST] State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence. These sentence encoder components are typically trained directly for the target task at hand. This approach can be effective on data-rich tasks, and yields human performance on some narrowly defined benchmarks (BIBREF1, BIBREF2), but it is tenable only for the few NLP tasks with millions of examples of training data. This has prompted interest in pretraining for sentence encoding: there is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings (BIBREF3) and image encoders (BIBREF4). More concretely, four recent papers show that pretrained sentence encoders can yield very strong performance on NLP tasks. First, BIBREF5 show that a BiLSTM encoder from a neural machine translation (MT) system can be effectively reused elsewhere. BIBREF6, BIBREF0, and BIBREF7 show that various kinds of encoder pretrained in an unsupervised fashion through generative language modeling (LM) are effective as well. Each paper uses its own evaluation methods, though, making it unclear which pretraining task is most effective, or whether multiple pretraining tasks can be productively combined; in the related setting of sentence-to-vector encoding, multitask learning with multiple labeled datasets has yielded a robust state of the art (BIBREF8). This paper attempts to systematically address these questions. We train reusable sentence encoders on 17 different pretraining tasks, several simple baselines, and several combinations of these tasks, all using a single model architecture and procedure for pretraining and transfer, inspired by ELMo. We then evaluate each of these encoders on the nine target language understanding tasks in the GLUE benchmark (BIBREF9), yielding a total of 40 sentence encoders and 360 total trained models. We then measure correlation in performance across target tasks, and plot learning curves evaluating the effect of training data volume on each pretraining and target task. Looking to the results of this experiment, we find that language modeling is the most effective single pretraining task we study, and that multitask learning during pretraining can offer further gains and a new state-of-the-art among fixed sentence encoders. We also, however, find reasons to worry that ELMo-style pretraining, in which we pretrain a model and use it on target tasks with no further fine-tuning, is brittle and seriously limiting: (i) trivial baseline representations do nearly as well as the best pretrained encoders, and the margins between substantially different pretraining tasks can be extremely small; (ii) different target tasks differ dramatically in what kinds of pretraining they benefit most from, and multitask pretraining is not sufficient to circumvent this problem and offer general-purpose pretrained encoders. Work toward learning reusable sentence encoders can be traced back at least as far as the multitask model of BIBREF10, but has seen a recent surge in progress with the successes of CoVe (BIBREF5), ULMFit (BIBREF6), ELMo (BIBREF0), and the Transformer LM (BIBREF7).
However, each uses a different model and dataset from the others, so while these works serve as existence proofs that effective reusable sentence encoders are possible, they do not address the question of what task or tasks should be used to create them. The revival of interest in sentence encoder pretraining is recent enough that relatively little has been done to understand the relative merits of these models, though two exceptions stand out. In unpublished work, BIBREF11 offer an analysis of the relative strengths of translation and language modeling using a single architecture and training dataset. They find that encoders trained as language models reliably uncover the most syntactic structure, even when they are trained on a strict subset of the data used for a comparable translation model. BIBREF12 offer a deeper investigation of model design issues for ELMo, showing that all of the standard architectures for sentence encoding can be effectively pretrained with broadly similar performance, and that all learn reasonably good representations of the morphological and syntactic properties of sentences. There has been a great deal of work on sentence-to-vector encoding, a setting in which the pretrained encoder produces a fixed-size vector representation for each input sentence (BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17). These vectors are potentially useful for tasks that require fast similarity-based matching of sentences, but using them to replace sentence encoders trained in the conventional way on a given target text classification task does not reliably yield state-of-the-art performance on that task (BIBREF8). Multitask representation learning in NLP in general has been well studied, and again can be traced back at least as far as BIBREF10. For example, BIBREF18 show promising results from the combination of translation and parsing, BIBREF8 show the benefits of multitask learning in sentence-to-vector encoding, and BIBREF19 and BIBREF20 offer studies of when multitask learning is helpful for lower-level NLP tasks. Our main experiment compares encoders pretrained on a large number of tasks and task combinations, where a task is a dataset paired with an objective function. This section lists these tasks, which we select either to serve as baselines or because they have shown promise in outside prior work, especially prior work on sentence-to-vector encoding. Appendix "Additional Pretraining Task Details" includes additional details on how we implemented some of these tasks, and names tasks we evaluated but left out. We implement our models using the AllenNLP toolkit (BIBREF37), aiming to build the simplest architecture that could be reasonably expected to perform well on the target tasks under study; the design of the models roughly follows that used in the GLUE baselines and ELMo. Table 1 shows results on the GLUE dev set for all our pretrained encoders, each with and without the pretrained ELMo BiLSTM layers (marked E). The N/A baselines are untrained encoders with random initialization. The Single-Task baselines are aggregations of results from nine GLUE runs: the result in this row for a given GLUE task uses the encoder pretrained on only that task. For consistency with other runs, we treat the pretraining task and the target task as two separate tasks in all cases (including here), and give them separate task-specific parameters, despite the fact that they use identical data. We use S and C to distinguish the sequence-to-sequence and classification versions of the Reddit task, respectively. To comply with GLUE's limits on test set access, we evaluated only three of our pretrained encoders on test data.
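As an illustration of the pretrain-and-freeze transfer setup used throughout these experiments, here is a minimal PyTorch sketch of a target-task model whose pretrained BiLSTM encoder is kept fixed while only a small task-specific head is trained. The pooling choice and layer sizes are assumptions of the sketch, not the exact AllenNLP configuration used in the paper.

import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    def __init__(self, encoder: nn.LSTM, encoder_dim: int, num_labels: int):
        super().__init__()
        self.encoder = encoder
        for param in self.encoder.parameters():
            param.requires_grad = False                 # freeze the pretrained encoder
        self.head = nn.Linear(encoder_dim, num_labels)  # trained per target task

    def forward(self, embedded_tokens):                 # (batch, seq_len, input_dim)
        states, _ = self.encoder(embedded_tokens)
        pooled, _ = states.max(dim=1)                   # max-pool over time (an assumption)
        return self.head(pooled)

# Toy usage: a 2-layer BiLSTM over 300-d inputs, 3-way classification.
bilstm = nn.LSTM(300, 512, num_layers=2, bidirectional=True, batch_first=True)
model = FrozenEncoderClassifier(bilstm, encoder_dim=1024, num_labels=3)
logits = model(torch.randn(8, 20, 300))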
These reflect our best models with and without the use of the pretrained ELMo encoder, and with and without the use of GLUE data during pretraining. For discussion of our limited hyperparameter tuning, see above. For roughly comparable GLUE results in prior work, see BIBREF9 or https://www.gluebenchmark.com; we omit them here in the interest of space, as the limited size of a US Letter page prevents us from including these baselines in the table. As of writing, the best test result using a comparable frozen pretrained encoder is 68.9 from BIBREF9, for a model similar to our GLUE E multitask model, and the best overall result is 72.8 from BIBREF7, with a model that is fine-tuned in its entirety for each target task. While it was not feasible to run each setting multiple times, we estimate the variance of the GLUE score by re-running the random encoder and MNLI pretraining setups (with and without ELMo) with different random seeds. Across five runs, we recorded $\sigma = 0.4$ for the random encoder (N/A in the table) and $\sigma = 0.2$ for MNLI E. This variation is substantial, but not so high as to render results meaningless. For the explicitly adversarial WNLI dataset (BIBREF28), only one of our models reached even the most-frequent-class performance of 56.3; in computing average and test set performances, we replace model predictions with the most frequent label to simulate the better performance achievable by choosing not to model that task. Looking to other target tasks, the grammar-related CoLA task benefits dramatically from ELMo pretraining: the best result without language model pretraining is less than half the result achieved with such pretraining. In contrast, the meaning-oriented textual similarity benchmark STS sees good results with several kinds of pretraining, but does not benefit substantially from the use of ELMo. Comparing pretraining tasks in isolation without ELMo, language modeling performs best, followed by MNLI. The remaining pretraining tasks yield performance near that of the random baseline: even when training directly on each target task (Single-Task in the table), we get less than a one-point gain over this simple baseline. Adding ELMo yielded improvements in performance across all pretraining tasks. MNLI and English-German translation perform best in this setting, with SkipThought, Reddit classification, and DisSent also outperforming the ELMo-augmented random baseline. With ELMo, a multitask model performs best, but without it, all three multitask models are tied or outperformed by models trained on one of their constituent tasks, suggesting that our approach to multitask learning is not reliably able to produce models that productively use the knowledge taught by each training task. However, of the two non-ELMo models that perform best on the development data, the multitask model generalizes better than the single-task model on test data for tasks like STS, where the test set contains new out-of-domain data. This paper presents a systematic comparison of tasks and task combinations for the pretraining of sentence-level BiLSTM encoders like those seen in ELMo and CoVe. With 40 pretraining tasks and task combinations (not counting many more ruled out early) and nine target tasks, this represents a far more comprehensive study than any seen on this problem to date. Our chief positive results are perhaps unsurprising: language modeling works well as a pretraining task, and no other single task is consistently better. Multitask pretraining can produce results better than any single task can, and sets a new state-of-the-art among comparable models.
Target task performance continues to improve with the addition of more language model data, even at large scales, suggesting that further work scaling up language model pretraining is warranted. However, a closer look at our results suggests that the pretrain-and-freeze paradigm that underlies ELMo and CoVe might not be a sound platform for future work: some trivial baselines do strikingly well, the margins between pretraining tasks are small, and some pretraining configurations (such as MNLI E) yield better performance with less data. This suggests that we may be nearing an upper bound on the performance that can be reached with methods like these. In addition, different tasks benefit from different forms of pretraining to a striking degree, with correlations between target tasks often low or negative, and multitask pretraining fails to reliably produce models better than their best individual components. This suggests that, if truly general-purpose sentence encoders are possible, our current methods cannot produce them. While further work on language modeling seems straightforward and worthwhile, the authors of this paper believe that the future of this line of work will require a better understanding of the ways in which neural network target task models can benefit from outside knowledge and data, and new methods for pretraining and transfer learning to allow them to do so. See Section UID18 for general comments on hyperparameter tuning. Our multitask learning experiments have three somewhat distinctive properties: (i) we mix tasks with very different amounts of training data, at the extreme, under 1,000 examples for WNLI and over 1,000,000,000 examples from LM BWB; (ii) our goal is to optimize the quality of the shared encoder, not the performance of any one of the tasks in the multitask mix; (iii) we mix a relatively large number of tasks, up to eighteen at once in some conditions. These conditions make it challenging, but important, to avoid overfitting or underfitting any of our tasks. Relatively little work has been done on this problem, so we conduct a small experiment here. All our experiments use the basic paradigm of randomly sampling a new task to train on at each step, and we experiment with two hyperparameters that can be used to control over- and underfitting: the probability with which we sample each task, and the weight with which we scale the loss for each task (see the sketch below). Our experiments follow the setup in Appendix "Hyperparameters and Optimization Details" and do not use the ELMo BiLSTM. Table 6 below shows results on the four coarse-grained categories of the GLUE diagnostic set for all our pretraining experiments. This set consists of about 1,000 expert-constructed examples in NLI format, meant to isolate a range of relevant phenomena; results use the target-task classifier trained on the MNLI training set. No model achieves performance anywhere close to human level, suggesting either that none of our pretrained models extract features that are suitable for robust reasoning over text, or that the MNLI training set and the MNLI target-task model are not able to exploit any such features that exist. See Section "Analysis and Discussion" for further discussion. While no model achieves near-human performance, the use of ELMo and other forms of unsupervised pretraining appears to be helpful on examples that highlight world knowledge and lexical-semantic knowledge, and less so on examples that highlight complex logical reasoning patterns or alternations in sentence structure. This relative weakness on sentence structure is somewhat surprising, given the finding in BIBREF11 that language model pretraining is helpful for tasks involving sentence structure.
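As referenced above, here is a minimal sketch of the task-sampling and loss-scaling scheme used in the multitask experiments; the data iterators and training callback are hypothetical stand-ins for whatever training loop is in use.

import random

def multitask_pretraining(tasks, sampling_probs, loss_weights, num_steps, train_step):
    # tasks: dict mapping task name -> batch iterator (hypothetical stand-in).
    # sampling_probs[name]: probability of drawing that task at each step.
    # loss_weights[name]: scale applied to that task's loss before the update.
    names = list(tasks)
    weights = [sampling_probs[name] for name in names]
    for _ in range(num_steps):
        name = random.choices(names, weights=weights, k=1)[0]
        batch = next(tasks[name])
        train_step(name, batch, loss_scale=loss_weights[name])

# Example: sampling proportional to dataset size, with uniform loss weights
# (order-of-magnitude sizes taken from the text above).
sizes = {"wnli": 1_000, "lm_bwb": 1_000_000_000}
total = sum(sizes.values())
sampling_probs = {name: size / total for name, size in sizes.items()}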
</s>
<s>[INST] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network [/INST] Large-scale knowledge bases (KBs) such as YAGO (BIBREF0), Freebase (BIBREF1), and DBpedia (BIBREF2) are usually databases of triples representing the relationships between entities in the form of fact (head entity, relation, tail entity), denoted as (h, r, t), e.g., (Melbourne, cityOf, Australia). These KBs are useful resources in many applications, such as semantic searching and ranking (BIBREF3, BIBREF4, BIBREF5), question answering (BIBREF6, BIBREF7), and machine reading (BIBREF8). However, the KBs are still incomplete, i.e., missing a lot of valid triples (BIBREF9, BIBREF10). Therefore, much research work has been devoted to knowledge base completion, or link prediction, to predict whether a triple (h, r, t) is valid or not (BIBREF11). Many embedding models have been proposed to learn vector or matrix representations for entities and relations, obtaining state-of-the-art (SOTA) link prediction results (BIBREF12). In these embedding models, valid triples obtain lower implausibility scores than invalid triples. Let us take the well-known embedding model TransE (BIBREF13) as an example. In TransE, entities and relations are represented by $k$-dimensional vector embeddings. TransE employs a transitional characteristic to model relationships between entities, in which it assumes that if (h, r, t) is a valid fact, the embedding of head entity $h$ plus the embedding of relation $r$ should be close to the embedding of tail entity $t$, i.e., $v_h + v_r \approx v_t$ (here $v_h$, $v_r$, and $v_t$ are embeddings of $h$, $r$, and $t$, respectively). That is, the TransE score of the valid triple (h, r, t) should be close to 0, and smaller than the score of an invalid triple. The transitional characteristic in TransE also implies the global relationships among the same-dimensional entries of $v_h$, $v_r$, and $v_t$. Other transition-based models extend TransE to additionally use projection vectors or matrices to translate head and tail embeddings into the relation vector space, such as TransH (BIBREF14), TransR (BIBREF15), TransD (BIBREF16), STransE (BIBREF17), and TranSparse (BIBREF18). Furthermore, DISTMULT (BIBREF19) and ComplEx (BIBREF20) use a trilinear dot product to compute the score for each triple. Recent research has shown that using relation paths between entities in the KBs could help to get contextual information for improving KB completion performance (BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25); see other embedding models for KB completion in BIBREF26. Recently, convolutional neural networks (CNNs), originally designed for computer vision (BIBREF27), have received significant research attention in natural language processing (BIBREF28, BIBREF29). A CNN learns non-linear features to capture complex relationships with a remarkably smaller number of parameters compared to fully connected neural networks. Inspired by this success, BIBREF30 proposed ConvE, the first model applying CNN to the KB completion task. In ConvE, only $v_h$ and $v_r$ are reshaped and then concatenated into an input matrix, which is fed to the convolution layer. Different filters of the same $3 \times 3$ shape are operated over the input matrix to output feature map tensors. These feature map tensors are then vectorized and mapped into a vector via a linear transformation, and this vector is computed with $v_t$ via a dot product to return a score for (h, r, t) (see a formal definition of the ConvE score function in Table 1). It is worth noting that ConvE focuses on the local relationships among different-dimensional entries in each of $v_h$ or $v_r$; i.e., ConvE does not observe the global relationships among the same-dimensional entries of an embedding triple ($v_h$, $v_r$, $v_t$), so it ignores the transitional characteristic in transition-based models, which is one of the most useful intuitions for the task.
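A minimal NumPy rendering of the TransE score just described; lower scores indicate more plausible triples.

import numpy as np

def transe_score(v_h, v_r, v_t, ord=1):
    # ||v_h + v_r - v_t||: close to 0 for a valid triple (h, r, t), larger for
    # an invalid one. ord selects the l1- or l2-norm.
    return np.linalg.norm(v_h + v_r - v_t, ord=ord)

# Toy usage with k = 4 dimensional embeddings.
v_h, v_r = np.array([1., 0., 2., 1.]), np.array([0., 1., -1., 0.])
print(transe_score(v_h, v_r, v_h + v_r))  # 0.0: a perfectly "valid" triple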
In this paper, we present ConvKB, an embedding model which proposes a novel use of CNN for the KB completion task. In ConvKB, each entity or relation is associated with a unique $k$-dimensional embedding. Let $v_h$, $v_r$, and $v_t$ denote the $k$-dimensional embeddings of $h$, $r$, and $t$, respectively. For each triple (h, r, t), the corresponding triple of $k$-dimensional embeddings ($v_h$, $v_r$, $v_t$) is represented as a $k \times 3$ input matrix. This input matrix is fed to the convolution layer, where different filters of the same $1 \times 3$ shape are used to extract the global relationships among the same-dimensional entries of the embedding triple. That is, these filters are repeatedly operated over every row of the input matrix to produce different feature maps. The feature maps are concatenated into a single feature vector, which is then computed with a weight vector via a dot product to produce a score for the triple (h, r, t). This score is used to infer whether the triple (h, r, t) is valid or not. Our contributions in this paper are as follows. A knowledge base $\mathcal{G}$ is a collection of valid factual triples in the form of (head entity, relation, tail entity), denoted as (h, r, t), such that $h, t \in \mathcal{E}$ and $r \in \mathcal{R}$, where $\mathcal{E}$ is a set of entities and $\mathcal{R}$ is a set of relations. Embedding models aim to define a score function $f$ giving an implausibility score for each triple (h, r, t), such that valid triples receive lower scores than invalid triples; Table 1 presents the score functions of previous SOTA models. We denote the dimensionality of embeddings by $k$, such that each embedding triple ($v_h$, $v_r$, $v_t$) is viewed as a matrix $A = [v_h, v_r, v_t] \in \mathbb{R}^{k \times 3}$, and $A_i \in \mathbb{R}^{1 \times 3}$ denotes the $i$-th row of $A$. Suppose that we use a filter $\omega \in \mathbb{R}^{1 \times 3}$ in the convolution layer. $\omega$ is aimed not only at examining the global relationships between the same-dimensional entries of the embedding triple ($v_h$, $v_r$, $v_t$), but also at generalizing the transitional characteristics of the transition-based models. $\omega$ is repeatedly operated over every row of $A$ to finally generate a feature map $v = [v_1, v_2, ..., v_k]$, as: $v_i = g(\omega \cdot A_i + b)$, where $b \in \mathbb{R}$ is a bias term and $g$ is some activation function, such as ReLU. Our ConvKB uses different filters in $\mathbb{R}^{1 \times 3}$ to generate different feature maps. Let $\Omega$ and $\tau$ denote the set of filters and the number of filters, respectively, i.e., $\tau = |\Omega|$, resulting in $\tau$ feature maps. These $\tau$ feature maps are concatenated into a single vector in $\mathbb{R}^{\tau k \times 1}$, which is then computed with a weight vector $w \in \mathbb{R}^{\tau k \times 1}$ via a dot product to give a score for the triple (h, r, t). Figure 1 illustrates the computation process in ConvKB. Formally, we define the ConvKB score function $f$ as follows: $f(h, r, t) = \mathsf{concat}\left(g\left([v_h, v_r, v_t] \ast \Omega\right)\right) \cdot w$, where $\Omega$ and $w$ are shared parameters, independent of $h$, $r$, and $t$; $\ast$ denotes a convolution operator; and $\mathsf{concat}$ denotes a concatenation operator. If we only use one filter $\omega$ (i.e., $\tau = 1$) with a fixed bias term $b = 0$ and the activation function $g(x) = |x|$ or $g(x) = x^2$, and fix $\omega = [1, 1, -1]$ and $w = \mathbf{1}$ during training, ConvKB reduces to the plain TransE model (BIBREF13). So our ConvKB model can be viewed as an extension of TransE to further model global relationships. We use the Adam optimizer (BIBREF32) to train ConvKB by minimizing the loss function
$\mathcal{L}$ (BIBREF20) with $L_2$ regularization on the weight vector $w$ of the model:

$$\mathcal{L} = \sum_{(h,r,t) \in \{\mathcal{G} \cup \mathcal{G}'\}} \log\left(1 + \exp\left(l_{(h,r,t)} \cdot f(h,r,t)\right)\right) + \frac{\lambda}{2}\Vert w \Vert_2^2$$

in which $l_{(h,r,t)} = 1$ for $(h,r,t) \in \mathcal{G}$ and $l_{(h,r,t)} = -1$ for $(h,r,t) \in \mathcal{G}'$, where $\mathcal{G}'$ is a collection of invalid triples generated by corrupting valid triples in $\mathcal{G}$.
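Putting the above together, the following NumPy sketch implements the ConvKB score function and the per-triple loss term as we read the equations in the text; it is not the authors' released implementation.

import numpy as np

def convkb_score(v_h, v_r, v_t, filters, w, b=0.0):
    # f(h, r, t) = concat(g([v_h, v_r, v_t] * Omega)) . w, with g = ReLU.
    # filters: array of shape (tau, 3), one 1x3 filter per row.
    # w: weight vector of shape (tau * k,).
    A = np.stack([v_h, v_r, v_t], axis=1)       # the k x 3 input matrix
    maps = [np.maximum(A @ omega + b, 0.0)      # v_i = g(omega . A_i + b)
            for omega in filters]               # each feature map has k entries
    return float(np.concatenate(maps) @ w)

def triple_loss(score, label):
    # label = +1 for a valid triple in G, -1 for a corrupted triple in G'.
    return np.log1p(np.exp(label * score))

# Toy usage: tau = 2 filters over k = 4 dimensional embeddings.
rng = np.random.default_rng(1)
k, tau = 4, 2
filters = np.array([[1.0, 1.0, -1.0], [0.5, -0.5, 1.0]])
w = rng.normal(size=tau * k)
s = convkb_score(rng.normal(size=k), rng.normal(size=k), rng.normal(size=k), filters, w)

Note that with $\tau = 1$, $b = 0$, the single filter $[1, 1, -1]$, $w$ fixed to all ones, and ReLU swapped for the absolute value, convkb_score reduces to the $l_1$-norm TransE score, matching the reduction stated above; the $\frac{\lambda}{2}\Vert w \Vert_2^2$ regularizer is added once over the whole objective rather than per triple.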
We evaluate ConvKB on two benchmark datasets: WN18RR (BIBREF30) and FB15k-237 (BIBREF31). WN18RR and FB15k-237 are, respectively, subsets of the two common datasets WN18 and FB15k (BIBREF13). As noted by BIBREF31, WN18 and FB15k are easy because they contain many reversible relations, and knowing that relations are reversible allows us to easily predict the majority of test triples; e.g., state-of-the-art results on both WN18 and FB15k are obtained by using a simple reversal rule, as shown in BIBREF30. Therefore, WN18RR and FB15k-237 were created so as not to suffer from this reversible-relation problem, making the knowledge base completion task more realistic. Table 2 presents the statistics of WN18RR and FB15k-237. In the KB completion or link prediction task (BIBREF13), the purpose is to predict a missing entity given a relation and another entity, i.e., inferring $h$ given $(r, t)$, or inferring $t$ given $(h, r)$. The results are calculated based on ranking the scores produced by the score function $f$ on test triples. Following BIBREF13, for each valid test triple (h, r, t), we replace either $h$ or $t$ by each of the other entities in $\mathcal{E}$ to create a set of corrupted triples. We use the "Filtered" setting protocol (BIBREF13), i.e., not taking any corrupted triples that appear in the KB into account. We rank the valid test triple and the corrupted triples in ascending order of their scores. We employ three common evaluation metrics: mean rank (MR), mean reciprocal rank (MRR), and Hits@10 (i.e., the proportion of the valid test triples ranking in the top 10 predictions). Lower MR, higher MRR, or higher Hits@10 indicate better performance. We use the common Bernoulli trick (BIBREF14, BIBREF15) to generate the head or tail entities when sampling invalid triples. We also use entity and relation embeddings produced by TransE to initialize entity and relation embeddings in ConvKB; we employ the TransE implementation available at https://github.com/datquocnguyen/STransE. We train TransE for 3,000 epochs, using a grid search of hyperparameters: the dimensionality of embeddings $k \in \{50, 100\}$, SGD learning rate $\in \{1e{-}4, 5e{-}4, 1e{-}3, 5e{-}3\}$, the $l_1$-norm or $l_2$-norm, and margin $\gamma \in \{1, 3, 5, 7\}$. The highest Hits@10 scores on the validation set are obtained when using the $l_1$-norm, a learning rate of 5e-4, $\gamma = 5$, and $k = 50$ for WN18RR, and when using the $l_1$-norm and $k = 100$, with the learning rate and margin $\gamma$ likewise selected from the grid above, for FB15k-237. To learn our model parameters, including entity and relation embeddings, filters $\omega$, and the weight vector $w$, we use Adam (BIBREF32) and select its initial learning rate $\in \{5e{-}6, 1e{-}5, 5e{-}5, 1e{-}4, 5e{-}4\}$. We use ReLU as the activation function $g$. We fix the batch size at 256 and set the $L_2$ regularizer $\lambda$ at 0.001 in our objective function. The filters $\omega$ are initialized either by a truncated normal distribution or by $[0.1, 0.1, -0.1]$. We select the number of filters $\tau \in \{50, 100, 200, 400, 500\}$. We run ConvKB up to 200 epochs and use the outputs from the last epoch for evaluation. The highest Hits@10 scores on the validation set are obtained when using $k = 50$ and the truncated normal distribution for filter initialization on WN18RR, and $k = 100$ and $[0.1, 0.1, -0.1]$ for filter initialization on FB15k-237, in each case with the number of filters and initial learning rate selected from the grids above. Table 3 compares the experimental results of our ConvKB model with previously published results using the same experimental setup.
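A small sketch of the "Filtered" ranking protocol for tail prediction (head prediction is symmetric, and omitted for brevity). Scores are implausibility values, so lower is better; ties here count against the valid triple, which is an arbitrary choice of this sketch.

def filtered_ranking_metrics(test_triples, entities, known_triples, score):
    # known_triples: set of all valid triples, used for filtering.
    # score: function (h, r, t) -> implausibility (lower = more plausible).
    ranks = []
    for h, r, t in test_triples:
        target = score(h, r, t)
        rank = 1
        for t_corrupt in entities:
            if t_corrupt == t or (h, r, t_corrupt) in known_triples:
                continue                  # filtered setting: skip known triples
            if score(h, r, t_corrupt) <= target:
                rank += 1                 # a corrupted triple outranks the target
        ranks.append(rank)
    n = len(ranks)
    mr = sum(ranks) / n
    mrr = sum(1.0 / rank for rank in ranks) / n
    hits10 = sum(rank <= 10 for rank in ranks) / n
    return mr, mrr, hits10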
Table 3 shows that ConvKB obtains the best MR and the highest Hits@10 scores on WN18RR, and the highest MRR and Hits@10 scores on FB15k-237. ConvKB does better than the closely related model TransE on both experimental datasets, especially on FB15k-237, where ConvKB gains significant improvements of 347 - 257 = 90 in MR (about a 26% relative improvement) and 0.396 - 0.294 = 0.102 in MRR (about a 34% relative improvement), and also obtains a 51.7 - 46.5 = 5.2% absolute improvement in Hits@10. Previous work shows that TransE obtains very competitive results (BIBREF21, BIBREF38, BIBREF20, BIBREF25). However, when comparing the CNN-based embedding model ConvE with other models, BIBREF30 did not experiment with TransE. We reconfirm previous findings that TransE is in fact a strong baseline model; e.g., TransE obtains better MR and Hits@10 than ConvE on WN18RR. ConvKB obtains better scores than ConvE on both datasets (except MRR on WN18RR and MR on FB15k-237), thus showing the usefulness of taking transitional characteristics into account. In particular, on FB15k-237, ConvKB achieves improvements of 0.394 - 0.316 = 0.078 in MRR (about a 25% relative improvement) and 51.7 - 49.1 = 2.6% in Hits@10, while both ConvKB and ConvE produce similar MR scores. ConvKB also obtains a 25% relatively higher MRR score than the relation path-based model KB-LRN on FB15k-237. In addition, ConvKB gives better Hits@10 than KB-LRN; however, KB-LRN gives better MR than ConvKB. We plan to extend ConvKB with relation path information to obtain better link prediction performance in future work. In this paper, we propose a novel embedding model, ConvKB, for the knowledge base completion task. ConvKB applies a convolutional neural network to explore the global relationships among the same-dimensional entries of the entity and relation embeddings, so that ConvKB generalizes the transitional characteristics in the transition-based embedding models. Experimental results show that our model ConvKB outperforms other state-of-the-art models on the two benchmark datasets WN18RR and FB15k-237. Our code is available at https://github.com/daiquocnguyen/ConvKB. We also plan to extend ConvKB to new applications where data can be formulated in the form of triples. For example, inspired by the work of BIBREF39 on search personalization, we could apply ConvKB to model user-oriented relationships between submitted queries and documents returned by search engines, i.e., modeling triple representations (query, user, document). This research was partially supported by the Australian Research Council (ARC) Discovery Grant Project DP160103934. </s>