id | pid | input | output
---|---|---|---
c383fa9170ae00a4a24a8e39358c38395c5f034b | c383fa9170ae00a4a24a8e39358c38395c5f034b_0 | Q: How do they know which words are content words?
Text: Introduction
All text has style, whether it be formal or informal, polite or aggressive, colloquial, persuasive, or even robotic. Despite the success of style transfer in image processing BIBREF0, BIBREF1, there has been limited progress in the text domain, where disentangling style from content is particularly difficult.
To date, most work in style transfer relies on the availability of meta-data, such as sentiment, authorship, or formality. While meta-data can provide insight into the style of a text, it often conflates style with content, limiting the ability to perform style transfer while preserving content. Generalizing style transfer requires separating style from the meaning of the text itself. The study of literary style can guide us. For example, in the digital humanities and its subfield of stylometry, content doesn't figure prominently in practical methods of discriminating authorship and genres, which can be thought of as style at the level of the individual and population, respectively. Rather, syntactic and functional constructions are the most salient features.
In this work, we turn to literary style as a test-bed for style transfer, and build on work from literature scholars using computational techniques for analysis. In particular we draw on stylometry: the use of surface-level features, often counts of function words, to discriminate between literary styles. Stylometry first saw success in attributing authorship to the disputed Federalist Papers BIBREF2, but has more recently been used by scholars to study topics such as the birth of genres BIBREF3 and the change of author styles over time BIBREF4. Function words are likely not the means by which writers intend to express style, but they appear to be downstream realizations of higher-level stylistic decisions.
We hypothesize that surface-level linguistic features, such as counts of personal pronouns, prepositions, and punctuation, are an excellent definition of literary style, as borne out by their use in the digital humanities, and our own style classification experiments. We propose a controllable neural encoder-decoder model in which these features are modelled explicitly as decoder feature embeddings. In training, the model learns to reconstruct a text using only the content words and the linguistic feature embeddings. We can then transfer arbitrary content words to a new style without parallel data by setting the low-level style feature embeddings to be indicative of the target style.
This paper makes the following contributions:
A formal model of style as a suite of controllable, low-level linguistic features that are independent of content.
An automatic evaluation showing that our model fools a style classifier 84% of the time.
A human evaluation with English literature experts, including recommendations for dealing with the entanglement of content with style.
Related Work ::: Style Transfer with Parallel Data
Following in the footsteps of machine translation, style transfer in text has seen success by using parallel data. BIBREF5 use modern translations of Shakespeare plays to build a modern-to-Shakespearean model. BIBREF6 compile parallel data for formal and informal sentences, allowing them to successfully use various machine translation techniques. While parallel data may work for very specific styles, the difficulty of finding parallel texts dramatically limits this approach.
Related Work ::: Style Transfer without Parallel Data
There has been a decent amount of work on this approach in the past few years BIBREF7, BIBREF8, mostly focusing on variations of an encoder-decoder framework in which style is modeled as a monolithic style embedding. The main obstacle is disentangling style from content, which remains a challenging problem.
Perhaps the most successful is BIBREF9, who use a de-noising autoencoder and back translation to learn style without parallel data. BIBREF10 outline the benefits of automatically extracting style, and suggest a formal weakness in using linguistic heuristics. In contrast, we believe that monolithic style embeddings don't capture the existing knowledge we have about style, and will struggle to disentangle content.
Related Work ::: Controlling Linguistic Features
Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11, BIBREF12. In each of these cases, the diversity in outputs is quite small given the constraints of the meaning representation, style is often constrained to interjections (like “yeah”), and there is no original style from which to transfer.
BIBREF13 investigate using stylistic parameters and content parameters to control text generation using a movie review dataset. Their stylistic parameters are created using word-level heuristics and they are successful in controlling these parameters in the outputs. Their success bodes well for our related approach in a style transfer setting, in which the content (not merely content parameters) is held fixed.
Related Work ::: Stylometry and the Digital Humanities
Style, in literary research, is anything but a stable concept, but it nonetheless has a long tradition of study in the digital humanities. In a remarkably early quantitative study of literature, BIBREF14 charts sentence-level stylistic attributes specific to a number of novelists. Half a century later, BIBREF15 builds on earlier work in information theory by BIBREF16, and defines a literary text as consisting of two “materials": “the vocabulary, and some structural properties, the style, of its author."
Beginning with BIBREF2, statistical approaches to style, or stylometry, join the already-heated debates over the authorship of literary works. A notable example of this is the “Delta" measure, which uses z-scores of function word frequencies BIBREF17. BIBREF18 find that Shakespeare added some material to a later edition of Thomas Kyd's The Spanish Tragedy, and that Christopher Marlowe collaborated with Shakespeare on Henry VI.
Models ::: Preliminary Classification Experiments
The stylometric research cited above suggests that the most frequently used words, e.g. function words, are most discriminating of authorship and literary style. We investigate these claims using three corpora that have distinctive styles in the literary community: gothic novels, philosophy books, and pulp science fiction, hereafter sci-fi.
We retrieve gothic novels and philosophy books from Project Gutenberg and pulp sci-fi from Internet Archive's Pulp Magazine Archive. We partition this corpus into train, validation, and test sets, the sizes of which can be found in Table TABREF12.
In order to validate the above claims, we train five different classifiers to predict the literary style of sentences from our corpus. Each classifier has gradually more content words replaced with part-of-speech (POS) tag placeholder tokens. The All model is trained on sentences with all proper nouns replaced by `PROPN'. The models Ablated N, Ablated NV, and Ablated NVA replace nouns, nouns & verbs, and nouns, verbs, & adjectives with the corresponding POS tag respectively. Finally, Content-only is trained on sentences with all words that are not tagged as NOUN, VERB, ADJ removed; the remaining words are not ablated.
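Below is a minimal sketch of how this POS ablation could be implemented. spaCy and its tag inventory are assumptions (the paper does not name its tagger), as is the choice to ablate proper nouns in every ablated variant; the snippet is meant only to illustrate the preprocessing, not to reproduce the authors' pipeline.

```python
# Illustrative sketch of the POS-ablation preprocessing (not the authors' code).
# spaCy is assumed as the tagger; whether proper nouns are also ablated in the
# Ablated-N/NV/NVA variants is an assumption.
import spacy

nlp = spacy.load("en_core_web_sm")

ABLATION_TAGS = {
    "All":         {"PROPN"},
    "Ablated N":   {"PROPN", "NOUN"},
    "Ablated NV":  {"PROPN", "NOUN", "VERB"},
    "Ablated NVA": {"PROPN", "NOUN", "VERB", "ADJ"},
}

def ablate(sentence: str, variant: str) -> str:
    """Replace the variant's content-word classes with their POS tag."""
    doc = nlp(sentence)
    if variant == "Content-only":
        # Keep only nouns, verbs, and adjectives; drop everything else.
        return " ".join(t.text for t in doc if t.pos_ in {"NOUN", "VERB", "ADJ"})
    tags = ABLATION_TAGS[variant]
    return " ".join(t.pos_ if t.pos_ in tags else t.text for t in doc)

print(ablate("Dracula wandered the dark castle.", "Ablated NVA"))
# e.g. "PROPN VERB the ADJ NOUN ."
```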
We train the classifiers on the training set, balancing the class distribution to make sure there are the same number of sentences from each style. Classifiers are trained using fastText BIBREF19, using tri-gram features with all other settings as default. table:classifiers shows the accuracies of the classifiers.
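A sketch of the corresponding classifier training with the fastText Python API is shown below; the file names and label layout are assumptions, and wordNgrams=3 mirrors the tri-gram features mentioned above with all other settings left at their defaults.

```python
# Sketch of classifier training with the fastText Python API (file names and
# label prefix are assumptions).
import fasttext

# Each line of train.txt looks like: "__label__gothic <ablated sentence>"
model = fasttext.train_supervised(input="train.txt", wordNgrams=3)

# Predict the style of a single (ablated) sentence.
labels, probs = model.predict("PROPN VERB the ADJ NOUN .")
print(labels[0], probs[0])

# Accuracy (precision@1) on the held-out test file.
n_examples, precision_at_1, recall_at_1 = model.test("test.txt")
print(f"test accuracy: {precision_at_1:.2%}")
```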
The styles are highly distinctive: the All classifier has an accuracy of 86%. Additionally, the Ablated NVA classifier is quite successful, reaching 75% accuracy even without access to any content words. The Content-only classifier is also quite successful, at 80% accuracy. This indicates that these stylistic genres are distinctive at both the content level and the syntactic level.
Models ::: Formal Model of Style
Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples.
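The following sketch shows how such heuristic control extraction might look. The word lists are illustrative stand-ins for the paper's pre-defined lists (not the actual lists), and the constituency parse is assumed to be available as a bracketed string, e.g. as produced by the Stanford Parser.

```python
# Sketch of heuristic control extraction (illustrative word lists only).
from collections import Counter

CONTROL_WORDLISTS = {
    "1stPer":      {"i", "me", "my", "mine", "myself"},
    "2ndPer":      {"you", "your", "yours", "yourself"},
    "conjunction": {"and", "or", "but", "nor"},
    "determiner":  {"the", "a", "an", "this", "that"},
    "simplePrep":  {"in", "on", "at", "of", "to", "with"},
    "punctuation": set(",;:!?.'\"-"),
}

def extract_controls(tokens, parse_str):
    """Return a dict of control counts for one tokenized sentence."""
    counts = Counter({name: 0 for name in CONTROL_WORDLISTS})
    for tok in tokens:
        t = tok.lower()
        for name, words in CONTROL_WORDLISTS.items():
            if t in words:
                counts[name] += 1
    # Syntactic control: number of SBAR non-terminals in the constituency parse.
    counts["SBAR"] = parse_str.count("(SBAR")
    return dict(counts)

tokens = ["I", "think", ",", "therefore", "I", "am", "."]
parse = "(ROOT (S (NP (PRP I)) (VP (VBP think))))"
print(extract_controls(tokens, parse))
```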
Models ::: Formal Model of Style ::: Reconstruction Task
Models are trained with a reconstruction task, in which a distorted version of a reference sentence is input and the goal is to output the original reference.
fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.
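A sketch of this input construction is given below; spaCy is assumed for tagging and lemmatization, and CONTROL_WORDS is a small placeholder standing in for the union of the control word lists sketched earlier.

```python
# Sketch of building the reconstruction input: control-list words are removed,
# and the remaining content words are paired with their lemmas and POS tags.
import spacy

nlp = spacy.load("en_core_web_sm")
CONTROL_WORDS = {"i", "my", "and", "but", "the", "a", "an",
                 "in", "on", "of", "to", "with", ",", ".", ";"}

def build_reconstruction_input(sentence: str) -> dict:
    doc = nlp(sentence)
    content = [t for t in doc if t.text.lower() not in CONTROL_WORDS]
    return {
        "word":       [t.text   for t in content],
        "lemma":      [t.lemma_ for t in content],
        "fine_pos":   [t.tag_   for t in content],
        "coarse_pos": [t.pos_   for t in content],
    }

print(build_reconstruction_input("I wandered in the dark castle."))
```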
In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output.
Models ::: Neural Architecture
We implement our feature controlled language model using a neural encoder-decoder with attention BIBREF22, using 2-layer uni-directional gated recurrent units (GRUs) for the encoder and decoder BIBREF23.
The input to the encoder is a sequence of $M$ content words, along with their lemmas, and fine and coarse grained part-of-speech (POS) tags, i.e. $X_{.,j} = (x_{1,j},\ldots ,x_{M,j})$ for $j \in \mathcal {T} = \lbrace \textrm {word, lemma, fine-pos, coarse-pos}\rbrace $. We embed each token (and its lemma and POS) before concatenating and feeding them into the encoder GRU to obtain encoder hidden states, $ c_i = \operatorname{gru}(c_{i-1}, \left[E_j(X_{i,j}), \; j\in \mathcal {T} \right]; \omega _{enc}) $ for $i \in {1,\ldots ,M},$ where the initial state $c_0$, the encoder GRU parameters $\omega _{enc}$, and the embedding matrices $E_j$ are learned parameters.
The decoder sequentially generates the outputs, i.e. a sequence of $N$ tokens $y =(y_1,\ldots ,y_N)$, where all tokens $y_i$ are drawn from a finite output vocabulary $\mathcal {V}$. To generate each token we first embed the previously generated token $y_{i-1}$ and a vector of $K$ control features $z = ( z_1,\ldots , z_K)$ (using embedding matrices $E_{dec}$ and $E_{\textrm {ctrl-1}}, \ldots , E_{\textrm {ctrl-K}}$ respectively), before concatenating them into a vector $\rho _i,$ and feeding them into the decoder side GRU along with the previous decoder state $h_{i-1}$: $h_i = \operatorname{gru}(h_{i-1}, \rho _i; \omega _{dec})$, where $\omega _{dec}$ are the decoder side GRU parameters.
Using the decoder hidden state $h_i$ we then attend to the encoder context vectors $c_j$, computing attention scores $\alpha _{i,j}$, where
before passing $h_i$ and the attention weighted context $\bar{c}_i=\sum _{j=1}^M \alpha _{i,j} c_j$ into a single hidden-layer perceptron with softmax output to compute the next token prediction probability,
where $W,U,V$ and $u,v, \nu $ are parameter matrices and vectors respectively.
Crucially, the controls $z$ remain fixed for all decoder steps. Each $z_k$ represents the frequency of one of the low-level features described in sec:formalstyle. During training on the reconstruction task, we can observe the full output sequence $y,$ and so we can obtain counts for each control feature directly. Controls receive a different embedding depending on their frequency: counts of 0-20 each get a unique embedding, and counts greater than 20 are assigned to the same embedding. At test time, we set the values of the controls according to the procedure described in Section SECREF25.
We use embedding sizes of 128, 128, 64, and 32 for the token, lemma, fine, and coarse grained POS embedding matrices respectively. Output token embeddings $E_{dec}$ have size 512, and the control feature embeddings have size 50. We set 512 for all GRU and perceptron output sizes. We refer to this model as the StyleEQ model. See fig:model for a visual depiction of the model.
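A compressed PyTorch sketch of this architecture is shown below, following the reported embedding and hidden sizes. It is not the authors' code: the attention mechanism and output perceptron are collapsed into a single projection for brevity, and all class and argument names are assumptions.

```python
# Compressed sketch of the StyleEQ encoder-decoder (not the authors' code).
import torch
import torch.nn as nn

class StyleEQSketch(nn.Module):
    def __init__(self, vocab_sizes: dict, n_controls: int, out_vocab: int):
        super().__init__()
        # Input-side embeddings: word 128, lemma 128, fine POS 64, coarse POS 32.
        self.emb = nn.ModuleDict({
            "word":       nn.Embedding(vocab_sizes["word"], 128),
            "lemma":      nn.Embedding(vocab_sizes["lemma"], 128),
            "fine_pos":   nn.Embedding(vocab_sizes["fine_pos"], 64),
            "coarse_pos": nn.Embedding(vocab_sizes["coarse_pos"], 32),
        })
        self.encoder = nn.GRU(128 + 128 + 64 + 32, 512, num_layers=2, batch_first=True)
        # Decoder-side embeddings: output tokens 512, each control feature 50
        # (counts 0-20 get their own row, counts above 20 share the overflow row 21).
        self.out_emb = nn.Embedding(out_vocab, 512)
        self.ctrl_emb = nn.ModuleList([nn.Embedding(22, 50) for _ in range(n_controls)])
        self.decoder = nn.GRU(512 + 50 * n_controls, 512, num_layers=2, batch_first=True)
        self.proj = nn.Linear(512, out_vocab)  # stands in for attention + perceptron

    def encode(self, inputs: dict) -> torch.Tensor:
        feats = [self.emb[k](inputs[k]) for k in ("word", "lemma", "fine_pos", "coarse_pos")]
        context, _ = self.encoder(torch.cat(feats, dim=-1))
        return context  # (batch, M, 512)

    def decode_step(self, prev_token, control_counts, hidden=None):
        # The controls stay fixed for every decoder step.
        counts = [c.clamp(max=21) for c in control_counts]           # bucket counts > 20
        z = torch.cat([e(c) for e, c in zip(self.ctrl_emb, counts)], dim=-1)
        rho = torch.cat([self.out_emb(prev_token), z], dim=-1).unsqueeze(1)
        out, hidden = self.decoder(rho, hidden)
        return self.proj(out.squeeze(1)), hidden                     # (batch, |V|), state
```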
Models ::: Neural Architecture ::: Baseline Genre Model
We compare the above model to a similar model where, rather than explicitly representing the $K$ features as input, we have the $K$ features in the form of a genre embedding, i.e. we learn a genre-specific embedding for each of the gothic, sci-fi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850, which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model.
Models ::: Neural Architecture ::: Training
We train both models with minibatch stochastic gradient descent with a learning rate of 0.25, a weight decay penalty of 0.0001, and a batch size of 64. We also apply dropout with a drop rate of 0.25 to all embedding layers, the GRUs, and the perceptron hidden layer. We train for a maximum of 200 epochs, using validation set BLEU score BIBREF26 to select the final model iteration for evaluation.
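The optimizer setup implied by these hyperparameters might look like the following; plain torch.optim.SGD is an assumption for "minibatch stochastic gradient descent", and the model here is a placeholder for the full encoder-decoder.

```python
# Optimizer setup implied by the reported hyperparameters (SGD assumed).
import torch
import torch.nn as nn

model = nn.GRU(512, 512, num_layers=2)  # placeholder module
optimizer = torch.optim.SGD(model.parameters(), lr=0.25, weight_decay=0.0001)
dropout = nn.Dropout(p=0.25)            # applied to embeddings, GRUs, and the perceptron
```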
Models ::: Neural Architecture ::: Selecting Controls for Style Transfer
In the Baseline model, style transfer is straightforward: given an input sentence in one style, fix the encoder content features while selecting a different genre embedding. In contrast, the StyleEQ model requires selecting the counts for each control. Although there are a variety of ways to do this, we use a method that encourages a diversity of outputs.
In order to ensure the controls match the reference sentence in magnitude, we first find all sentences in the target style with the same number of words as the reference sentence. Then, we add the following constraints: the same number of proper nouns, the same number of nouns, the same number of verbs, and the same number of adjectives. We randomly sample $n$ of the remaining sentences, and for each of these `sibling' sentences, we compute the controls. For each of the new controls, we generate a sentence using the original input sentence content features. The generated sentences are then reranked using the length normalized log-likelihood under the model. We can then select the highest scoring sentence as our style-transferred output, or take the top-$k$ when we need a diverse set of outputs.
The reason for this process is that although there are group-level distinctive controls for each style, e.g. the high use of punctuation in philosophy books or of first person pronouns in gothic novels, at the sentence level it can understandably be quite varied. This method matches sentences between styles, capturing the natural distribution of the corpora.
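A sketch of this sibling-matching and reranking procedure is given below. Each corpus entry is assumed to be a pre-processed record with word/POS counts and extracted controls, and the model log-likelihoods for the generated candidates are assumed to be computed elsewhere.

```python
# Sketch of control selection for style transfer (data layout is assumed).
import random

def sample_sibling_controls(reference, target_corpus, n=16):
    siblings = [s for s in target_corpus
                if s["n_words"] == reference["n_words"]
                and s["n_propn"] == reference["n_propn"]
                and s["n_noun"]  == reference["n_noun"]
                and s["n_verb"]  == reference["n_verb"]
                and s["n_adj"]   == reference["n_adj"]]
    siblings = random.sample(siblings, min(n, len(siblings)))
    return [s["controls"] for s in siblings]

def rerank(candidates, log_likelihoods, lengths, k=1):
    """Rank generated candidates by length-normalized log-likelihood."""
    scored = sorted(zip(candidates, log_likelihoods, lengths),
                    key=lambda item: item[1] / item[2], reverse=True)
    return [c for c, _, _ in scored[:k]]
```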
Automatic Evaluations ::: BLEU Scores & Perplexity
In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output.
Automatic Evaluations ::: Feature Control
Designing controllable language models is often difficult because of the various dependencies between tokens; changing one control value may affect other aspects of the surface realization. For example, increasing the number of conjunctions may affect how the generator places prepositions to compensate for structural changes in the sentence. Since our features are deterministically recoverable, we can perturb an individual control value and check to see that the desired change was realized in the output. Moreover, we can check the amount of change in the other non-perturbed features to measure the independence of the controls.
We sample 50 sentences from each genre from the test set. For each sample, we create a perturbed control setting for each control by adding $\delta $ to the original control value. This is done for $\delta \in \lbrace -3, -2, -1, 0, 1, 2, 3\rbrace $, skipping any settings where the new control value would be negative.
table:autoeval:ctrl shows the results of this experiment. The Exact column displays the percentage of generated texts that realize the exact number of control features specified by the perturbed control. High percentages in the Exact column indicate greater one-to-one correspondence between the control and surface realization. For example, if the input was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, an output of “Dracula, Frankenstein and the mummy,” would count towards the Exact category, while “Dracula, Frankenstein, the mummy,” would not.
The Direction column specifies the percentage of cases where the generated text produces a changed number of the control features that, while not exactly matching the specified value of the perturbed control, does change from the original in the correct direction. For example, if the input again was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\delta =-1$, both outputs of “Dracula, Frankenstein and the mummy,” and “Dracula, Frankenstein, the mummy,” would count towards Direction. High percentages in Direction mean that we could roughly ensure desired surface realizations by modifying the control by a larger $\delta $.
Finally, the Atomic column specifies the percentage of cases where the generated text with the perturbed control only realizes changes to that specific control, while other features remain constant. For example, if the input was “Dracula and Frankenstein in the castle,” and we set the conjunction feature to $\delta =-1$, an output of “Dracula near Frankenstein in the castle,” would not count as Atomic because, while the number of conjunctions did decrease by one, the number of simple prepositions changed. An output of “Dracula, Frankenstein in the castle,” would count as Atomic. High percentages in the Atomic column indicate the feature is only loosely coupled to the other features and can be changed without modifying other aspects of the sentence.
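Because every control is deterministically recoverable from a generated sentence, the three checks can be computed directly from control counts, as in the sketch below; the function and argument names are assumptions.

```python
# Sketch of the Exact / Direction / Atomic checks for one perturbed control.

def perturbation_checks(original, target, realized, perturbed_key):
    """original/target/realized are dicts of control counts for one sentence."""
    exact = realized[perturbed_key] == target[perturbed_key]
    wanted_delta = target[perturbed_key] - original[perturbed_key]
    real_delta = realized[perturbed_key] - original[perturbed_key]
    # Direction: the realized count moved the same way as the requested change.
    direction = wanted_delta == 0 or wanted_delta * real_delta > 0
    # Atomic: every non-perturbed control stayed at its original value.
    atomic = all(realized[k] == original[k]
                 for k in original if k != perturbed_key)
    return {"Exact": exact, "Direction": direction, "Atomic": atomic}
```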
Controls such as conjunction, determiner, and punctuation are highly controllable, with Exact rates above 80%. While not all controls reach such high Exact rates, all controls except the constituency parse features have high Direction rates, many in the 90s. These results indicate our model successfully controls these features. The fact that the Atomic rates are relatively low is to be expected, as controls are highly coupled – e.g. to increase 1stPer, it is likely another pronoun control will have to decrease.
Automatic Evaluations ::: Automatic Classification
For each model we look at the classifier prediction accuracy of reconstructed and transferred sentences. In particular we use the Ablated NVA classifier, as this is the most content-blind one.
We produce 16 outputs from both the Baseline and StyleEQ models. For the Baseline, we use a beam search of size 16. For the StyleEQ model, we use the method described in Section SECREF25 to select 16 `sibling' sentences in the target style, and generate a transferred sentence for each. We look at three different methods for selection: all, which uses all output sentences; top, which selects the top ranked sentence based on the score from the model; and oracle, which selects the sentence with the highest classifier likelihood for the intended style.
The reason for the third method, which indeed acts as an oracle, is that using the score from the model didn't always surface a transferred sentence that best reflected the desired style. Partially this was because the model score was mostly a function of how well a transferred sentence reflected the distribution of the training data. But additionally, some control settings are more indicative of a target style than others. The use of the classifier allows us to identify the control setting most suitable for a target style that is roughly compatible with the number of content words.
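The three selection strategies could be implemented roughly as follows. The classifier is assumed to be the Ablated NVA fastText model trained with "__label__"-prefixed genre labels, and k=3 matches the three genres; the function names are illustrative.

```python
# Sketch of the all / top / oracle selection strategies over candidate outputs.

def select_outputs(candidates, model_scores, classifier, target_label, method="top"):
    if method == "all":
        return candidates
    if method == "top":
        best = max(zip(candidates, model_scores), key=lambda pair: pair[1])
        return [best[0]]
    # oracle: the candidate the classifier is most confident is in the target style
    def target_prob(sentence):
        labels, probs = classifier.predict(sentence, k=3)
        return dict(zip(labels, probs)).get(target_label, 0.0)
    return [max(candidates, key=target_prob)]
```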
In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs.
However, the oracle introduces a huge jump in accuracy for the StyleEQ model, especially compared to the Baseline, partially because the diversity of outputs from StyleEQ is much higher; often the Baseline model produces no diversity – the 16 output sentences may be nearly identical, save a single word or two. It's important to note that neither model uses the classifier in any way except to select the sentence from 16 candidate outputs.
What this implies is that lurking within the StyleEQ model outputs are great sentences, even if they are hard to find. In many cases, the StyleEQ model has a classification accuracy above the base rate from the test data, which is 75% (see table:classifiers).
Human Evaluation
table:cherrypicking shows example outputs for the StyleEQ and Baseline models. Through inspection we see that the StyleEQ model successfully changes syntactic constructions in stylistically distinctive ways, such as increasing syntactic complexity when transferring to philosophy, or changing relevant pronouns when transferring to sci-fi. In contrast, the Baseline model doesn't create outputs that move far from the reference sentence, making only minor modifications such as changing the type of a single pronoun.
To determine how readers would classify our transferred sentences, we recruited three English Literature PhD candidates, all of whom had passed qualifying exams that included determining both genre and era of various literary texts.
Human Evaluation ::: Fluency Evaluation
To evaluate the fluency of our outputs, we had the annotators score reference sentences, reconstructed sentences, and transferred sentences on a 0-5 scale, where 0 was incoherent and 5 was a well-written human sentence.
table:fluency shows the average fluency of various conditions from all three annotators. Both models have fluency scores around 3. Upon inspection of the outputs, it is clear that many have fluency errors, resulting in ungrammatical sentences.
Notably the Baseline often has slightly higher fluency scores than the StyleEQ model. This is likely because the Baseline model is far less constrained in how to construct the output sentence, and upon inspection it often reconstructs the reference sentence even when performing style transfer. In contrast, the StyleEQ model is encouraged to follow the controls, but can struggle to incorporate these controls into a fluent sentence.
The fluency of all outputs is lower than desired. We expect that incorporating pre-trained language models would increase the fluency of all outputs without requiring larger datasets.
Human Evaluation ::: Human Classification
Each annotator annotated 90 reference sentences (i.e. from the training corpus) with the style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an expected upper bound for the human evaluation.
In discussing this task with the annotators, they noted that content is a heavy predictor of genre, and that would certainly confound their annotations. To attempt to mitigate this, we gave them two annotation tasks: which-of-3 where they simply marked which style they thought a sentence was from, and which-of-2 where they were given the original style and marked which style they thought the sentence was transferred into.
For each task, each annotator marked 180 sentences: 90 from each model, with an even split across the three genres. Annotators were presented the sentences in a random order, without information about the models. In total, each marked 270 sentences. (Note there were no reconstructions in this annotation task.)
table:humanclassifiers shows the results. In both tasks, accuracy of annotators classifying the sentence as its intended style was low. In which-of-3, scores were around 20%, below the chance rate of 33%. In which-of-2, scores were in the 50s, slightly above the chance rate of 50%. This was the case for both models. There was a slight increase in accuracy for the StyleEQ model over the Baseline for which-of-3, but the opposite trend for which-of-2, suggesting these differences are not significant.
It's clear that it's hard to fool the annotators. Introspecting on their approach, the annotators reported having immediate responses based on key words – for instance, any reference to `space' implied `sci-fi'. We call this the `vampires in space' problem, because no matter how well a gothic sentence is rewritten as a sci-fi one, it's impossible to ignore the fact that there is a vampire in space. The transferred sentences, in the eyes of the Ablated NVA classifier (with no access to content words), did quite well transferring into their intended style. But people are not blind to content.
Human Evaluation ::: The `Vampires in Space' Problem
Working with the annotators, we regularly came up against the `vampires in space' problem: while syntactic constructions account for much of the distinction of literary styles, these constructions often co-occur with distinctive content.
Stylometrics finds syntactic constructions are great at fingerprinting, but suggests that these constructions are surface realizations of higher-level stylistic decisions. The number and type of personal pronouns is a reflection of how characters feature in a text. A large number of positional prepositions may be the result of a writer focusing on physical descriptions of scenes. In our attempt to decouple these, we create Frankenstein sentences, which piece together features of different styles – we are putting vampires in space.
Another way to validate our approach would be to select data that is stylistically distinctive but with similar content: perhaps genres in which content is static but language use changes over time, stylistically distinct authors within a single genre, or parodies of a distinctive genre.
Conclusion and Future Work
We present a formal, extendable model of style that can add control to any neural text generation system. We model style as a suite of low-level linguistic controls, and train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. In automatic evaluations, we show that our model can fool a style classifier 84% of the time and outperforms a baseline genre-embedding model. In human evaluations, we encounter the `vampires in space' problem in which content and style are equally discriminative but people focus more on the content.
In future work we would like to model higher-level syntactic controls. BIBREF20 show that differences in clausal constructions, for instance having a dependent clause before an independent clause or vice versa, are a marker of style appreciated by the reader. Such features would likely interact with our lower-level controls in an interesting way, and provide further insight into style transfer in text.
Acknowledgements
Katy Gero is supported by an NSF GRF (DGE - 1644869). We would also like to thank Elsbeth Turcan for her helpful comments. | words found in the control word lists are then removed; the remaining words, which represent the content
83251fd4a641cea8b180b49027e74920bca2699a | 83251fd4a641cea8b180b49027e74920bca2699a_0 | Q: How do they model style as a suite of low-level linguistic controls, such as the frequency of pronouns, prepositions, and subordinate clause constructions?
style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style
5d70c32137e82943526911ebdf78694899b3c28a | 5d70c32137e82943526911ebdf78694899b3c28a_0 | Q: Do they report results only on English data?
Text:
Amir Hossein Yazdavar [1], Mohammad Saeid Mahdavinejad [1], Goonmeet Bajaj [2], William Romine [3], Amirhassan Monadjemi [1], Krishnaprasad Thirunarayan [1], Amit Sheth [1], Jyotishman Pathak [4]
[1] Department of Computer Science & Engineering, Wright State University, OH, USA
[2] Ohio State University, Columbus, OH, USA
[3] Department of Biological Science, Wright State University, OH, USA
[4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
Contact: yazdavar.2@wright.edu
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% of Americans (i.e., about 16 million) each year. According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with a very small group of respondents). In contrast, the widespread adoption of social media, where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems, has not been adequately tapped for studying mental illnesses such as depression. The visual and textual content shared on different social media platforms like Twitter offers new opportunities for a deeper understanding of self-expressed depression at both an individual and a community level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from the visual content of posted/profile images. Visual content can express users' emotions more vividly, and psychologists have noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, tweets with image links get twice as much attention as those without, and video-linked tweets drive up engagement. The ease and naturalness of expression through visual imagery can serve to glean depression indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self. In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing an image of emaciated legs covered with several cuts as a profile image portrays a negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 , and the national psychiatric morbidity survey in Britain has shown a higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher than those for women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of a user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of mental health conditions from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. The recent emergence of photo-sharing platforms such as Instagram has attracted researchers' attention to studying people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trends BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure, by relating visual attributes to mental health disclosures on Instagram, was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understanding user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning approach was proposed to detect depressed users on Twitter BIBREF33 . They provide sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. In particular, our choice of a multimodal prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online users' demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model was developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slang, punctuation, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting a user's age group BIBREF39 . Utilizing users' life stage information such as secondary school student, college student, and employee, BIBREF40 builds an age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model was introduced for extracting the age of Twitter users BIBREF41 . They also parse descriptions for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed relying on homophily interaction information and content for predicting the age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender were highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (the crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training convolutional neural networks (CNNs) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies, e.g., for predicting demographics BIBREF36 , BIBREF41 , and users' depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl" (see Figure FIGREF15 ). We employ a large dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a psychologist clinician and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
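The rule-based extraction above can be sketched as follows; the exact prefix/suffix lists and regular expressions are assumptions for illustration, not the authors' released patterns.

```python
import re
from datetime import datetime
from typing import Optional

# Hypothetical patterns for the three rules described above:
# 1. "I am X years old"   2. "Born in X"   3. "X years old"
AGE_PATTERNS = [
    re.compile(r"\bi\s+am\s+(\d{1,2})\s+(?:years?|yrs?)\s+old\b", re.IGNORECASE),
    re.compile(r"\bborn\s+in\s+(19[5-9]\d|20[01]\d)\b", re.IGNORECASE),
    re.compile(r"\b(\d{1,2})\s+(?:years?|yrs?)\s+old\b", re.IGNORECASE),
]

def extract_age(description: str, ref_year: Optional[int] = None) -> Optional[int]:
    """Return an age inferred from a profile description, or None if no rule fires."""
    ref_year = ref_year or datetime.utcnow().year
    for pattern in AGE_PATTERNS:
        match = pattern.search(description)
        if not match:
            continue
        value = int(match.group(1))
        age = ref_year - value if value > 1900 else value  # rule 2 captures a birth year
        if 10 <= age <= 90:  # discard implausible matches
            return age
    return None

print(extract_age("17 years old, self-harm, anxiety, depression"))  # -> 17
```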
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
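The chi-square independence test between gender and class can be reproduced along these lines; the 2x2 contingency counts below are illustrative placeholders roughly consistent with the reported proportions, not the actual table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder 2x2 table (rows: female, male; columns: depressed, control).
# The real counts come from the 1,464 gender-disclosing users; these are illustrative only.
table = np.array([[560, 280],
                  [380, 244]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.2e}, dof={dof}")

# Pearson residuals, as visualised by the blue/red circles in the association plot.
residuals = (table - expected) / np.sqrt(expected)
print(residuals)
```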
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach that uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images. Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 , with the control class showing significantly higher facial presence in both profile and media images (8% and 9% higher, respectively) than the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
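As a minimal stand-in for the facial-presence feature, a bundled OpenCV Haar cascade can be used; the paper relies on the multilevel CNN cascade of BIBREF56 , so this is only an illustrative substitute.

```python
import cv2

# OpenCV ships this cascade file with the opencv-python package.
_FACE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def has_face(path: str, min_size=(40, 40)) -> bool:
    """Return True if at least one face is detected in the image at `path`."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    faces = _FACE.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5, minSize=min_size)
    return len(faces) > 0
```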
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.
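A sketch of the per-user aggregation of the six Ekman scores, assuming each image has already been scored by a face-analysis service; the per-face dictionaries below are hypothetical API output, not actual Face++ responses.

```python
from collections import defaultdict

EKMAN = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def image_emotion(faces):
    """Average the six Ekman scores over all faces detected in one image."""
    if not faces:
        return None
    return {e: sum(f[e] for f in faces) / len(faces) for e in EKMAN}

def user_emotion(images):
    """Average image-level emotion over all of a user's images that contain a face."""
    totals, n = defaultdict(float), 0
    for faces in images:
        scores = image_emotion(faces)
        if scores is None:
            continue
        n += 1
        for e, v in scores.items():
            totals[e] += v
    return {e: totals[e] / n for e in EKMAN} if n else None

# Hypothetical per-face scores for two photos of one user (the second has no face).
photos = [
    [{"anger": 0.1, "disgust": 0.0, "fear": 0.1, "joy": 0.2, "sadness": 0.5, "surprise": 0.1}],
    [],
]
print(user_emotion(photos))
```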
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
** alpha= 0.05, *** alpha = 0.05/223
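A few of the interpretable aesthetic features can be computed as sketched below with OpenCV and python-tesseract; the naturalness measure is omitted, VADER is used as a stand-in sentiment scorer, and none of this is claimed to be the authors' exact implementation.

```python
import cv2
import numpy as np

def aesthetic_features(path: str) -> dict:
    """Brightness, contrast, saturation/hue statistics, and Hasler-Suesstrunk colorfulness."""
    bgr = cv2.imread(path)
    if bgr is None:
        raise ValueError(f"could not read {path}")
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # Colorfulness: opponent-color statistics measured against gray.
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                    + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

    return {
        "brightness_mean": float(val.mean()),
        "contrast": float(val.std()),
        "saturation_mean": float(sat.mean()),
        "saturation_var": float(sat.var()),
        "hue_mean": float(hue.mean()),
        "hue_var": float(hue.var()),
        "colorfulness": float(colorfulness),
    }

def quote_sentiment(path: str) -> float:
    """OCR any embedded quote (python-tesseract) and score it; VADER is a stand-in scorer."""
    import pytesseract
    from PIL import Image
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
    text = pytesseract.image_to_string(Image.open(path))
    return SentimentIntensityAnalyzer().polarity_scores(text)["compound"] if text.strip() else 0.0
```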
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .
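A sketch of the Levinson-style binning used before applying LIWC; pandas.cut with right-closed intervals mirrors the (14,19], (19,23], ... notation above.

```python
import pandas as pd

bins = [14, 19, 23, 34, 46, 60]
labels = ["(14,19]", "(19,23]", "(23,34]", "(34,46]", "(46,60]"]

ages = pd.Series([17, 19, 25, 40, 58, 33])               # illustrative user ages
age_group = pd.cut(ages, bins=bins, labels=labels, right=True)
print(age_group.value_counts().sort_index())
```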
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
Thinking Style:
Measuring people's natural ways of trying to analyze and organize complex events has a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 .) A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users in different age groups (see Figure FIGREF39 - A, F and Table TABREF40 ). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.
Authenticity:
Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs and 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. There is a decreasing trend of Authenticity with aging (see Figure FIGREF39 -B.) Authenticity for depressed youngsters is strikingly higher than for their control peers, and it decreases with age (Figure FIGREF39 -B.)
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be the best for self-esteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishing characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).
Informal Language Markers; Swear, Netspeak:
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). Also, the Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and body terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 .)
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means for each age group are similar for each of the LIWC features.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
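The per-feature one-way ANOVA across the five age groups can be sketched as follows; the data frame layout (one row per user, an age_group column, and one column per LIWC feature) is an assumption.

```python
from scipy.stats import f_oneway

def anova_by_age(df, feature: str, group_col: str = "age_group"):
    """One-way ANOVA of a single LIWC feature across the five age groups."""
    samples = [g[feature].dropna().values
               for _, g in df.groupby(group_col) if len(g) > 1]
    f_stat, p_value = f_oneway(*samples)
    return f_stat, p_value
```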
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blog, and Facebook data, showing promising results BIBREF73 . Utilizing these two weighted lexica, we predict the demographic information (age or gender) of a user $u$, denoted $\hat{d}_u$, as
$\hat{d}_u = \sum_{t \in D_u} w_t \cdot \frac{\mathrm{freq}(t, D_u)}{|D_u|}$
where $w_t$ is the lexicon weight of term $t$, $\mathrm{freq}(t, D_u)$ represents the frequency of the term in the user-generated content $D_u$, and $|D_u|$ measures the total word count in $D_u$. As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 – 416 users), and then randomly sample the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
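A sketch of the lexicon-weighted scoring in the equation above; the tokenizer and the toy weights are placeholders, with the real weights coming from the published Facebook lexica.

```python
import re
from collections import Counter

def lexicon_score(text: str, weights: dict) -> float:
    """sum_t w_t * freq(t, text) / total_words, following the equation above."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return sum(weights[t] * c / total for t, c in counts.items() if t in weights)

# Hypothetical weights; the real values come from the 75,394-user Facebook lexica.
age_weights = {"homework": -2.1, "apartment": 1.4, "kids": 3.2}
print(lexicon_score("my kids love the new apartment", age_weights))
```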
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences in language cues between age groups above 35 tend to become smaller (see Figure FIGREF39 -A,B,C), making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) compared to the content-based predictor (0.82). For age prediction (see Table TABREF42 ), the textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average profile: 0.51, media: 0.53).
However, not every user provides facial identity on their account (see Table TABREF21 ). We studied facial presentation for each age group to examine any association between age group, facial presentation and depressive behavior (see Table TABREF43 ). We can see that youngsters in both the depressed and control classes are not likely to present their face in the profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits an early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of individual modality features. As opposed to a computationally expensive late fusion scheme where each modality requires separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant, ensemble-learning-based method. It adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether an actual feature has a higher Z-score than its shadow feature (see Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Algorithm (Ensemble Feature Selection). For each feature in INLINEFORM0 : create a shuffled shadow copy and append it to the data; run a Random Forest (RndForrest) on the extended data and calculate the importance of every real and shadow feature; record a hit whenever a real feature's importance (Z-score) exceeds that of the best shadow feature; reshuffle the shadows to generate the next hypothesis. Once all hypotheses are generated, perform a statistical test on the hit counts (binomial distribution) and mark each feature as important or unimportant accordingly.
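A condensed sketch of this shadow-feature ("all-relevant") selection procedure, in the spirit of Boruta; the iteration count, forest size, and the hit-count-plus-binomial-test decision rule are simplifications for illustration.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_selection(X, y, n_iter=50, alpha=0.05, random_state=0):
    """Keep features whose importance beats the best shadow feature significantly often.

    X is a 2-D numpy array of fused features, y the depressed/control labels.
    """
    rng = np.random.default_rng(random_state)
    hits = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_iter):
        shadows = rng.permuted(X, axis=0)          # each column shuffled independently
        extended = np.hstack([X, shadows])
        rf = RandomForestClassifier(n_estimators=200, random_state=random_state)
        rf.fit(extended, y)
        imp = rf.feature_importances_
        real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
        hits += (real > shadow.max()).astype(int)  # a 'hit' if a real feature beats every shadow
    # Binomial test: is the hit rate higher than the 0.5 expected by chance?
    keep = [j for j in range(X.shape[1])
            if binomtest(int(hits[j]), n_iter, 0.5, alternative="greater").pvalue < alpha]
    return keep
```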
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: its interpretability with respect to the contributions of each feature and its high predictive power. For prediction we have $\hat{y}_u = \sum_{m} f_m(x_u)$, where each $f_m$ is a weak learner (a regression tree) and $\hat{y}_u$ denotes the final prediction for user $u$.
In particular, we optimize the loss function $\mathcal{L} = \sum_{u} l(y_u, \hat{y}_u) + \sum_{m} \Omega(f_m)$, where $\Omega$ incorporates $L_1$ and $L_2$ regularization. In each iteration, the new weak learner $f_t$ is obtained by fitting it to the negative gradient of the loss function. In particular, approximating the loss with a second-order Taylor expansion, $\mathcal{L}^{(t)} \approx \sum_{u} \big[ l(y_u, \hat{y}_u^{(t-1)}) + g_u f_t(x_u) + \tfrac{1}{2} h_u f_t^2(x_u) \big] + \Omega(f_t)$, where the first expression is constant and the second and third expressions involve the first-order ($g_u$) and second-order ($h_u$) derivatives of the loss.
For exploring the weak learners, assume $f_t$ has $k$ leaf nodes, let $I_j$ be the subset of users assigned to leaf node $j$, and let $w_j$ denote the prediction for node $j$. Then, for each user $u$ belonging to $I_j$, $f_t(x_u) = w_j$, and (writing only the per-leaf penalty $\gamma$ and the $L_2$ coefficient $\lambda$ of $\Omega$ explicitly) $\mathcal{L}^{(t)} = \sum_{j=1}^{k} \big[ (\sum_{u \in I_j} g_u) w_j + \tfrac{1}{2} (\sum_{u \in I_j} h_u + \lambda) w_j^2 \big] + \gamma k$.
Next, for each leaf node $j$, taking the derivative w.r.t. $w_j$ and setting it to zero gives the optimal weight $w_j^{*} = - \frac{\sum_{u \in I_j} g_u}{\sum_{u \in I_j} h_u + \lambda}$,
and by substituting the weights back, $\mathcal{L}^{(t)} = -\tfrac{1}{2} \sum_{j=1}^{k} \frac{(\sum_{u \in I_j} g_u)^2}{\sum_{u \in I_j} h_u + \lambda} + \gamma k$,
which represents the loss for a fixed tree structure with $k$ leaf nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.31))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" of this user is considered high at 48.43 (median: 36.95, mean: 40.18), and this decreases the chance of this person being classified into the depressed group by log-odds of -1.41. Depressed users have significantly lower "Analytic thinking" scores compared to the control class. Moreover, the 40.46 "Clout" score is a low value (median: 62.22, mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' are 112.03 and 113, respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2 . For network features, for instance, 'two_hop_neighborhood' for depressed users (mean: 84) is smaller than that of control users (mean: 154), which is reflected in pulling the log-odds down by -0.27.
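A sketch of how the early-fused feature matrix can be fed to a regularized gradient-boosted tree ensemble of this kind and evaluated with 10-fold cross-validation; xgboost is one concrete implementation choice, and the hyper-parameters and scorer names are placeholders rather than the configuration behind Table TABREF48 .

```python
import numpy as np
from sklearn.model_selection import cross_validate
from xgboost import XGBClassifier

def early_fusion(*feature_blocks):
    """Concatenate per-user feature vectors from each modality (visual, textual, network, ...)."""
    return np.hstack(feature_blocks)

def evaluate(X, y):
    """10-fold cross-validation of a gradient-boosted tree classifier on the fused features."""
    model = XGBClassifier(
        n_estimators=300, max_depth=4, learning_rate=0.05,
        reg_alpha=0.1, reg_lambda=1.0,   # L1 / L2 regularization of the leaf weights
        eval_metric="logloss",
    )
    scores = cross_validate(model, X, y, cv=10,
                            scoring=["accuracy", "f1", "recall", "precision"])
    return {k: v.mean() for k, v in scores.items() if k.startswith("test_")}
```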
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image features, facial presence, and facial expressions). | Unanswerable |
97dac7092cf8082a6238aaa35f4b185343b914af | 97dac7092cf8082a6238aaa35f4b185343b914af_0 | Q: What insights into the relationship between demographics and mental health are provided?
Text: 0pt*0*0
0pt*0*0
0pt*0*0 0.95
1]Amir Hossein Yazdavar 1]Mohammad Saeid Mahdavinejad 2]Goonmeet Bajaj
3]William Romine 1]Amirhassan Monadjemi 1]Krishnaprasad Thirunarayan
1]Amit Sheth 4]Jyotishman Pathak [1]Department of Computer Science & Engineering, Wright State University, OH, USA [2]Ohio State University, Columbus, OH, USA [3]Department of Biological Science, Wright State University, OH, USA [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
[1] yazdavar.2@wright.edu
With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups . Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life loses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data, can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
** alpha= 0.05, *** alpha = 0.05/223
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
Thinking Style:
People's natural ways of analyzing and organizing complex events have a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning, whereas a lower value indicates a focus on narratives. Cognitive processing, in turn, measures problem solving; words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 ). A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction, including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users in different age groups (see Figure FIGREF39 -A, F and Table TABREF40 ). Overall, vulnerable younger users show lower analytical and cognitive processing scores, indicating less formal, logical reasoning.
Authenticity:
Authenticity measures the degree of honesty. It is often assessed by measuring present tense verbs and 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. Authenticity shows a decreasing trend with aging, and for depressed youngsters it is strikingly higher than for their control peers (Figure FIGREF39 -B).
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be the best age for self-esteem BIBREF71 , as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishing characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).
Informal Language Markers (Swear, Netspeak):
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). The Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon contains terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as people grow older (Figure FIGREF39 -H,I and Table TABREF40 ).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with 1055 degrees of freedom (df). The null hypothesis is that the sample means for the age groups are equal for each of the LIWC features.
Significance levels: *** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05.
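A minimal sketch of the one-way ANOVA step, assuming per-user scores for one LIWC feature have been grouped by the five age groups; the group sizes and score distributions below are synthetic and only illustrate the mechanics of the test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-user "Analytic thinking" scores for the five age groups.
rng = np.random.default_rng(0)
groups = [rng.normal(loc=mu, scale=10.0, size=n)
          for mu, n in [(35, 120), (40, 180), (45, 300), (50, 250), (55, 211)]]

# One-way ANOVA: the null hypothesis is that the group means are equal.
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```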
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blogs, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexica of terms, we predict the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using the following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, INLINEFORM1 represents the frequency of the term in the user-generated content INLINEFORM2 , and INLINEFORM3 measures the total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all 416 users above age 23) and then randomly sample the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
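The sketch below illustrates the weighted-lexicon scoring described above (lexicon weight times term frequency, normalized by the user's total word count). The lexicon entries are invented for illustration; the real age and gender lexica come from BIBREF73.

```python
from collections import Counter

def lexicon_score(tokens, lexicon):
    """Weighted-lexicon estimate as described above: sum of term weight times
    term frequency, normalized by the user's total word count."""
    counts = Counter(tokens)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(lexicon.get(term, 0.0) * freq for term, freq in counts.items()) / total

# Toy age lexicon (illustrative weights only; the real lexica are from BIBREF73).
age_lexicon = {"homework": -2.1, "prom": -3.4, "mortgage": 4.8, "retirement": 5.6, "lol": -1.2}

tweets = "lol so much homework before prom lol".split()
print(lexicon_score(tweets, age_lexicon))  # negative score skews toward a younger age estimate
```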
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .
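As a generic illustration of image-based demographic prediction (not the cascade CNN of BIBREF56 used in the paper), the sketch below attaches separate gender and age-group heads to a standard torchvision backbone; the architecture and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

class DemographicsNet(nn.Module):
    """Generic CNN backbone with two heads: binary gender and 5 age groups.
    An illustrative stand-in, not the cascade CNN used in the paper."""
    def __init__(self, n_age_groups: int = 5):
        super().__init__()
        backbone = models.resnet18(weights=None)   # ImageNet weights could be loaded instead
        feat_dim = backbone.fc.in_features         # 512-d feature vector
        backbone.fc = nn.Identity()                # expose the features directly
        self.backbone = backbone
        self.gender_head = nn.Linear(feat_dim, 2)
        self.age_head = nn.Linear(feat_dim, n_age_groups)

    def forward(self, x):
        feats = self.backbone(x)
        return self.gender_head(feats), self.age_head(feats)

model = DemographicsNet()
dummy_faces = torch.randn(4, 3, 224, 224)            # batch of cropped face images
gender_logits, age_logits = model(dummy_faces)
print(gender_logits.shape, age_logits.shape)          # torch.Size([4, 2]) torch.Size([4, 5])
```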
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial, as the differences in language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C), making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) compared to the content-based predictor (0.82). For age prediction (see Table TABREF42 ), the textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average, profile: 0.51, media: 0.53).
However, not every user reveals their facial identity on their account (see Table TABREF21 ). We studied facial presentation for each age group to examine any association between age group, facial presentation and depressive behavior (see Table TABREF43 ). We can see that youngsters in both the depressed and control classes are not likely to present their face in the profile image; less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits the early fusion BIBREF32 technique in feature space and models each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of the individual modality features. As opposed to the computationally expensive late fusion scheme, where each modality requires separate supervised modeling, this approach reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform all-relevant feature selection using statistical tests and ensemble learning. The procedure adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether an actual feature has a higher Z-score than its shadow features (see Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Algorithm (Ensemble Feature Selection):
  for each feature INLINEFORM0 : create a shuffled shadow copy INLINEFORM1
  train RndForrest( INLINEFORM0 ) on the extended data and calculate the importance Imp INLINEFORM1 of every real and shadow feature INLINEFORM2
  generate the next hypothesis INLINEFORM3 (does the real feature outperform its shadow?)
  once all hypotheses are generated, perform a statistical test INLINEFORM4 // binomial distribution
  if the test is significant INLINEFORM5 : the feature is important; otherwise it is not important
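A minimal single-pass sketch of the shadow-feature idea in the algorithm above, applied to an early-fused (concatenated) feature matrix; the full procedure iterates and applies a binomial test across runs, which is omitted here, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_screen(X, y, n_trees=500, random_state=0):
    """Single pass of the shadow-feature idea: shuffle each column to build
    'shadow' features, fit a random forest on the extended matrix, and keep
    real features whose importance beats the best shadow importance."""
    rng = np.random.default_rng(random_state)
    shadows = np.apply_along_axis(rng.permutation, 0, X)      # column-wise shuffles
    extended = np.hstack([X, shadows])

    forest = RandomForestClassifier(n_estimators=n_trees, random_state=random_state)
    forest.fit(extended, y)

    importances = forest.feature_importances_
    real, shadow = importances[: X.shape[1]], importances[X.shape[1]:]
    return real > shadow.max()                                 # boolean mask of kept features

# Early fusion: concatenate visual, textual, and network feature blocks per user.
visual, textual, network = (np.random.rand(200, 10) for _ in range(3))
X = np.hstack([visual, textual, network])
y = np.random.randint(0, 2, size=200)
print(shadow_feature_screen(X, y).sum(), "features kept")
```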
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: its interpretability with respect to the contribution of each feature, and its high predictive power. For prediction we have INLINEFORM0 , where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.
In particular, we optimize the loss function: INLINEFORM0 where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting weak learner to the negative gradient of loss function. Particularly, by estimating the loss function with Taylor expansion : INLINEFORM5 where its first expression is constant, the second and the third expressions are first ( INLINEFORM6 ) and second order derivatives ( INLINEFORM7 ) of the loss. INLINEFORM8
For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, INLINEFORM1 be subset of users from INLINEFORM2 belongs to the node INLINEFORM3 , and INLINEFORM4 denotes the prediction for node INLINEFORM5 . Then, for each user INLINEFORM6 belonging to INLINEFORM7 , INLINEFORM8 and INLINEFORM9 INLINEFORM10
Next, for each leaf node INLINEFORM0 , deriving w.r.t INLINEFORM1 : INLINEFORM2
and by substituting weights: INLINEFORM0
which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates them by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature added to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is about 57% (1 / (1 + exp(-0.31))). The figure also sheds light on the impact of each contributing feature. The waterfall chart represents how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user, 48.43, is considered high (Median: 36.95, Mean: 40.18), and this decreases the chance of this person being classified into the depressed group by a log-odds of -1.41; depressed users have a significantly lower "Analytic thinking" score compared to the control class. Moreover, the "Clout" score of 40.46 is a low value (Median: 62.22, Mean: 57.17), and it decreases the chance of being classified as depressed. With respect to the visual features, the mean and median of 'shared_colorfulness' are 112.03 and 113, respectively; the value of 136.71 is high and thus decreases the chance of being depressed for this specific user by a log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36, the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2 . For the network features, 'two_hop_neighborhood' for depressed users (Mean: 84) is smaller than that of control users (Mean: 154), which is reflected in pulling down the log-odds by -0.27.
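A hedged sketch of the boosted-tree classifier and the log-odds-to-probability conversion discussed above, using scikit-learn's gradient boosting as a stand-in (it lacks the explicit L1/L2 regularization mentioned earlier); the fused features and labels are synthetic, and 'recall' is used as a proxy for sensitivity.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

# Hypothetical fused feature matrix (visual + textual + network) and labels.
X = np.random.rand(500, 40)
y = np.random.randint(0, 2, size=500)

# Gradient-boosted trees: each tree is fit to the negative gradient of the loss,
# sequentially reducing the errors of its predecessors.
clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
scores = cross_validate(clf, X, y, cv=10, scoring=["accuracy", "f1", "recall"])
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})

# Converting a log-odds score into a probability, as in the waterfall example:
log_odds = 0.31
prob = 1.0 / (1.0 + np.exp(-log_odds))   # roughly 0.58, i.e. about a 57-58% chance
print(round(prob, 2))
```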
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image feature, facial presence, and facial expressions.) | either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age, more women than men were given a diagnosis of depression |
195611926760d1ceec00bd043dfdc8eba2df5ad1 | 195611926760d1ceec00bd043dfdc8eba2df5ad1_0 | Q: What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter?
Text:
Amir Hossein Yazdavar [1], Mohammad Saeid Mahdavinejad [1], Goonmeet Bajaj [2], William Romine [3], Amirhassan Monadjemi [1], Krishnaprasad Thirunarayan [1], Amit Sheth [1], Jyotishman Pathak [4]
[1] Department of Computer Science & Engineering, Wright State University, OH, USA
[2] Ohio State University, Columbus, OH, USA
[3] Department of Biological Science, Wright State University, OH, USA
[4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
Contact: yazdavar.2@wright.edu
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features, including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% of Americans (i.e., about 16 million) each year. According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, and work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well does the content of posted images (colors, aesthetics and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of a depressed online persona, and is it reliable enough to represent demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals, generated using multimodal content, that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online users' demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model was developed by BIBREF38 for determining users' gender by employing features such as screen name, full name, profile description and content on external resources (e.g., a personal blog). Employing features including emoticons, acronyms, slang, punctuation, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting a user's age group BIBREF39 . Utilizing users' life stage information such as secondary school student, college student, and employee, BIBREF40 builds an age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model was introduced for extracting age for Twitter users BIBREF41 . They also parse the description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed relying on homophily interaction information and content for predicting the age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender were highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (the crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male). Estimating age and gender from facial images by training convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
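A minimal sketch of the three age-extraction rules quoted above, applied to profile descriptions; the exact regular expressions, the reference year for "Born in X", and the sanity bounds are our assumptions.

```python
import re

AGE_PATTERNS = [
    re.compile(r"\bi\s*am\s*(\d{2})\s*years?\s*old\b", re.I),   # rule 1: "I am X years old"
    re.compile(r"\bborn\s*in\s*(\d{4})\b", re.I),               # rule 2: "Born in X" (a year)
    re.compile(r"\b(\d{2})\s*years?\s*old\b", re.I),            # rule 3: "X years old"
]

def extract_age(description: str, current_year: int = 2017):
    """Apply the three self-disclosure rules in order; return an age or None."""
    for idx, pattern in enumerate(AGE_PATTERNS):
        match = pattern.search(description)
        if match:
            value = int(match.group(1))
            age = current_year - value if idx == 1 else value    # rule 2 captures a birth year
            if 11 <= age <= 100:                                  # basic sanity filter
                return age
    return None

print(extract_age("17 years old, self-harm, anxiety, depression"))   # 17
print(extract_age("born in 1994 | coffee and cats"))                  # 23 (with current_year=2017)
```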
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
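A minimal sketch of the chi-square test of independence between gender and class; the cell counts below are illustrative placeholders that respect the reported totals (1464 users, roughly a 64%/36% split), not the exact counts behind the reported statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table (gender x class); placeholder counts only.
#                 depressed  control
table = np.array([[640,       290],    # female
                  [297,       237]])   # male

chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p:.3g}, dof = {dof}")
```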
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach, which uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images. Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 , with the control class showing significantly higher facial presence in both profile and media images (8% and 9% higher, respectively) than the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 in both the depressed and the control groups, we process the profile and shared images that contain at least one face (Table TABREF23 ). For photos that contain multiple faces, we measure the average emotion.
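A minimal sketch of averaging per-face emotion scores for a multi-face photo, as described above; the per-face dictionaries stand in for the output of a face-analysis service (the Face++ response format is not reproduced here).

```python
from statistics import mean

EKMAN = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def photo_emotions(per_face_scores):
    """Average each Ekman emotion over all detected faces in one photo."""
    if not per_face_scores:
        return None
    return {emo: mean(face.get(emo, 0.0) for face in per_face_scores) for emo in EKMAN}

# Hypothetical scores for a photo containing two detected faces.
faces = [
    {"anger": 0.02, "disgust": 0.01, "fear": 0.05, "joy": 0.70, "sadness": 0.12, "surprise": 0.10},
    {"anger": 0.10, "disgust": 0.03, "fear": 0.07, "joy": 0.30, "sadness": 0.40, "surprise": 0.10},
]
scores = photo_emotions(faces)
positive = scores["joy"] + scores["surprise"]
negative = sum(scores[e] for e in ("anger", "disgust", "fear", "sadness"))
print(scores, positive, negative)
```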
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlate with emotional signals captured from textual content using LIWC. This indicates that visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
Significance levels: ** alpha = 0.05, *** alpha = 0.05/223 (Bonferroni-corrected).
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
Thinking Style:
People's natural ways of analyzing and organizing complex events have a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning, whereas a lower value indicates a focus on narratives. Cognitive processing, in turn, measures problem solving; words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 ). A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction, including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users in different age groups (see Figure FIGREF39 -A, F and Table TABREF40 ). Overall, vulnerable younger users show lower analytical and cognitive processing scores, indicating less formal, logical reasoning.
Authenticity:
Authenticity measures the degree of honesty. It is often assessed by measuring present tense verbs and 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. Authenticity shows a decreasing trend with aging, and for depressed youngsters it is strikingly higher than for their control peers (Figure FIGREF39 -B).
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be the best age for self-esteem BIBREF71 , as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishing characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).
Informal Language Markers (Swear, Netspeak):
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). The Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon contains terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as people grow older (Figure FIGREF39 -H,I and Table TABREF40 ).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with 1055 degrees of freedom (df). The null hypothesis is that the sample means for the age groups are equal for each of the LIWC features.
Significance levels: *** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05.
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset.
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences between language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C) and making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control class (0.92 and 0.90) compared to content-based predictor (0.82). For age prediction (see Table TABREF42 ), textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average profile:0.51, Media:0.53).
However, not every user provides facial identity on his account (see Table TABREF21 ). We studied facial presentation for each age-group to examine any association between age-group, facial presentation and depressive behavior (see Table TABREF43 ). We can see youngsters in both depressed and control class are not likely to present their face on profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although content-based gender predictor was not as accurate as image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Algorithm (Ensemble Feature Selection):
  for each feature INLINEFORM0 : create a shuffled shadow copy INLINEFORM1
  train RndForrest( INLINEFORM0 ) on the extended data and calculate the importance Imp INLINEFORM1 of every real and shadow feature INLINEFORM2
  generate the next hypothesis INLINEFORM3 (does the real feature outperform its shadow?)
  once all hypotheses are generated, perform a statistical test INLINEFORM4 // binomial distribution
  if the test is significant INLINEFORM5 : the feature is important; otherwise it is not important
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: its interpretability with respect to the contribution of each feature, and its high predictive power. For prediction we have INLINEFORM0 , where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.
In particular, we optimize the loss function: INLINEFORM0 where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting weak learner to the negative gradient of loss function. Particularly, by estimating the loss function with Taylor expansion : INLINEFORM5 where its first expression is constant, the second and the third expressions are first ( INLINEFORM6 ) and second order derivatives ( INLINEFORM7 ) of the loss. INLINEFORM8
For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, INLINEFORM1 be subset of users from INLINEFORM2 belongs to the node INLINEFORM3 , and INLINEFORM4 denotes the prediction for node INLINEFORM5 . Then, for each user INLINEFORM6 belonging to INLINEFORM7 , INLINEFORM8 and INLINEFORM9 INLINEFORM10
Next, for each leaf node INLINEFORM0 , deriving w.r.t INLINEFORM1 : INLINEFORM2
and by substituting weights: INLINEFORM0
which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor tree. Although, the weak learners have high bias, the ensemble model produces a strong learner that effectively integrate the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates our multimodal framework outperform the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in 10-fold cross-validation setting on INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" of this user is considered high 48.43 (Median:36.95, Mean: 40.18) and this decreases the chance of this person being classified into the depressed group by the log-odds of -1.41. Depressed users have significantly lower "Analytic thinking" score compared to control class. Moreover, the 40.46 "Clout" score is a low value (Median: 62.22, Mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' is 112.03 and 113 respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class which justifies pull down of the log-odds by INLINEFORM2 . For network features, for instance, 'two_hop_neighborhood' for depressed users (Mean : 84) are less than that of control users (Mean: 154), and is reflected in pulling down the log-odds by -0.27.
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image feature, facial presence, and facial expressions.) | Random Forest classifier |
445e792ce7e699e960e2cb4fe217aeacdd88d392
Q: How does this framework facilitate demographic inference from social media?
Text:
Amir Hossein Yazdavar [1], Mohammad Saeid Mahdavinejad [1], Goonmeet Bajaj [2], William Romine [3], Amirhassan Monadjemi [1], Krishnaprasad Thirunarayan [1], Amit Sheth [1], Jyotishman Pathak [4]
[1] Department of Computer Science & Engineering, Wright State University, OH, USA
[2] Ohio State University, Columbus, OH, USA
[3] Department of Biological Science, Wright State University, OH, USA
[4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
Contact: yazdavar.2@wright.edu
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features, including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% of Americans (i.e., about 16 million) each year. According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, and work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well does the content of posted images (colors, aesthetics and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of a depressed online persona, and is it reliable enough to represent demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals, generated using multimodal content, that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online users' demographic information due to its numerous applications in healthcare BIBREF36, BIBREF37. A supervised model was developed by BIBREF38 for determining users' gender by employing features such as screen name, full name, profile description and content on external resources (e.g., personal blogs). Employing features including emoticons, acronyms, slang, punctuation, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting a user's age group BIBREF39. Utilizing users' life-stage information such as secondary school student, college student, and employee, BIBREF40 builds an age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model was introduced for extracting the age of Twitter users BIBREF41. They also parse descriptions for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. Another age inference model was developed relying on homophily interaction information and content for predicting the age of Twitter users BIBREF42. The limitations of textual content for predicting age and gender were highlighted by BIBREF43. They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (the crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male). Estimating age and gender from facial images by training convolutional neural networks (CNNs) for face recognition is an active line of research BIBREF44, BIBREF13, BIBREF45.
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies, e.g., for predicting demographics BIBREF36, BIBREF41, and users' depressive behavior BIBREF46, BIBREF47, BIBREF48. For instance, vulnerable individuals may employ depression-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl" (see Figure FIGREF15). We employ a large dataset of 45,000 self-reported depressed users introduced in BIBREF46, where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a clinical psychologist and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets), containing 3981 depressed and 4789 control users (who do not show any depressive behavior), was verified by two human judges BIBREF46. This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract users' age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41. We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old, 2. Born in X, 3. X years old, where X is a "date" or an age (e.g., 1994). We selected a subset of 1061 users from INLINEFORM0 who disclose their age as the gold-standard dataset INLINEFORM1. Of these 1061 users, 822 belong to the depressed class and 239 to the control class. Among the 3981 depressed users, 20.6% disclose their age, in contrast with only 4% (239/4789) of the control group, so self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2. The general trend, consistent with the results in BIBREF42, BIBREF49, is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50. Similar data collection procedures with comparable distributions have been used in many prior efforts BIBREF51, BIBREF49, BIBREF42. We discuss our approach to mitigate the impact of this bias in Section 4.1. The median age is 17 for the depressed class versus 19 for the control class, suggesting that either the likely-depressed user population is younger, or depressed youngsters are more likely to disclose their age to connect with their peers (social homophily) BIBREF51.
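To make the extraction rules concrete, here is a minimal Python sketch of the three patterns; the regular expressions and the assumed collection year are illustrative assumptions, not the exact implementation used for building INLINEFORM1.

```python
import re

# Illustrative patterns for the three age-extraction rules (assumptions, not the paper's code).
RULES = [
    re.compile(r"\bi\s*am\s*(\d{1,2})\s*years?\s*old\b", re.I),   # "I am X years old"
    re.compile(r"\bborn\s*in\s*(\d{4})\b", re.I),                  # "Born in X"
    re.compile(r"\b(\d{1,2})\s*years?\s*old\b", re.I),             # "X years old"
]

def extract_age(description, collection_year=2017):
    """Return the first age matched in a profile description, or None."""
    for rule in RULES:
        match = rule.search(description)
        if match:
            value = int(match.group(1))
            # A four-digit match is treated as a birth year and converted to an age.
            return collection_year - value if value > 1900 else value
    return None

print(extract_age("17 years old, self-harm, anxiety, depression"))  # -> 17
```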
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. Of these 1464 users, 64% belong to the depressed group and the rest (36%) to the control group. 23% of the likely-depressed users disclose their gender, which is considerably higher than the 12% observed for the control class. Once again, gender disclosure varies between the two groups. For statistical significance, we performed a chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19-A,D) show a positive association between the corresponding row and column variables, while red circles (negative residuals, see Figure FIGREF19-B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10: according to BIBREF52, more women than men are given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder, respectively. Our findings from Twitter data indicate there is a strong association (chi-square: 32.75, p-value: 1.04e-08) between being female and showing depressive behavior on Twitter.
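The reported association can be checked with a standard 2x2 chi-square test of independence. The counts below are placeholders (the real contingency table is built from the 1,464 gender-labeled users), but the call itself is what produces the chi-square statistic and p-value quoted above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder counts (NOT the paper's numbers): rows = {depressed, control},
# columns = {female, male}. The real table yields chi-square = 32.75, p = 1.04e-08.
table = np.array([[650, 287],
                  [300, 227]])
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), p)
```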
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content of both posted and profile images provides valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53. Additionally, as opposed to a typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing a user's online behavior are the emotions reflected in facial expressions BIBREF54, attributes contributing to the computational aesthetics BIBREF55, and the sentimental quotes they may subscribe to (Figure FIGREF15) BIBREF8.
Facial Presence:
For capturing facial presence, we rely on BIBREF56's approach that uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images. Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0, with the control class showing significantly higher facial presence in both profile and media images (8% and 9% higher, respectively) than the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. For each user u in INLINEFORM0, we process profile/shared images for both the depressed and the control groups, considering users with at least one face detected in their shared images (Table TABREF23). For photos that contain multiple faces, we measure the average emotion.
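A small sketch of the per-image aggregation step: it assumes a hypothetical list of per-face emotion scores (as one would obtain from the emotion-detection API) and simply averages each Ekman category across faces.

```python
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def average_emotions(face_scores):
    """face_scores: list of dicts, one per detected face, mapping an Ekman
    emotion to a confidence score. Returns the per-image average, or None."""
    if not face_scores:
        return None
    return {e: sum(f.get(e, 0.0) for f in face_scores) / len(face_scores)
            for e in EMOTIONS}

# Example with two hypothetical faces detected in one shared image.
faces = [{"joy": 0.8, "sadness": 0.1}, {"joy": 0.2, "sadness": 0.6}]
print(average_emotions(faces)["joy"])  # 0.5
```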
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlate with emotional signals captured from textual content utilizing LIWC. This indicates that visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55, BIBREF8, BIBREF57. Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58, BIBREF59. We measured the normalized red, green and blue channels, the mean of the original colors, and brightness and contrast relative to variations of luminance. We represent images in the Hue-Saturation-Value color space, which is intuitive for humans, and measure the mean and variance of saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue itself is not directly interpretable, high saturation indicates vividness and chromatic purity, which are more appealing to the human eye BIBREF8. Colorfulness is measured as a difference against a gray background BIBREF60. Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60. In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract such text and its sentiment score. As illustrated in Table TABREF26, vulnerable users tend to use less colorful (higher grayscale) profile and shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users post images that are less appealing to the human eye. We employ independent t-tests, adopting Bonferroni correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose a Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2, ** INLINEFORM3).
** alpha= 0.05, *** alpha = 0.05/223
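The following sketch shows how a few of these per-image statistics and the Bonferroni-corrected group comparison could be computed. The brightness and saturation statistics follow the HSV description above, while the colorfulness formula here is the common Hasler-Suesstrunk metric used as a stand-in for the BIBREF60 definition, and the per-group feature arrays are random placeholders.

```python
import numpy as np
from PIL import Image
from scipy.stats import ttest_ind

def image_stats(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b              # opponent channels (Hasler-Suesstrunk)
    return {
        "brightness": hsv[..., 2].mean(),
        "saturation_mean": hsv[..., 1].mean(),
        "saturation_var": hsv[..., 1].var(),
        "hue_mean": hsv[..., 0].mean(),
        "colorfulness": np.hypot(rg.std(), yb.std())
                        + 0.3 * np.hypot(rg.mean(), yb.mean()),
    }

# Bonferroni-corrected comparison of one feature between the two groups;
# the arrays below are placeholders for per-user feature values.
depressed_vals, control_vals = np.random.rand(100), np.random.rand(100)
t, p = ttest_ind(depressed_vals, control_vals, equal_var=False)
print(p < 0.05 / 223)   # 223 features -> corrected alpha, as in the table footnote
```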
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61, depressive behavior, demographic differences BIBREF43, BIBREF40, etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 and deictic language BIBREF63, while males tend to use more articles BIBREF64, which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65. For age analysis, the salient findings include older individuals using more future-tense verbs BIBREF62, triggering a shift in focus while aging. They also show more positive emotions BIBREF66 and employ fewer self-references (i.e., 'I', 'me') with greater first-person plural usage BIBREF62. Depressed users employ first-person pronouns more frequently BIBREF67 and repeatedly use negative emotion and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior and demographics. In particular, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19], (19,23], (23,34], (34,46], and (46,60]. Then, we apply LIWC to characterize linguistic styles for each age group for users in INLINEFORM1.
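Binning users into these age groups and summarizing a LIWC dimension per group is straightforward with pandas; the sketch below uses placeholder ages and scores and assumes the LIWC features have already been computed per user.

```python
import pandas as pd

# One row per user with an extracted age and a pre-computed LIWC score (placeholders).
users = pd.DataFrame({
    "age": [16, 18, 21, 30, 40, 55],
    "analytic": [25.0, 30.2, 41.5, 44.0, 52.3, 58.1],
})
bins = [14, 19, 23, 34, 46, 60]   # right-closed: (14,19], (19,23], (23,34], (34,46], (46,60]
users["age_group"] = pd.cut(users["age"], bins=bins)
print(users.groupby("age_group", observed=True)["analytic"].mean())
```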
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptor categories (e.g., percent of target words gleaned by the dictionary, or words longer than six letters - Sixltr), informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns).
Thinking Style:
Measuring people's natural ways of trying to analyze and organize complex events has a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning, whereas a lower value indicates a focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68, and is impacted by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62. We observe a similar pattern in our data (Table TABREF40). A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69. Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users in different age groups (see Figure FIGREF39-A, F and Table TABREF40). Overall, vulnerable younger users score lower on analytical thinking and cognitive processing ability.
Authenticity:
Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present-tense verbs and first-person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70. Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. Authenticity for depressed youngsters is strikingly higher than for their control peers, and there is a decreasing trend in Authenticity with aging (see Figure FIGREF39-B).
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be best for self-esteem BIBREF71, as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39-C and Table TABREF40). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishing characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First-person singular words are often seen as indicating interpersonal involvement, and their high usage is associated with negative affective states implying nervousness and depression BIBREF66. Consistent with prior studies, the frequency of first-person singular for depressed people is significantly higher compared to that of the control class. Similarly to BIBREF66, youngsters tend to use more first-person singular (e.g., I) and second-person singular (e.g., you) pronouns (Figure FIGREF39-G).
Informal Language Markers; Swear, Netspeak:
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72. We observed the same pattern in both the depressed and the control classes (Table TABREF40), although its rate is higher for depressed users BIBREF1. Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39-E). Also, the Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as we grow older (Figure FIGREF39-H,I and Table TABREF40).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means for each age group are equal for each of the LIWC features.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
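A per-feature test of this form can be run with scipy's one-way ANOVA; the sketch below uses synthetic per-group arrays for a single LIWC feature and the '***' threshold from the footnote above.

```python
import numpy as np
from scipy.stats import f_oneway

# One LIWC feature (e.g., Analytic) split by the five age groups; placeholder data only.
rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, scale=5, size=200) for m in (30, 33, 40, 45, 48)]
f_stat, p_value = f_oneway(*groups)
print(f_stat, p_value < 0.001)   # '***' significance threshold
```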
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blogs, and Facebook, showing promising results BIBREF73. Utilizing these two weighted lexica of terms, we predict the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1) using the following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, INLINEFORM1 represents the frequency of the term in the user-generated content INLINEFORM2, and INLINEFORM3 measures the total word count in INLINEFORM4. As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all 416 users above age 23, and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]). The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
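The lexicon-based predictor reduces to a frequency-weighted sum over a user's words. The sketch below illustrates that scoring with toy weights (not the BIBREF73 lexica) and an intercept term, which is included here as an assumption about the published lexica format.

```python
from collections import Counter

def lexicon_score(text, weights, intercept=0.0):
    """Frequency-weighted lexicon score: intercept + sum(weight * freq / total words)."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    score = intercept
    for term, freq in counts.items():
        score += weights.get(term, 0.0) * freq / total
    return score

# Toy weights for illustration only; the real age/gender lexica assign a learned
# weight to each of thousands of terms.
toy_age_weights = {"school": -2.0, "homework": -3.0, "mortgage": 6.0, "wife": 4.0}
print(lexicon_score("so much homework before school tomorrow", toy_age_weights, intercept=23.0))
```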
Prediction with Visual Imagery:
Inspired by BIBREF56's approach for facial landmark localization, we use their pretrained CNN, consisting of convolutional layers including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for the gender and age prediction tasks on INLINEFORM0 and INLINEFORM1, respectively, as shown in Table TABREF42 and Table TABREF44.
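The paper relies on BIBREF56's pretrained cascade; as a rough, hedged stand-in (not the authors' architecture), the same transfer-learning pattern can be sketched with a generic ImageNet-pretrained backbone whose classification head is replaced for the gender or age-group labels, assuming torchvision >= 0.13.

```python
import torch.nn as nn
from torchvision import models

# Stand-in only: a generic pretrained backbone with a new classification head;
# the coarse-to-fine cascade of BIBREF56 is not reproduced here.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # 2 gender classes (or 5 age groups)
# backbone can now be fine-tuned on profile images labeled via the ground-truth subsets.
```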
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences in language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39-A,B,C), making the prediction harder for older people BIBREF74. In this case, the other data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44), on average, the profile-image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) compared to the content-based predictor (0.82). For age prediction (see Table TABREF42), the textual-content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average, profile: 0.51, media: 0.53).
However, not every user provides facial identity on their account (see Table TABREF21). We studied facial presentation for each age group to examine any association between age group, facial presentation and depressive behavior (see Table TABREF43). Youngsters in both the depressed and control classes are not likely to present their face in the profile image; less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits an early fusion technique BIBREF32 in feature space and models each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of individual modality features. As opposed to a computationally expensive late fusion scheme, where each modality requires separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75. To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant ensemble learning method. It adds randomness to the data by creating shuffled copies of all features (shadow features), then trains a Random Forest classifier on the extended data. Iteratively, it checks whether an actual feature has a higher Z-score than its shadow feature (see Algorithm SECREF6 and Figure FIGREF45) BIBREF76.
Algorithm (Ensemble Feature Selection):
for each feature INLINEFORM0 in INLINEFORM1:
    run RndForrest(INLINEFORM0), calculate the importance Imp INLINEFORM1 INLINEFORM2, and generate the next hypothesis INLINEFORM3
once all hypotheses are generated:
    perform a statistical test INLINEFORM4 (binomial distribution) INLINEFORM5
    declare the feature important or unimportant accordingly
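A compact, hand-rolled sketch of the shadow-feature procedure above (a simplified stand-in for the BIBREF76 implementation, assuming scikit-learn and numpy >= 1.20): each iteration appends a shuffled copy of every column, fits a Random Forest, and counts how often a real feature outranks the best shadow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_hits(X, y, n_iter=20, seed=0):
    """Count how often each real feature beats the best shuffled (shadow) feature."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_iter):
        shadows = rng.permuted(X, axis=0)                 # each column shuffled independently
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(np.hstack([X, shadows]), y)
        imp = rf.feature_importances_
        real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
        hits += (real > shadow.max()).astype(int)
    return hits   # compare against a binomial threshold to accept/reject each feature

# Tiny synthetic usage example.
X = np.random.default_rng(1).normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
print(shadow_feature_hits(X, y, n_iter=5))
```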
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: its interpretability with respect to the contributions of each feature, and its high predictive power. For prediction we have INLINEFORM0, where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.
In particular, we optimize the loss function INLINEFORM0, where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting a weak learner to the negative gradient of the loss function, estimating the loss function with a Taylor expansion: INLINEFORM5, where the first expression is constant, and the second and third expressions are the first (INLINEFORM6) and second order derivatives (INLINEFORM7) of the loss. INLINEFORM8
For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, let INLINEFORM1 be the subset of users from INLINEFORM2 belonging to node INLINEFORM3, and let INLINEFORM4 denote the prediction for node INLINEFORM5. Then, for each user INLINEFORM6 belonging to INLINEFORM7, INLINEFORM8 and INLINEFORM9 INLINEFORM10
Next, for each leaf node INLINEFORM0, differentiating w.r.t. INLINEFORM1: INLINEFORM2
and by substituting the weights: INLINEFORM0
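This formulation (second-order Taylor approximation, L1/L2-regularized leaf weights, sequentially built trees) matches the standard gradient-boosted tree setup. The sketch below uses the xgboost library on placeholder fused features purely to illustrate the technique, not as the authors' exact implementation; it assumes a recent xgboost and scikit-learn.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

# Placeholder fused feature matrix (visual + textual + network columns concatenated).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05,
    reg_alpha=0.1, reg_lambda=1.0,        # the L1/L2 penalties referenced above
    eval_metric="logloss",
)
print(cross_val_score(model, X, y, cv=10, scoring="f1").mean())  # 10-fold CV, as in the paper
```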
which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77. Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset.

Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user is considered high at 48.43 (median: 36.95, mean: 40.18), and this decreases the chance of this person being classified into the depressed group by log-odds of -1.41; depressed users have a significantly lower "Analytic thinking" score compared to the control class. Moreover, the "Clout" score of 40.46 is a low value (median: 62.22, mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, the mean and median of 'shared_colorfulness' are 112.03 and 113, respectively; the value of 136.71 is high and thus decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36, the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2. For network features, the 'two_hop_neighborhood' for depressed users (mean: 84) is smaller than that of control users (mean: 154), which is reflected in pulling down the log-odds by -0.27.
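As a quick check of the arithmetic in the example above, the additive log-odds contributions are converted to a probability through the logistic function:

```python
import math

def logodds_to_prob(x):
    return 1.0 / (1.0 + math.exp(-x))

print(round(logodds_to_prob(0.31), 2))  # ~0.58; the paper quotes ~57% using exp(-0.3)
```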
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image features, facial presence, and facial expressions). | Demographic information is predicted using weighted lexicon of terms.
a3b1520e3da29d64af2b6e22ff15d330026d0b36 | a3b1520e3da29d64af2b6e22ff15d330026d0b36_0 | Q: What types of features are used from each data type?
Text:
Amir Hossein Yazdavar (1), Mohammad Saeid Mahdavinejad (1), Goonmeet Bajaj (2), William Romine (3), Amirhassan Monadjemi (1), Krishnaprasad Thirunarayan (1), Amit Sheth (1), Jyotishman Pathak (4)
(1) Department of Computer Science & Engineering, Wright State University, OH, USA; (2) Ohio State University, Columbus, OH, USA; (3) Department of Biological Science, Wright State University, OH, USA; (4) Division of Health Informatics, Weill Cornell University, New York, NY, USA
Contact: yazdavar.2@wright.edu
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% of Americans (i.e., about 16 million) each year. According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0. Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with a very small group of respondents). In contrast, the widespread adoption of social media, where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems, has not been adequately tapped for studying mental illnesses such as depression. The visual and textual content shared on different social media platforms like Twitter offers new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1, BIBREF2. However, except for a few attempts BIBREF3, BIBREF4, BIBREF5, BIBREF6, these investigations have seldom studied the extraction of emotional state from the visual content of posted/profile images. Visual content can express users' emotions more vividly, and psychologists have noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, tweets with image links get twice as much attention as those without, and video-linked tweets drive up engagement. The ease and naturalness of expression through visual imagery can serve to glean depression indicators in vulnerable individuals who often seek social support through social media BIBREF7. Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self. In this regard, the choice of profile image can be a proxy for the online persona BIBREF8, providing a window into an individual's mental health status. For instance, choosing an image of a girl's emaciated legs covered with several cuts as a profile image portrays a negative self-view BIBREF9.
Inferring demographic information like gender and age can be crucial for stratifying our understanding of the population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10, and the national psychiatric morbidity survey in Britain has shown a higher risk of depression in women BIBREF11. On the other hand, suicide rates for men are three to five times higher compared to those of women BIBREF12.
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, and work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis tasks, can lead to determining demographic features such as age and gender BIBREF13. We show that by determining and integrating a heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile description (n-gram, emotion, sentiment), and finally sociability from the ego-network and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
** alpha= 0.05, *** alpha = 0.05/223
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
Thinking Style:
Measuring people's natural ways of trying to analyze, and organize complex events have strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as "think," "realize," and "know" indicates the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages . It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 .) A recent study highlights how depression affects brain and thinking at molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed a notable differences in the ability to think analytically in depressed and control users in different age groups (see Figure FIGREF39 - A, F and Table TABREF40 ). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.
Authenticity:
Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs, 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successfull lie as a mental growth. There is a decreasing trend of the Authenticity with aging (see Figure FIGREF39 -B.) Authenticity for depressed youngsters is strikingly higher than their control peers. It decreases with age (Figure FIGREF39 -B.)
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows that age 60 to be best for self-esteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishable characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).
Informal Language Markers; Swear, Netspeak:
Several studies highlighted the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although it's rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). Also, Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
Sexual lexicon contains terms like "horny", "love" and "incest", and body terms like "ache", "heart", and "cough". Both start with a higher rate for depressed users while decreasing gradually while growing up, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 .)
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means' for each age group are similar for each of the LIWC features.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset.
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences between language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C) and making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control class (0.92 and 0.90) compared to content-based predictor (0.82). For age prediction (see Table TABREF42 ), textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average profile:0.51, Media:0.53).
However, not every user provides facial identity on his account (see Table TABREF21 ). We studied facial presentation for each age-group to examine any association between age-group, facial presentation and depressive behavior (see Table TABREF43 ). We can see youngsters in both depressed and control class are not likely to present their face on profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although content-based gender predictor was not as accurate as image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Main each Feature INLINEFORM0 INLINEFORM1
RndForrest( INLINEFORM0 ) Calculate Imp INLINEFORM1 INLINEFORM2 Generate next hypothesis , INLINEFORM3 Once all hypothesis generated Perform Statistical Test INLINEFORM4 //Binomial Distribution INLINEFORM5 Feature is important Feature is important
Ensemble Feature Selection
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners with two main advantages; its interpretability with respect to the contributions of each feature and its high predictive power. For prediction we have INLINEFORM0 where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.
In particular, we optimize the loss function INLINEFORM0, where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new weak learner INLINEFORM4 is obtained by fitting it to the negative gradient of the loss function. Specifically, the loss is approximated by a Taylor expansion, INLINEFORM5, whose first term is constant and whose second and third terms involve the first-order (INLINEFORM6) and second-order (INLINEFORM7) derivatives of the loss. INLINEFORM8
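The display equations referred to by these placeholders were lost in extraction; for reference, a standard form of this second-order objective for tree boosting, together with the optimal leaf weights derived in the next paragraph, is (standard gradient-boosted-trees notation, not necessarily the paper's exact symbols):

```latex
\mathcal{L}^{(t)} \approx \sum_{i=1}^{n}\Big[\, g_i f_t(x_i) + \tfrac{1}{2} h_i f_t(x_i)^2 \Big]
  + \gamma k + \tfrac{1}{2}\lambda \sum_{j=1}^{k} w_j^{2}
\qquad\Longrightarrow\qquad
w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},
\quad
\mathcal{L}^{(t)}_{\min} = -\tfrac{1}{2}\sum_{j=1}^{k}
  \frac{\big(\sum_{i \in I_j} g_i\big)^{2}}{\sum_{i \in I_j} h_i + \lambda} + \gamma k
```

Here g_i and h_i are the first- and second-order derivatives of the loss for user i, k is the number of leaf nodes, I_j is the set of users falling into leaf j, and w_j is the weight of that leaf.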
For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, let INLINEFORM1 be the subset of users from INLINEFORM2 that belongs to node INLINEFORM3, and let INLINEFORM4 denote the prediction for node INLINEFORM5. Then, for each user INLINEFORM6 belonging to INLINEFORM7, INLINEFORM8 and INLINEFORM9 INLINEFORM10

Next, for each leaf node INLINEFORM0, deriving with respect to INLINEFORM1: INLINEFORM2

and by substituting the weights: INLINEFORM0
which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates them by reducing bias and variance (the ultimate goal of supervised models) BIBREF77. Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset.

Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies as each feature is added to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.31)) ≈ 0.57). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user is considered high at 48.43 (median: 36.95, mean: 40.18), and this decreases the chance of this person being classified into the depressed group by a log-odds of -1.41; depressed users have a significantly lower "Analytic thinking" score than the control class. Moreover, the "Clout" score of 40.46 is low (median: 62.22, mean: 57.17) and further decreases the chance of being classified as depressed. With respect to the visual features, the mean and median of 'shared_colorfulness' are 112.03 and 113, respectively; the value of 136.71 is high and thus decreases the chance of being depressed for this specific user by a log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is high compared to the depressed-class mean of 0.36, which justifies pulling the log-odds down by INLINEFORM2. For network features, 'two_hop_neighborhood' is smaller for depressed users (mean: 84) than for control users (mean: 154), which is reflected in pulling the log-odds down by -0.27.
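A minimal end-to-end sketch of this prediction step, assuming the fused feature matrix X and labels y from the early-fusion sketch above; scikit-learn's gradient boosting and the hyperparameters shown stand in for the exact booster used in the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score, accuracy_score, confusion_matrix

def evaluate_10fold(X: np.ndarray, y: np.ndarray) -> dict:
    """10-fold CV reporting average specificity, sensitivity, F1, and accuracy."""
    spec, sens, f1s, accs = [], [], [], []
    for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred, labels=[0, 1]).ravel()
        spec.append(tn / (tn + fp))
        sens.append(tp / (tp + fn))
        f1s.append(f1_score(y[test_idx], pred))
        accs.append(accuracy_score(y[test_idx], pred))
    return {name: float(np.mean(vals)) for name, vals in
            {"specificity": spec, "sensitivity": sens, "f1": f1s, "accuracy": accs}.items()}

# Log-odds to probability, as in the waterfall example: 0.31 -> ~0.57.
prob_depressed = 1.0 / (1.0 + np.exp(-0.31))
```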
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content-based, content-network, and image-based models (built on the aforementioned general image features, facial presence, and facial expressions).

facial presence, Facial Expression, General Image Features, textual content, analytical thinking, clout, authenticity, emotional tone, Sixltr, informal language markers, 1st person singular pronouns
Q: How is the data annotated?
The data are self-reported by Twitter users and then verified by two human experts.
Q: Where does the information on individual-level demographics come from?
Text: 0pt*0*0
0pt*0*0
0pt*0*0 0.95
1]Amir Hossein Yazdavar 1]Mohammad Saeid Mahdavinejad 2]Goonmeet Bajaj
3]William Romine 1]Amirhassan Monadjemi 1]Krishnaprasad Thirunarayan
1]Amit Sheth 4]Jyotishman Pathak [1]Department of Computer Science & Engineering, Wright State University, OH, USA [2]Ohio State University, Columbus, OH, USA [3]Department of Biological Science, Wright State University, OH, USA [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
[1] yazdavar.2@wright.edu
With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups . Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life loses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data, can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise; negative emotions are anger, disgust, fear, and sadness. For each user u in INLINEFORM0 , in both the depressed and control groups, we process the profile/shared images that contain at least one face (Table TABREF23 ). For photos that contain multiple faces, we average the emotion scores across faces.
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlate with emotional signals captured from textual content using LIWC. This indicates that visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
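A minimal sketch of the per-image emotion aggregation described above; the list-of-dicts input is an assumed simplification, not the actual Face++ response schema:

```python
from collections import defaultdict
from typing import Dict, List

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def average_emotions(faces: List[Dict[str, float]]) -> Dict[str, float]:
    """Average Ekman emotion scores over all faces detected in one image."""
    if not faces:
        return {}
    totals = defaultdict(float)
    for face in faces:
        for emotion in EMOTIONS:
            totals[emotion] += face.get(emotion, 0.0)
    return {emotion: totals[emotion] / len(faces) for emotion in EMOTIONS}

faces = [
    {"anger": 0.1, "disgust": 0.0, "fear": 0.2, "joy": 0.1, "sadness": 0.5, "surprise": 0.1},
    {"anger": 0.0, "disgust": 0.1, "fear": 0.1, "joy": 0.3, "sadness": 0.4, "surprise": 0.1},
]
print(average_emotions(faces))
```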
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, and blue channels, the mean of the original colors, and brightness and contrast relative to variations in luminance. We represent images in the Hue-Saturation-Value (HSV) color space, which is intuitive for humans, and measure the mean and variance of saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue by itself is not readily interpretable, high saturation indicates vividness and chromatic purity, which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against a gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 ; in color reproduction, it is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract embedded text and compute its sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile and shared images to convey their negative feelings, and to share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users post images that are less appealing to the human eye. We employ independent t-tests, adopting Bonferroni correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose a Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
** alpha = 0.05, *** alpha = 0.05/223
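The aesthetic measurements listed above can be approximated as in the sketch below; the colorfulness formula is one common formulation and may differ from the exact definition in BIBREF60 :

```python
import numpy as np
from PIL import Image

def image_features(path: str) -> dict:
    img = Image.open(path).convert("RGB")
    rgb = np.asarray(img, dtype=np.float64) / 255.0
    hsv = np.asarray(img.convert("HSV"), dtype=np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    # One common colorfulness approximation (Hasler & Suesstrunk style).
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = np.sqrt(rg.std() ** 2 + yb.std() ** 2) \
        + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return {
        "brightness": luminance.mean(),
        "contrast": luminance.std(),
        "hue_mean": hsv[..., 0].mean(), "hue_var": hsv[..., 0].var(),
        "sat_mean": hsv[..., 1].mean(), "sat_var": hsv[..., 1].var(),
        "colorfulness": colorfulness,
    }
```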
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression when analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 and deictic language BIBREF63 , while males tend to use more articles BIBREF64 , which characterizes concrete thinking, as well as formal, informational and affirmation words BIBREF65 . For age, the salient findings include older individuals using more future-tense verbs BIBREF62 , signaling a shift in focus with aging. They also show more positive emotions BIBREF66 and employ fewer self-references (i.e., 'I', 'me') and more first-person plural pronouns BIBREF62 . Depressed users employ first-person pronouns more frequently BIBREF67 and repeatedly use negative-emotion and anger words. We analyzed psycholinguistic cues and language style to study their association with depressive behavior as well as demographics. In particular, we adopt Levinson's adult development grouping, which partitions users in INLINEFORM0 into 5 age groups: (14,19], (19,23], (23,34], (34,46], and (46,60]. Then, we apply LIWC to characterize linguistic styles for each age group for users in INLINEFORM1 .
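A small sketch of the Levinson-style age binning and per-group LIWC summarization; the frame and column names are illustrative assumptions:

```python
import pandas as pd

# Hypothetical frame: one row per user with an age and LIWC scores.
df = pd.DataFrame({
    "age": [16, 21, 28, 40, 55],
    "analytic": [32.1, 35.6, 41.0, 47.3, 50.2],
    "clout": [40.0, 45.2, 55.1, 60.3, 63.8],
})

# Levinson-style adult development bins used in the analysis above.
bins = [14, 19, 23, 34, 46, 60]
labels = ["(14,19]", "(19,23]", "(23,34]", "(34,46]", "(46,60]"]
df["age_group"] = pd.cut(df["age"], bins=bins, labels=labels)

print(df.groupby("age_group", observed=True)[["analytic", "clout"]].mean())
```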
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptor categories (e.g., the percentage of target words captured by the dictionary, or words longer than six letters - Sixltr), informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., first-person singular pronouns).
Thinking Style:
People's natural ways of analyzing and organizing complex events have a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning, whereas a lower value indicates a focus on narratives. Cognitive processing, in turn, measures in-mind problem solving; words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 and is shaped by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity and better comprehend nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 ). A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction, including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users across age groups (see Figure FIGREF39 -A, F and Table TABREF40 ). Overall, vulnerable younger users score lower on analytical thinking and cognitive processing.
Authenticity:
Authenticity measures the degree of honesty. It is often assessed by measuring present-tense verbs and first-person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. Authenticity shows a decreasing trend with age (see Figure FIGREF39 -B), and for depressed youngsters it is strikingly higher than for their control peers.
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable with respect to relationships and work. A recent study shows age 60 to be the peak for self-esteem BIBREF71 , as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishing characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First-person singular words are often seen as indicating interpersonal involvement, and their heavy usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, the frequency of first-person singular pronouns for depressed people is significantly higher than for the control class. Similarly to BIBREF66 , youngsters tend to use more first-person singular (e.g., I) and second-person singular (e.g., you) pronouns (Figure FIGREF39 -G).
Informal Language Markers (Swear, Netspeak):
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). The Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon contains terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 ).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of the various factors and validate our findings above. Table TABREF40 summarizes the results, with 1055 degrees of freedom (df). The null hypothesis is that, for each LIWC feature, the sample means of all age groups are equal.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
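The per-feature one-way ANOVA can be sketched with scipy as below; the group scores are toy values, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical "Analytic thinking" scores for three of the age groups.
group_14_19 = [30.2, 28.5, 35.1, 31.0]
group_19_23 = [36.4, 33.9, 40.2, 38.7]
group_23_34 = [44.1, 47.5, 42.3, 45.0]

f_stat, p_value = f_oneway(group_14_19, group_19_23, group_23_34)
# Reject the null hypothesis of equal group means when p < alpha.
print(f"F={f_stat:.2f}, p={p_value:.4f}")
```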
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blogs, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexica, we predict the demographic information (age or gender) of a user from their generated content $c$ as $\hat{d}(c) = \sum_{t \in lex} w_t \cdot \frac{freq(t, c)}{|c|}$,
where $w_t$ is the lexicon weight of term $t$, $freq(t, c)$ represents the frequency of the term in the user-generated content $c$, and $|c|$ measures the total word count in $c$. As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all 416 users above age 23, and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]). The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
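A minimal sketch of the lexicon-weighted scoring formalized above; the tiny lexicon and its weights are invented stand-ins for the published lexica of BIBREF73 :

```python
import re
from collections import Counter

def lexicon_score(text: str, lexicon: dict, intercept: float = 0.0) -> float:
    """Weighted-lexicon estimate: sum of term weights scaled by relative frequency."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return intercept + sum(w * counts[t] / total for t, w in lexicon.items())

# Toy age lexicon (weights are invented for illustration only); some
# published lexica also ship with an intercept term.
age_lexicon = {"school": -2.1, "homework": -3.0, "mortgage": 4.2, "wife": 3.1}
print(lexicon_score("so much homework before school tomorrow", age_lexicon, intercept=23.0))
```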
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach to facial landmark localization, we use their pretrained CNN, consisting of convolutional, unshared, and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance of the gender and age prediction tasks on INLINEFORM0 and INLINEFORM1 , respectively, as shown in Table TABREF42 and Table TABREF44 .
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial, as the differences in language cues between age groups above 35 tend to become smaller (see Figure FIGREF39 -A,B,C), making the prediction harder for older people BIBREF74 . In this case, another data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) than the content-based predictor (0.82). For age prediction (see Table TABREF42 ), the textual content-based predictor (0.60 on average) outperforms both visual-based predictors (on average, profile: 0.51, media: 0.53).
However, not every user reveals their face on their account (see Table TABREF21 ). We studied facial presentation for each age group to examine any association between age group, facial presentation, and depressive behavior (see Table TABREF43 ). Youngsters in both the depressed and control classes are unlikely to present their face in their profile image: less than 3% of vulnerable users between 11 and 19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits the early fusion technique BIBREF32 in feature space and models each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of the individual modality features. As opposed to the computationally expensive late fusion scheme, where each modality requires separate supervised modeling, this approach reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant ensemble feature selection method. It adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether an actual feature has a higher Z-score than its shadow feature (see Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Main: for each feature INLINEFORM0 in the feature set INLINEFORM1 :
    extend the data with shuffled shadow copies of all features and train RndForrest( INLINEFORM0 ); calculate the importance Imp INLINEFORM1 INLINEFORM2 ; record whether the real feature outscores its best shadow and generate the next hypothesis INLINEFORM3 .
Once all hypotheses are generated: perform a statistical test INLINEFORM4 // binomial distribution INLINEFORM5
    if the test is significant, the feature is important; otherwise, the feature is not important.
Ensemble Feature Selection
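A condensed sketch of the shadow-feature ("all-relevant") selection procedure outlined above, written in the spirit of Boruta BIBREF76 rather than as the exact implementation used here:

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_selection(X: np.ndarray, y: np.ndarray,
                             n_iter: int = 20, alpha: float = 0.05):
    """Return indices of features that repeatedly beat their shuffled shadows."""
    rng = np.random.default_rng(0)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for _ in range(n_iter):
        shadows = rng.permuted(X, axis=0)           # each column shuffled independently
        extended = np.hstack([X, shadows])
        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(extended, y)
        importances = forest.feature_importances_
        threshold = importances[n_features:].max()  # best shadow importance
        hits += importances[:n_features] > threshold
    # Binomial test of the number of "wins" against chance (p = 0.5).
    return [j for j in range(n_features)
            if binomtest(int(hits[j]), n_iter, 0.5, alternative="greater").pvalue < alpha]
```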
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: interpretability with respect to the contribution of each feature, and high predictive power. The prediction is $\hat{y}_u = \sum_{k=1}^{K} f_k(x_u)$, where each $f_k$ is a weak learner and $\hat{y}_u$ denotes the final prediction for user $u$.
In particular, we optimize the regularized loss function $L = \sum_{u} l(\hat{y}_u, y_u) + \sum_{k} \Omega(f_k)$, where $\Omega$ incorporates $L_1$ and $L_2$ regularization. In each iteration $t$, the new weak learner $f_t$ is obtained by fitting it to the negative gradient of the loss function. Approximating the loss with a second-order Taylor expansion gives $L^{(t)} \approx \sum_{u} \big[ l(\hat{y}_u^{(t-1)}, y_u) + g_u f_t(x_u) + \tfrac{1}{2} h_u f_t^{2}(x_u) \big] + \Omega(f_t)$, where the first term is constant and the second and third terms involve the first-order ($g_u$) and second-order ($h_u$) derivatives of the loss.
For exploring the weak learners, assume $f_t$ has $k$ leaf nodes, let $I_j$ denote the subset of users that falls into leaf node $j$, and let $w_j$ denote the prediction (weight) of node $j$. Then, for each user $u$ belonging to $I_j$, $f_t(x_u) = w_j$, and the objective becomes $\sum_{j=1}^{k} \big[ (\sum_{u \in I_j} g_u) w_j + \tfrac{1}{2} (\sum_{u \in I_j} h_u + \lambda) w_j^{2} \big] + \gamma k$.
Next, for each leaf node $j$, setting the derivative with respect to $w_j$ to zero yields $w_j^{*} = - \frac{\sum_{u \in I_j} g_u}{\sum_{u \in I_j} h_u + \lambda}$,
and by substituting the weights back: $L^{(t)} = - \tfrac{1}{2} \sum_{j=1}^{k} \frac{(\sum_{u \in I_j} g_u)^{2}}{\sum_{u \in I_j} h_u + \lambda} + \gamma k$, which represents the loss for a fixed tree structure with $k$ leaf nodes.
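To make the closed-form leaf solution concrete, the following sketch computes the optimal leaf weight and its objective contribution from per-user gradients and Hessians of a logistic loss; the regularization constant is an assumed value:

```python
import numpy as np

def leaf_weight_and_gain(g: np.ndarray, h: np.ndarray, lam: float = 1.0) -> tuple:
    """Optimal weight w* = -G/(H+lam) and its objective contribution -G^2 / (2(H+lam))."""
    G, H = g.sum(), h.sum()
    w_star = -G / (H + lam)
    contribution = -0.5 * G ** 2 / (H + lam)
    return w_star, contribution

# Gradients/Hessians of the logistic loss for a toy leaf of four users.
p = np.array([0.6, 0.7, 0.4, 0.8])   # current predicted probabilities
y = np.array([1, 1, 0, 1])           # true labels
g = p - y                            # first-order derivatives
h = p * (1 - p)                      # second-order derivatives
print(leaf_weight_and_gain(g, h))
```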
The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates them by reducing both bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 shows that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature added to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user is considered high at 48.43 (median: 36.95, mean: 40.18), and this decreases the chance of this person being classified into the depressed group by log-odds of -1.41; depressed users have a significantly lower "Analytic thinking" score than the control class. Moreover, the "Clout" score of 40.46 is a low value (median: 62.22, mean: 57.17), and it decreases the chance of being classified as depressed. With respect to the visual features, the mean and median of 'shared_colorfulness' are 112.03 and 113, respectively; the value of 136.71 is therefore high and decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is high compared to 0.36, the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2 . For network features, the 'two_hop_neighborhood' of depressed users (mean: 84) is smaller than that of control users (mean: 154), which is reflected in pulling down the log-odds by -0.27.
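The waterfall-style reading of the example above amounts to accumulating per-feature log-odds contributions and mapping the running total through a sigmoid; most contribution values below are illustrative, and only a few are taken from the example in the text:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

base_log_odds = 0.0
contributions = {            # illustrative per-feature log-odds shifts
    "analytic_thinking": -1.41,
    "clout": -0.50,
    "shared_colorfulness": -0.54,
    "first_person_singular": +1.10,
    "profile_naturalness": -0.30,
    "two_hop_neighborhood": -0.27,
}

running = base_log_odds
for feature, delta in contributions.items():
    running += delta
    print(f"{feature:24s} {delta:+.2f} -> P(depressed) = {sigmoid(running):.2f}")
```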
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content-based, content-network, and image-based models (built on the aforementioned general image features, facial presence, and facial expressions). | From Twitter profile descriptions of the users. |
98515bd97e4fae6bfce2d164659cd75e87a9fc89 | 98515bd97e4fae6bfce2d164659cd75e87a9fc89_0 | Q: What is the source of the user interaction data?
Text:
Amir Hossein Yazdavar [1], Mohammad Saeid Mahdavinejad [1], Goonmeet Bajaj [2], William Romine [3], Amirhassan Monadjemi [1], Krishnaprasad Thirunarayan [1], Amit Sheth [1], Jyotishman Pathak [4]
[1] Department of Computer Science & Engineering, Wright State University, OH, USA; [2] Ohio State University, Columbus, OH, USA; [3] Department of Biological Science, Wright State University, OH, USA; [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
Contact: yazdavar.2@wright.edu
With the ubiquity of social media platforms, millions of people share their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features, including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-score by 5 percent) and facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% of Americans (about 16 million) each year. According to the World Mental Health Survey conducted in 17 countries, on average about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with a very small group of respondents). In contrast, the widespread adoption of social media, where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems, has not been adequately tapped for studying mental illnesses such as depression. The visual and textual content shared on social media platforms like Twitter offers new opportunities for a deeper understanding of self-expressed depression at both the individual and community level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied the extraction of emotional state from the visual content of posted/profile images. Visual content can express users' emotions more vividly, and psychologists have noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide, and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, tweets with image links get twice as much attention as those without, and video-linked tweets drive up engagement. The ease and naturalness of expression through visual imagery can serve to glean depression indicators in vulnerable individuals, who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self. In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing an image of emaciated legs covered with several cuts as a profile image portrays a negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of the population-level epidemiology of mental health disorders. Relying on electronic health record data, previous studies explored gender differences in depressive behavior from different angles, including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 , and the national psychiatric morbidity survey in Britain has shown a higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher than those for women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary across age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, and work and relationship issues. Senior adults develop depression from common late-life issues: social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
Recent advancements in deep neural networks, specifically for image analysis tasks, make it possible to determine demographic features such as age and gender BIBREF13 . We show that by determining and integrating a heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, language features from both textual content and the profile description (n-grams, emotion, sentiment), and finally sociability from the ego-network and user engagement – we can reliably detect likely depressed individuals in a dataset of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well does the content of posted images (colors, aesthetics and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture reveal any psychological traits of the depressed online persona, and is it reliable enough to represent demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals, derived from multimodal content, that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of a user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, CLP 2016 BIBREF28 defined a shared task on detecting the severity of mental health issues from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at the message, individual, or community level. The recent emergence of photo-sharing platforms such as Instagram has attracted researchers' attention to studying people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 and happiness trends BIBREF30 to studying medical concerns BIBREF31 . Researchers have shown that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure, relating visual attributes to mental health disclosures on Instagram, was highlighted by BIBREF3 , BIBREF5 , where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality for understanding user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning approach was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representation by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Our choice of a multimodal prediction framework is intended to improve upon prior work involving the use of images in multimodal depression analysis BIBREF33 and prior work studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is growing interest in understanding online users' demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model was developed by BIBREF38 for determining users' gender by employing features such as screen name, full name, profile description and content on external resources (e.g., personal blogs). Employing features including emoticons, acronyms, slang, punctuation, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, posting time, and commenting activity, a supervised model was built for predicting a user's age group BIBREF39 . Utilizing users' life stage information, such as secondary school student, college student, and employee, BIBREF40 builds an age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model was introduced for extracting the age of Twitter users BIBREF41 ; they also parse the description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. Another age inference model was developed relying on homophily interaction information and content for predicting the age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender were highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (the crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male). Estimating age and gender from facial images by training convolutional neural networks (CNNs) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data in numerous social media analytic studies, e.g., for predicting demographics BIBREF36 , BIBREF41 and users' depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depression-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl" (see Figure FIGREF15 ). We employ a large dataset of 45,000 self-reported depressed users introduced in BIBREF46 , where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a psychologist clinician and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets), containing 3981 depressed and 4789 control users (who do not show any depressive behavior), was verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user, such as profile descriptions, followers_count, created_at, and profile_image_url.
| Sociability from ego-network on Twitter |
53bf6238baa29a10f4ff91656c470609c16320e1 | 53bf6238baa29a10f4ff91656c470609c16320e1_0 | Q: What is the source of the textual data?
Text: 0pt*0*0
0pt*0*0
0pt*0*0 0.95
1]Amir Hossein Yazdavar 1]Mohammad Saeid Mahdavinejad 2]Goonmeet Bajaj
3]William Romine 1]Amirhassan Monadjemi 1]Krishnaprasad Thirunarayan
1]Amit Sheth 4]Jyotishman Pathak [1]Department of Computer Science & Engineering, Wright State University, OH, USA [2]Ohio State University, Columbus, OH, USA [3]Department of Biological Science, Wright State University, OH, USA [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA
[1] yazdavar.2@wright.edu
With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of the population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10, and the national psychiatric morbidity survey in Britain has shown a higher risk of depression in women BIBREF11. On the other hand, suicide rates for men are three to five times higher than those for women BIBREF12.
Although depression can affect anyone at any age, signs and triggers of depression vary across age groups. Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, and work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life losses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data can shed better light on the population-level epidemiology of depression.
Recent advancements in deep neural networks, specifically for image analysis tasks, make it possible to determine demographic features such as age and gender BIBREF13. We show that by determining and integrating a heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, language features from both textual content and the profile description (n-gram, emotion, sentiment), and finally sociability from the ego-network and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well does the content of posted images (colors, aesthetics and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture reveal any psychological traits of a depressed online persona? Is it reliable enough to represent demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals, generated using multimodal content, that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14, BIBREF15 or in an individual BIBREF1, BIBREF16, BIBREF17, BIBREF18. Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20, Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21. More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of a user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF2, BIBREF27. Moreover, CLP 2016 BIBREF28 defined a shared task on detecting the severity of mental health issues from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at the message level, individual level or community level. The recent emergence of photo-sharing platforms such as Instagram has attracted researchers' attention to studying people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 and happiness trends BIBREF30 to studying medical concerns BIBREF31. Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4. The role of visual imagery as a mechanism of self-disclosure, by relating visual attributes to mental health disclosures on Instagram, was highlighted by BIBREF3, BIBREF5, where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality for understanding user behavior on social media was highlighted by BIBREF32. More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32. Similarly, a multimodal depressive dictionary learning approach was proposed to detect depressed users on Twitter BIBREF33. They provide sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34, topic-level features, and domain-specific features. In particular, our choice of multimodal prediction framework is intended to improve upon prior works involving the use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6, BIBREF35.
Demographic information inference on Social Media:
There is a growing interest in understanding online users' demographic information due to its numerous applications in healthcare BIBREF36, BIBREF37. A supervised model was developed by BIBREF38 for determining users' gender by employing features such as screen name, full name, profile description and content on external resources (e.g., a personal blog). Employing features including emoticons, acronyms, slang, punctuation, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting a user's age group BIBREF39. Utilizing users' life stage information such as secondary school student, college student, and employee, BIBREF40 builds an age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model was introduced for extracting the age of Twitter users BIBREF41. They also parse the description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed relying on homophily interaction information and content for predicting the age of Twitter users BIBREF42. The limitations of textual content for predicting age and gender were highlighted by BIBREF43. They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (the crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male). Estimating age and gender from facial images by training convolutional neural networks (CNNs) for face recognition is an active line of research BIBREF44, BIBREF13, BIBREF45.
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies, e.g., for predicting demographics BIBREF36, BIBREF41 and users' depressive behavior BIBREF46, BIBREF47, BIBREF48. For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl" (see Figure FIGREF15). We employ a large dataset of 45,000 self-reported depressed users introduced in BIBREF46, where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of a psychologist clinician and employed for collecting self-declared depressed individuals' profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (who do not show any depressive behavior) was verified by two human judges BIBREF46. This dataset INLINEFORM0 contains the metadata values of each user such as profile description, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract a user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41. We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as the gold standard dataset INLINEFORM1 of users who disclose their age. From these 1061 users, 822 belong to the depressed class and 239 to the control class. From the 3981 depressed users, 20.6% disclose their age, in contrast with only 4% (239/4789) among the control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2. The general trend, consistent with the results in BIBREF42, BIBREF49, is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50. Similar data collection procedures with comparable distributions have been used in many prior efforts BIBREF51, BIBREF49, BIBREF42. We discuss our approach to mitigate the impact of this bias in Section 4.1. The median age is 17 for the depressed class versus 19 for the control class, suggesting either that the likely depressed-user population is younger, or that depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily) BIBREF51.
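The three extraction rules above map naturally onto regular expressions; a minimal sketch is given below. The exact patterns and the reference year are our assumptions for illustration, since the paper does not list them.

```python
import re
from typing import Optional

# Illustrative encodings of the three age-extraction rules; the exact patterns
# and the reference year used by the authors are assumptions.
AGE_PATTERNS = [
    re.compile(r"\bi\s*am\s*(\d{1,2})\s*(?:years?|yrs?)\s*old\b", re.I),  # 1. I am X years old
    re.compile(r"\bborn\s*in\s*(\d{4})\b", re.I),                         # 2. Born in X
    re.compile(r"\b(\d{1,2})\s*(?:years?|yrs?)\s*old\b", re.I),           # 3. X years old
]

def extract_age(description: str, reference_year: int = 2017) -> Optional[int]:
    for pattern in AGE_PATTERNS:
        match = pattern.search(description)
        if match:
            value = int(match.group(1))
            # Rule 2 captures a birth year, which is converted to an age.
            return reference_year - value if value > 1900 else value
    return None

print(extract_age("17 years old, self-harm, anxiety, depression"))  # -> 17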
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. Of these 1464 users, 64% belonged to the depressed group and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender, which is considerably higher than the 12% for the control class. Once again, the rate of gender disclosure varies between the two classes. For statistical significance, we performed a chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19-A,D) show a positive association between the corresponding row and column variables, while red circles (negative residuals, see Figure FIGREF19-B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10: according to BIBREF52, more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value: 1.04e-08) between being female and showing depressive behavior on Twitter.
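The reported statistic comes from a standard chi-square test of independence on the gender-by-class contingency table; a minimal sketch is shown below. The cell counts are made up for demonstration, since the paper reports only the test statistic and p-value.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table (gender x class); these counts are
# placeholders, not the paper's actual cell counts.
table = np.array([[700, 250],    # female: depressed, control
                  [237, 277]])   # male:   depressed, control

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Chi-square: {chi2:.2f}, p-value: {p_value:.3g}, dof: {dof}")
```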
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provides valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53. Additionally, as opposed to a typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing a user's online behavior are the emotions reflected in facial expressions BIBREF54, attributes contributing to computational aesthetics BIBREF55, and the sentimental quotes they may subscribe to (Figure FIGREF15) BIBREF8.
Facial Presence:
For capturing facial presence, we rely on BIBREF56's approach, which uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images. Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0, with the control class showing significantly higher facial presence in both profile and media images (8% and 9% higher, respectively) compared to the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0, we process profile/shared images for both the depressed and the control groups, considering users with at least one face in their shared images (Table TABREF23). For photos that contain multiple faces, we measure the average emotion.
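For images with several detected faces, the per-face emotion scores are simply averaged. A small sketch is shown below; it assumes the API response has already been parsed into one score dictionary per face, and does not reproduce the actual Face++ response format.

```python
from collections import defaultdict

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def average_emotions(faces):
    """faces: list of {emotion: score} dicts, one per detected face (assumed format)."""
    totals = defaultdict(float)
    for face in faces:
        for emotion in EMOTIONS:
            totals[emotion] += face.get(emotion, 0.0)
    n = max(len(faces), 1)
    return {emotion: totals[emotion] / n for emotion in EMOTIONS}

print(average_emotions([{"joy": 0.8, "sadness": 0.1},
                        {"joy": 0.2, "sadness": 0.6}]))
```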
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlate with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55, BIBREF8, BIBREF57. Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58, BIBREF59. We measured the normalized red, green, blue and the mean of the original colors, and brightness and contrast relative to variations of luminance. We represent images in the Hue-Saturation-Value color space, which is intuitive for humans, and measure the mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not directly interpretable, high saturation indicates vividness and chromatic purity, which are more appealing to the human eye BIBREF8. Colorfulness is measured as a difference against a gray background BIBREF60. Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60. In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract embedded text and compute its sentiment score. As illustrated in Table TABREF26, vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ an independent t-test, while adopting Bonferroni correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose a Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2, ** INLINEFORM3).
Table note: ** alpha = 0.05, *** alpha = 0.05/223.
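Most of these aesthetic features have widely used formulations; a hedged sketch using OpenCV is given below. The exact definitions the authors used (e.g., for sharpness or naturalness) are not spelled out in the text, so the code keeps to common variants and is illustrative only.

```python
import cv2
import numpy as np

def aesthetic_features(path):
    """Common formulations of low-level aesthetic features; the paper's exact
    definitions may differ slightly (this is a sketch)."""
    bgr = cv2.imread(path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]   # hue in OpenCV units (0-179)

    # Colorfulness as a distance from gray (Hasler & Suesstrunk formulation).
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                    + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

    # Sharpness/blurriness proxy: variance of the Laplacian.
    sharpness = cv2.Laplacian(gray, cv2.CV_32F).var()

    return {
        "mean_rgb": [float(c.mean()) / 255 for c in (r, g, b)],
        "brightness": float(v.mean()) / 255,      # mean of the V channel
        "contrast": float(gray.std()) / 255,      # variation of luminance
        "hue_mean": float(h.mean()), "hue_var": float(h.var()),
        "saturation_mean": float(s.mean()) / 255,
        "saturation_var": float((s / 255).var()),
        "colorfulness": float(colorfulness),
        "sharpness": float(sharpness),
    }
```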
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61, depressive behavior, demographic differences BIBREF43, BIBREF40, etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 and deictic language BIBREF63, while males tend to use more articles BIBREF64, which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65. For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62, reflecting a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62. Depressed users employ first person pronouns more frequently BIBREF67 and repeatedly use negative emotion and anger words. We analyzed psycholinguistic cues and language style to study their association with depressive behavior as well as demographics. In particular, we adopt Levinson's adult development grouping, which partitions users in INLINEFORM0 into 5 age groups: (14,19], (19,23], (23,34], (34,46], and (46,60]. Then, we apply LIWC to characterize linguistic styles for each age group for users in INLINEFORM1.
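The Levinson-style binning is straightforward to reproduce; a small sketch is given below, with made-up LIWC scores used purely for illustration.

```python
import pandas as pd

# Hypothetical per-user frame: age plus an already-computed LIWC score.
users = pd.DataFrame({"age":      [16, 21, 30, 40, 55],
                      "analytic": [28.1, 35.6, 44.2, 47.9, 51.3]})

# Levinson-style adult development bins used in the analysis above.
users["age_group"] = pd.cut(users["age"], bins=[14, 19, 23, 34, 46, 60])

print(users.groupby("age_group", observed=True)["analytic"].mean())
```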
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptor categories (e.g., the percentage of target words gleaned by the dictionary, or words longer than six letters - Sixltr), informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., first-person singular pronouns).
Thinking Style:
People's natural ways of trying to analyze and organize complex events have a strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning, whereas a lower value indicates a focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as "think," "realize," and "know" indicate the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 and is impacted by different stages of cognitive development at different ages. It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62. We observe a similar pattern in our data (Table TABREF40). A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69. Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically between depressed and control users in different age groups (see Figure FIGREF39-A, F and Table TABREF40). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.
Authenticity:
Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs and first-person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70. Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. Authenticity shows a decreasing trend with aging (see Figure FIGREF39-B). Authenticity for depressed youngsters is strikingly higher than for their control peers, and it decreases with age (Figure FIGREF39-B).
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be the best for self-esteem BIBREF71, as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39-C and Table TABREF40). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishable characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement, and their high usage is associated with negative affective states implying nervousness and depression BIBREF66. Consistent with prior studies, the frequency of first person singular pronouns for depressed people is significantly higher compared to that of the control class. Similarly to BIBREF66, youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39-G).
Informal Language Markers; Swear, Netspeak:
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72. We observed the same pattern in both the depressed and the control classes (Table TABREF40), although its rate is higher for depressed users BIBREF1. Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39-E). Also, the Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon contains terms like "ache", "heart", and "cough". Both start with a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as we age (Figure FIGREF39-H,I and Table TABREF40).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means for each age group are similar for each of the LIWC features.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
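As a concrete illustration, the per-feature test amounts to a standard one-way ANOVA across the five age groups; a minimal sketch with placeholder scores (not the study's data) is:

```python
from scipy.stats import f_oneway

# Illustrative one-way ANOVA across the five age groups for a single LIWC
# feature; the lists below are placeholder scores, not the study's data.
groups = [
    [31.2, 28.4, 35.0],   # (14,19]
    [33.1, 36.7, 30.9],   # (19,23]
    [41.5, 44.2, 39.8],   # (23,34]
    [45.0, 47.3, 43.1],   # (34,46]
    [48.9, 50.2, 46.4],   # (46,60]
]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```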
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blogs, and Facebook, showing promising results BIBREF73. Utilizing these two weighted lexica of terms, we predict the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1) using the following equation: INLINEFORM2
where INLINEFORM0 is the lexicon weight of the term, INLINEFORM1 represents the frequency of the term in the user-generated content INLINEFORM2, and INLINEFORM3 measures the total word count in INLINEFORM4. As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all 416 users above age 23, and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]). The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
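A minimal sketch of this weighted-lexicon prediction, following the description above (sum of term weights scaled by relative term frequency), is shown below. The intercept term is an assumption borrowed from the released age/gender lexica, which typically ship with one, and the toy weights are invented for illustration.

```python
from collections import Counter

def lexicon_predict(tokens, lexicon, intercept=0.0):
    """Weighted-lexicon prediction: sum of term weights scaled by
    relative term frequency. The intercept is an assumption."""
    counts = Counter(tokens)
    total = sum(counts.values())
    score = intercept
    for term, freq in counts.items():
        if term in lexicon:
            score += lexicon[term] * freq / total
    return score

# Toy weights; the real weights come from the 75,394-user Facebook dataset.
age_lexicon = {"homework": -2.1, "mortgage": 3.4, "lol": -1.2}
print(lexicon_predict("lol i hate homework lol".split(), age_lexicon))
```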
Prediction with Visual Imagery:
Inspired by BIBREF56's approach for facial landmark localization, we use their pretrained CNN, consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for the gender and age prediction tasks on INLINEFORM0 and INLINEFORM1, respectively, as shown in Table TABREF42 and Table TABREF44.
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences in language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39-A,B,C), making the prediction harder for older people BIBREF74. In this case, the other data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) compared to the content-based predictor (0.82). For age prediction (see Table TABREF42), the textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average, profile: 0.51, media: 0.53).
However, not every user provides facial identity on their account (see Table TABREF21). We studied facial presentation for each age group to examine any association between age group, facial presentation and depressive behavior (see Table TABREF43). We can see that youngsters in both the depressed and control classes are not likely to present their face in their profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits an early fusion technique BIBREF32 in feature space and models each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of individual modality features. As opposed to the computationally expensive late fusion scheme, where each modality requires separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75. To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant ensemble learning method. It adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether an actual feature has a higher Z-score than its shadow feature (see Algorithm SECREF6 and Figure FIGREF45) BIBREF76.
Algorithm: Ensemble Feature Selection
Main: for each feature INLINEFORM0, create a shuffled shadow copy INLINEFORM1
  RndForrest(INLINEFORM0); calculate importance Imp INLINEFORM1 INLINEFORM2; generate the next hypothesis INLINEFORM3
  once all hypotheses are generated, perform a statistical test INLINEFORM4 // binomial distribution INLINEFORM5
  if a feature's importance exceeds that of its shadow, the feature is marked important; otherwise it is not
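A hedged sketch of this shadow-feature (Boruta-style) selection over the early-fused feature matrix is given below. The number of rounds and the majority-vote threshold are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_selection(X, y, n_rounds=20, random_state=0):
    """Boruta-style selection sketch. X is assumed to be the early-fused
    (concatenated) feature matrix, one row per user; thresholds are illustrative."""
    rng = np.random.default_rng(random_state)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for _ in range(n_rounds):
        shadows = rng.permuted(X, axis=0)            # each column shuffled independently
        extended = np.hstack([X, shadows])
        forest = RandomForestClassifier(n_estimators=200, random_state=random_state)
        forest.fit(extended, y)
        importances = forest.feature_importances_
        best_shadow = importances[n_features:].max()
        hits += importances[:n_features] > best_shadow
    # Keep features that beat the best shadow in a clear majority of rounds.
    return hits > n_rounds / 2
```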
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: interpretability with respect to the contributions of each feature, and high predictive power. For prediction we have INLINEFORM0, where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.
In particular, we optimize the loss function INLINEFORM0, where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting a weak learner to the negative gradient of the loss function, in particular by approximating the loss function with a Taylor expansion: INLINEFORM5, where the first expression is constant, and the second and the third expressions are the first (INLINEFORM6) and second order derivatives (INLINEFORM7) of the loss. INLINEFORM8
For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, let INLINEFORM1 be the subset of users from INLINEFORM2 belonging to node INLINEFORM3, and let INLINEFORM4 denote the prediction for node INLINEFORM5. Then, for each user INLINEFORM6 belonging to INLINEFORM7, INLINEFORM8 and INLINEFORM9 INLINEFORM10
Next, for each leaf node INLINEFORM0, differentiating w.r.t. INLINEFORM1: INLINEFORM2
and by substituting weights: INLINEFORM0
which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77. Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user is considered high at 48.43 (Median: 36.95, Mean: 40.18), and this decreases the chance of this person being classified into the depressed group by a log-odds of -1.41. Depressed users have a significantly lower "Analytic thinking" score compared to the control class. Moreover, the "Clout" score of 40.46 is a low value (Median: 62.22, Mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' are 112.03 and 113 respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by a log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2. For network features, for instance, 'two_hop_neighborhood' for depressed users (Mean: 84) is smaller than that of control users (Mean: 154), and this is reflected in pulling down the log-odds by -0.27.
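For readers who want the equations written out, a hedged reconstruction of the standard gradient-boosted objective that the derivation above appears to follow is given below; the symbols are ours, not necessarily the authors', since the inline math did not survive extraction.

```latex
\begin{aligned}
\hat{y}_u &= \sum_{k=1}^{K} f_k(x_u), \qquad f_k \in \mathcal{F}
  && \text{(ensemble of weak learners)} \\
\mathcal{L} &= \sum_{u} \ell\!\left(y_u, \hat{y}_u\right) + \sum_{k} \Omega(f_k),
  \qquad \Omega(f) = \gamma T + \alpha \lVert w \rVert_1 + \tfrac{1}{2}\lambda \lVert w \rVert_2^2
  && \text{(loss with $L_1$/$L_2$ regularization)} \\
\mathcal{L}^{(t)} &\simeq \sum_{u}\left[ g_u f_t(x_u) + \tfrac{1}{2} h_u f_t^2(x_u) \right] + \Omega(f_t)
  && \text{(second-order Taylor expansion)} \\
w_j^{*} &= -\frac{\sum_{u \in I_j} g_u}{\sum_{u \in I_j} h_u + \lambda}, \qquad
\mathcal{L}^{(t)} = -\tfrac{1}{2}\sum_{j=1}^{T}
  \frac{\left(\sum_{u \in I_j} g_u\right)^2}{\sum_{u \in I_j} h_u + \lambda} + \gamma T
  && \text{(optimal leaf weights and resulting loss)}
\end{aligned}
```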
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image features, facial presence, and facial expressions).
With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
Introduction
Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.
Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.
According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, "a picture is worth a thousand words" and now "photos are worth a million likes." Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .
Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .
Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups . Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life loses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data, can shed better light on the population-level epidemiology of depression.
The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?
Related Work
Mental Health Analysis using Social Media:
Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .
Demographic information inference on Social Media:
There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 .
Dataset
Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., "16 years old suicidal girl"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.
Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as "17 years old, self-harm, anxiety, depression") BIBREF41 . We compile "age prefixes" and "age suffixes", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a "date" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51
Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter.
Data Modality Analysis
We now provide an in-depth analysis of visual and textual content of vulnerable users.
Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .
Facial Presence:
For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.
Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).
** alpha= 0.05, *** alpha = 0.05/223
Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
Thinking Style:
Measuring people's natural ways of trying to analyze, and organize complex events have strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as "think," "realize," and "know" indicates the degree of "certainty" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages . It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . We observe a similar pattern in our data (Table TABREF40 .) A recent study highlights how depression affects brain and thinking at molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed a notable differences in the ability to think analytically in depressed and control users in different age groups (see Figure FIGREF39 - A, F and Table TABREF40 ). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.
Authenticity:
Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs, 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successfull lie as a mental growth. There is a decreasing trend of the Authenticity with aging (see Figure FIGREF39 -B.) Authenticity for depressed youngsters is strikingly higher than their control peers. It decreases with age (Figure FIGREF39 -B.)
Clout:
People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows that age 60 to be best for self-esteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishable characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).
Self-references:
First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, frequency of first person singular for depressed people is significantly higher compared to that of control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).
Informal Language Markers; Swear, Netspeak:
Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). Additionally, the Netspeak lexicon measures the frequency of terms such as lol and thx.
Sexual, Body:
The sexual lexicon contains terms like "horny", "love" and "incest", and the body lexicon contains terms like "ache", "heart", and "cough". Both start at a higher rate for depressed users and decrease gradually with age, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 ).
Quantitative Language Analysis:
We employ one-way ANOVA to compare the impact of various factors and validate our findings above. Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means for each age group are similar for each of the LIWC features.
*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05
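For concreteness, the per-feature one-way ANOVA described above can be run as in the sketch below; the group sizes roughly mirror our age buckets, but the feature values themselves are random placeholders rather than our data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# placeholder values of one LIWC feature (e.g., analytical thinking) per age group
group_11_19 = rng.normal(35, 10, size=200)
group_19_23 = rng.normal(38, 10, size=400)
group_23_up = rng.normal(42, 10, size=458)

f_stat, p_value = f_oneway(group_11_19, group_19_23, group_23_up)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # reject the equal-means H0 if p < alpha
```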
Demographic Prediction
We leverage both the visual and textual content for predicting age and gender.
Prediction with Textual Content:
We employ BIBREF73 's weighted lexica of terms, built from a dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of these lexica was evaluated on Twitter, blog, and Facebook data, showing promising results BIBREF73 . Utilizing these two weighted term lexica, we predict the demographic attribute (age or gender) of a user's content $u$, denoted $d(u)$, as:
$$d(u) = \sum _{t \in u} w_t \cdot \frac{\mathrm {freq}(t, u)}{|u|}$$
where $w_t$ is the lexicon weight of term $t$, $\mathrm {freq}(t, u)$ represents the frequency of the term in the user-generated content $u$, and $|u|$ measures the total word count of $u$. As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all 416 users above age 23 and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]). The average accuracy of this model is 0.63 for depressed users and 0.64 for the control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on the INLINEFORM5 ground-truth dataset.
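A minimal sketch of the lexicon-based prediction described above is shown below; the term weights are toy placeholders for the released lexica of BIBREF73 , and any additional terms those lexica may include are ignored here.

```python
from collections import Counter

# toy weighted lexicon standing in for the released age lexicon of BIBREF73
AGE_LEXICON = {"homework": -2.1, "prom": -3.5, "mortgage": 4.2, "retirement": 5.0}

def lexicon_score(text, lexicon):
    tokens = text.lower().split()
    freq = Counter(tokens)
    total = max(len(tokens), 1)
    # weighted sum of relative term frequencies, as in the equation above
    return sum(w * freq[t] / total for t, w in lexicon.items())

print(lexicon_score("finished my homework before prom night", AGE_LEXICON))
```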
Prediction with Visual Imagery:
Inspired by BIBREF56 's approach to facial landmark localization, we use their pretrained CNN, consisting of convolutional, unshared, and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance of the gender and age prediction tasks on INLINEFORM0 and INLINEFORM1 respectively, as shown in Table TABREF42 and Table TABREF44 .
Demographic Prediction Analysis:
We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial, as the differences in language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C), making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play an integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control classes (0.92 and 0.90) compared to the content-based predictor (0.82). For age prediction (see Table TABREF42 ), the textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average, profile: 0.51, media: 0.53).
However, not every user reveals a facial identity on their account (see Table TABREF21 ). We studied facial presentation for each age group to examine any association between age group, facial presentation and depressive behavior (see Table TABREF43 ). We can see that youngsters in both the depressed and control classes are unlikely to present their face in the profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although the content-based gender predictor was not as accurate as the image-based one, it is adequate for population-level analysis.
Multi-modal Prediction Framework
We use the above findings for predicting depressive behavior. Our model exploits an early fusion BIBREF32 technique in feature space and models each user INLINEFORM0 in INLINEFORM1 as the vector concatenation of the individual modality features. As opposed to a computationally expensive late fusion scheme, where each modality requires separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and an all-relevant ensemble learning method. It adds randomness to the data by creating shuffled copies of all features (shadow features), and then trains a Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (see Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .
Algorithm SECREF6 (Ensemble Feature Selection):
for each feature:
  train a Random Forest on the data extended with shadow features
  calculate the importance Imp of the feature
  generate the next hypothesis: the feature's importance exceeds that of its shadow feature
once all hypotheses are generated:
  perform a statistical test (binomial distribution)
  if the test is significant, the feature is important; otherwise, it is unimportant
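A simplified sketch of this shadow-feature selection procedure (in the spirit of BIBREF76 , not the exact implementation) is given below; it compares raw Random Forest importances against the best shadow importance rather than Z-scores, and the hit threshold is an assumption.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_selection(X, y, n_iter=20, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for _ in range(n_iter):
        shadow = np.apply_along_axis(rng.permutation, 0, X)   # shuffled copy of every feature
        extended = np.hstack([X, shadow])
        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(extended, y)
        imp = forest.feature_importances_
        hits += (imp[:n_features] > imp[n_features:].max()).astype(int)
    # binomial test: did the real feature beat its shadows more often than chance?
    return [j for j in range(n_features)
            if binomtest(int(hits[j]), n_iter, 0.5, alternative="greater").pvalue < alpha]
```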
Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners, with two main advantages: its interpretability with respect to the contributions of each feature and its high predictive power. For prediction we have $\hat{y}_u = \sum _{k} f_k(x_u)$ , where $f_k$ is a weak learner and $\hat{y}_u$ denotes the final prediction for user $u$ with feature vector $x_u$ .
In particular, we optimize the loss function $L = \sum _{u} l(y_u, \hat{y}_u) + \sum _{k} \Omega (f_k)$ , where $\Omega $ incorporates $L_1$ and $L_2$ regularization. In each iteration, the new weak learner $f_t$ is obtained by fitting it to the negative gradient of the loss function. In particular, by approximating the loss with a second-order Taylor expansion, $L^{(t)} \approx \sum _{u} [\, l(y_u, \hat{y}_u^{(t-1)}) + g_u f_t(x_u) + \frac{1}{2} h_u f_t^2(x_u) \,] + \Omega (f_t)$ , where the first expression is constant and the second and third expressions involve the first ( $g_u$ ) and second order ( $h_u$ ) derivatives of the loss.
For exploring the weak learners, assume the tree $f_t$ has $k$ leaf nodes, let $I_j$ be the subset of users that belongs to leaf node $j$ , and let $w_j$ denote the prediction for node $j$ . Then, for each user $u$ belonging to $I_j$ , $f_t(x_u) = w_j$ , and we write $G_j = \sum _{u \in I_j} g_u$ and $H_j = \sum _{u \in I_j} h_u$ .
Next, for each leaf node $j$ , taking the derivative of the loss with respect to $w_j$ and setting it to zero gives $w_j^* = -\frac{G_j}{H_j + \lambda }$ , where $\lambda $ is the $L_2$ regularization weight,
and by substituting these weights back we obtain $L^{(t)} \approx -\frac{1}{2} \sum _{j=1}^{k} \frac{G_j^2}{H_j + \lambda }$ ,
which represents the loss for a fixed weak learner with $k$ nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor. Although the weak learners have high bias, the ensemble model produces a strong learner that effectively integrates the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates that our multimodal framework outperforms the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in a 10-fold cross-validation setting on the INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of the prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the "Analytic thinking" score of this user is considered high at 48.43 (median: 36.95, mean: 40.18), and this decreases the chance of this person being classified into the depressed group by log-odds of -1.41. Depressed users have a significantly lower "Analytic thinking" score compared to the control class. Moreover, the "Clout" score of 40.46 is a low value (median: 62.22, mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' are 112.03 and 113 respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class, which justifies pulling down the log-odds by INLINEFORM2 . For network features, for instance, 'two_hop_neighborhood' for depressed users (mean: 84) is smaller than that of control users (mean: 154), and this is reflected in pulling down the log-odds by -0.27.
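As a numerical sanity check on the leaf-weight derivation above, the short numpy sketch below computes the logistic-loss gradients and curvatures for the users in one leaf and the resulting optimal leaf weight; the labels, raw scores, and the regularization weight lambda are arbitrary placeholders.

```python
import numpy as np

y = np.array([1, 0, 1, 1], dtype=float)        # labels of users falling into one leaf node
y_hat = np.array([0.2, -0.1, 0.4, 0.0])        # current ensemble predictions (log-odds)
lam = 1.0                                      # L2 regularization weight (placeholder)

p = 1.0 / (1.0 + np.exp(-y_hat))               # sigmoid of the raw scores
g = p - y                                      # first-order derivatives of the logistic loss
h = p * (1.0 - p)                              # second-order derivatives

G, H = g.sum(), h.sum()
w_star = -G / (H + lam)                        # optimal leaf weight from the derivation above
gain = 0.5 * G ** 2 / (H + lam)                # loss reduction contributed by this leaf
print(w_star, gain)
```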
Baselines:
To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content-, content-network-, and image-based models (based on the aforementioned general image features, facial presence, and facial expressions). | Profile pictures from the Twitter users' profiles.
e21a8581cc858483a31c6133e53dd0cfda76ae4c | e21a8581cc858483a31c6133e53dd0cfda76ae4c_0 | Q: Is there an online demo of their system?
Text: Introduction
Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word. This task can benefit the compilation of dictionaries, especially dictionaries for Chinese as a foreign language (CFL) learners.
In recent years, the number of CFL learners has risen sharply. In 2017, 770,000 people took the Chinese Proficiency Test, an increase of 38% from 2016. However, most Chinese dictionaries are for native speakers. Since these dictionaries usually require a fairly high level of Chinese, it is necessary to build a dictionary specifically for CFL learners. Manually writing definitions relies on the knowledge of lexicographers and linguists, which is expensive and time-consuming BIBREF0 , BIBREF1 , BIBREF2 . Therefore, the study on writing definitions automatically is of practical significance.
Definition modeling was first proposed by BIBREF3 as a tool to evaluate different word embeddings. BIBREF4 extended the work by incorporating word sense disambiguation to generate context-aware word definition. Both methods are based on recurrent neural network encoder-decoder framework without attention. In contrast, this paper formulates the definition modeling task as an automatic way to accelerate dictionary compilation.
In this work, we introduce a new dataset for the Chinese definition modeling task that we call the Chinese Definition Modeling Corpus (CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling.
We propose two novel models to incorporate sememes into Chinese definition modeling: the Adaptive-Attention Model (AAM) and the Self- and Adaptive-Attention Model (SAAM). Both models are based on the encoder-decoder framework. The encoder maps word and sememes into a sequence of continuous representations, and the decoder then attends to the output of the encoder and generates the definition one word at a time. Different from the vanilla attention mechanism, the decoder of both models employs the adaptive attention mechanism to decide which sememes to focus on and when to pay attention to sememes at one time BIBREF9 . Following BIBREF3 , BIBREF4 , the AAM is built using recurrent neural networks (RNNs). However, recent works demonstrate that attention-based architecture that entirely eliminates recurrent connections can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . In the SAAM, we replace the LSTM-based encoder and decoder with an architecture based on self-attention. This fully attention-based model allows for more parallelization, reduces the path length between word, sememes and the definition, and can reach a new state-of-the-art on the definition modeling task. To the best of our knowledge, this is the first work to introduce the attention mechanism and utilize external resource for the definition modeling task.
In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method by +6.0 BLEU.
Methodology
The definition modeling task is to generate an explanatory sentence for the interpreted word. For example, given the word “旅馆” (hotel), a model should generate a sentence like this: “给旅行者提供食宿和其他服务的地方” (A place to provide residence and other services for tourists). Since distributed representations of words have been shown to capture lexical syntax and semantics, it is intuitive to employ word embeddings to generate natural language definitions.
Previously, BIBREF3 proposed several model architectures to generate a definition according to the distributed representation of a word. We briefly summarize their model with the best performance in Section "Experiments" and adopt it as our baseline model.
Inspired by the works that use sememes to improve word representation learning BIBREF7 and word similarity computation BIBREF8 , we propose the idea of incorporating sememes into definition modeling. Sememes can provide additional semantic information for the task. As shown in Figure 1 , sememes are highly correlated to the definition. For example, the sememe “场所” (place) is related with the word “地方” (place) of the definition, and the sememe “旅游” (tour) is correlated to the word “旅行者” (tourists) of the definition.
Therefore, to make full use of the sememes in CDM dataset, we propose AAM and SAAM for the task, in Section "Adaptive-Attention Model" and Section "Self- and Adaptive-Attention Model" , respectively.
Baseline Model
The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .
More concretely, given a word $x$ to be defined, the encoder reads the word and generates its word embedding $\mathbf {x}$ as the encoded information. Afterward, the decoder computes the conditional probability of each definition word $y_t$ depending on the previous definition words $y_{<t}$ , as well as the word being defined $x$ , i.e., $P(y_t|y_{<t},x)$ . $P(y_t|y_{<t},x)$ is given as:
$$& P(y_t|y_{<t},x) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {x})} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {x}) &$$ (Eq. 4)
where $\mathbf {z}_t$ is the decoder's hidden state at time $t$ , $f$ is a recurrent nonlinear function such as an LSTM or GRU, and $\mathbf {x}$ is the embedding of the word being defined. The probability $P(y | x)$ can then be computed according to the probability chain rule:
$$P(y | x) = \prod _{t=1}^{T} P(y_t|y_{<t},x)$$ (Eq. 5)
We denote all the parameters in the model as $\theta $ and the definition corpus as $D_{x,y}$ , which is a set of word-definition pairs. Then the model parameters can be learned by maximizing the log-likelihood:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x, y \rangle \in D_{x,y}}\log P(y | x; \theta ) $$ (Eq. 6)
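As a rough PyTorch sketch of this baseline (not the authors' code), the snippet below feeds the defined word's embedding to a two-layer LSTM decoder at every step and trains it with the teacher-forced negative log-likelihood of Eq. 5-6; the vocabulary size and the toy batch are placeholders.

```python
import torch
import torch.nn as nn

class BaselineDefinitionModel(nn.Module):
    """P(y_t | y_<t, x): an LSTM decoder conditioned on the defined word's embedding."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim * 2, hid_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, word_vec, definition):
        prev = self.embed(definition[:, :-1])                        # y_{<t} (teacher forcing)
        x_rep = word_vec.unsqueeze(1).expand(-1, prev.size(1), -1)   # x fed at every step
        states, _ = self.rnn(torch.cat([prev, x_rep], dim=-1))
        return self.out(states)                                      # logits for y_1 .. y_{T-1}

model = BaselineDefinitionModel(vocab_size=10000)
word_vec = torch.randn(4, 300)                     # embeddings of 4 words to be defined
definition = torch.randint(0, 10000, (4, 12))      # toy gold definitions
logits = model(word_vec, definition)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), definition[:, 1:].reshape(-1))
```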
Adaptive-Attention Model
Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the probability of generating the definition $y=[y_1, \dots , y_T ]$ as:
$$P(y | x, s) = \prod _{t=1}^{T} P(y_t|y_{<t},x,s) $$ (Eq. 8)
Similar to Eq. 6 , we can maximize the log-likelihood with the definition corpus $D_{x,s,y}$ to learn model parameters:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x,s,y \rangle \in D_{x,s,y}}\log P(y | x, s; \theta ) $$ (Eq. 9)
The probability $P(y | x, s)$ can be implemented with an adaptive attention based encoder-decoder framework, which we call the Adaptive-Attention Model (AAM). The new architecture consists of a bidirectional RNN as the encoder and an RNN decoder that adaptively attends to the sememes while decoding a definition.
Similar to BIBREF13 , the encoder is a bidirectional RNN, consisting of forward and backward RNNs. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the input sequence of vectors for the encoder as $\mathbf {v}=[\mathbf {v}_1,\dots ,\mathbf {v}_{N}]$ . The vector $\mathbf {v}_n$ is given as follows:
$$\mathbf {v}_n = [\mathbf {x}; \mathbf {s}_n ]$$ (Eq. 11)
where $\mathbf {x}$ is the vector representation of the word $x$ , $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ , and $[\mathbf {a};\mathbf {b}]$ denote concatenation of vector $\mathbf {a}$ and $\mathbf {b}$ .
The forward RNN $\overrightarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_1$ to $\mathbf {v}_N$ and calculates a forward hidden state for position $n$ as:
$$\overrightarrow{\mathbf {h}_{n}} &=& f(\mathbf {v}_n, \overrightarrow{\mathbf {h}_{n-1}})$$ (Eq. 12)
where $f$ is an LSTM or GRU. Similarly, the backward RNN $\overleftarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_N$ to $\mathbf {v}_1$ and obtains a backward hidden state for position $n$ as:
$$\overleftarrow{\mathbf {h}_{n}} &=& f(\mathbf {v}_n, \overleftarrow{\mathbf {h}_{n+1}})$$ (Eq. 13)
In this way, we obtain a sequence of encoder hidden states $\mathbf {h}=\left[\mathbf {h}_1,...,\mathbf {h}_N\right]$ , by concatenating the forward hidden state $\overrightarrow{\mathbf {h}_{n}}$ and the backward one $\overleftarrow{\mathbf {h}_{n}}$ at each position $n$ :
$$\mathbf {h}_n=\left[\overrightarrow{\mathbf {h}_{n}}, \overleftarrow{\mathbf {h}_{n}}\right]$$ (Eq. 14)
The hidden state $\mathbf {h}_n$ captures the sememe- and word-aware information of the $n$ -th sememe.
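A minimal PyTorch sketch of this encoder is shown below; the random vectors stand in for the pretrained word and sememe embeddings, and the hidden size is a placeholder.

```python
import torch
import torch.nn as nn

emb_dim, hid_dim, n_sememes = 300, 300, 4
word_vec = torch.randn(1, emb_dim)                  # embedding x of the word to be defined
sememe_vecs = torch.randn(1, n_sememes, emb_dim)    # embeddings s_1 .. s_N of its sememes

# v_n = [x ; s_n] (Eq. 11): the word embedding is concatenated to every sememe embedding
v = torch.cat([word_vec.unsqueeze(1).expand(-1, n_sememes, -1), sememe_vecs], dim=-1)

encoder = nn.LSTM(2 * emb_dim, hid_dim, batch_first=True, bidirectional=True)
h, _ = encoder(v)   # h_n = [forward ; backward] states (Eq. 14), shape (1, n_sememes, 2*hid_dim)
```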
As attention-based neural encoder-decoder frameworks have shown great success in image captioning BIBREF14 , document summarization BIBREF15 and neural machine translation BIBREF13 , it is natural to adopt the attention-based recurrent decoder in BIBREF13 as our decoder. The vanilla attention attends to the sememes at every time step. However, not all words in the definition have corresponding sememes. For example, sememe “住下” (reside) could be useful when generating “食宿” (residence), but none of the sememes is useful when generating “提供” (provide). Besides, language correlations make the sememes unnecessary when generating words like “和” (and) and “给” (for).
Inspired by BIBREF9 , we introduce the adaptive attention mechanism for the decoder. At each time step $t$ , we summarize the time-varying sememes' information as sememe context, and the language model's information as LM context. Then, we use another attention to obtain the context vector, relying on either the sememe context or LM context.
More concretely, we define each conditional probability in Eq. 8 as:
$$& P(y_t|y_{<t},x,s) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {c}_t)} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {c}_t) & $$ (Eq. 17)
where $\mathbf {c}_t$ is the context vector from the output of the adaptive attention module at time $t$ , $\mathbf {z}_t$ is a decoder's hidden state at time $t$ .
To obtain the context vector $\mathbf {c}_t$ , we first compute the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ . Similar to the vanilla attention, the sememe context $\hat{\mathbf {c}_t}$ is obtained with a soft attention mechanism as:
$$\hat{\mathbf {c}_t} = \sum _{n=1}^{N} \alpha _{tn} \mathbf {h}_n,$$ (Eq. 18)
where
$$\alpha _{tn} &=& \frac{\mathrm {exp}(e_{tn})}{\sum _{i=1}^{N} \mathrm {exp}(e_{ti})} \nonumber \\ e_{tn} &=& \mathbf {w}_{\hat{c}}^T[\mathbf {h}_n; \mathbf {z}_{t-1}].$$ (Eq. 19)
Since the decoder's hidden states store syntactic and semantic information for language modeling, we compute the LM context $\mathbf {o}_t$ with a gated unit, whose input is the previous definition word $y_{t-1}$ and the previous hidden state $\mathbf {z}_{t-1}$ :
$$\mathbf {g}_t &=& \sigma (\mathbf {W}_g [y_{t-1}; \mathbf {z}_{t-1}] + \mathbf {b}_g) \nonumber \\ \mathbf {o}_t &=& \mathbf {g}_t \odot \mathrm {tanh} (\mathbf {z}_{t-1}) $$ (Eq. 20)
Once the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ are ready, we can generate the context vector with an adaptive attention layer as:
$$\mathbf {c}_t = \beta _t \mathbf {o}_t + (1-\beta _t)\hat{\mathbf {c}_t}, $$ (Eq. 21)
where
$$\beta _{t} &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to} &=& (\mathbf {w}_c)^T[\mathbf {o}_t;\mathbf {z}_t] \nonumber \\ e_{t\hat{c}} &=& (\mathbf {w}_c)^T[\hat{\mathbf {c}_t};\mathbf {z}_t] $$ (Eq. 22)
$\beta _{t}$ is a scalar in range $[0,1]$ , which controls the relative importance of LM context and sememe context.
Once we obtain the context vector $\mathbf {c}_t$ , we can update the decoder's hidden state and generate the next word according to Eq. 17 .
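The sketch below walks through one decoding step of this adaptive attention in PyTorch; it assumes, for simplicity, that the encoder states and the decoder state share the same dimensionality (in practice a projection may be needed) and scores the adaptive gate with the previous state $\mathbf {z}_{t-1}$ .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, N = 300, 4
h = torch.randn(1, N, d)        # sememe-aware encoder states h_1 .. h_N
z_prev = torch.randn(1, d)      # previous decoder state z_{t-1}
y_prev = torch.randn(1, d)      # embedding of the previous definition word

w_c_hat = nn.Linear(2 * d, 1)   # attention scores e_tn over [h_n ; z_{t-1}]  (Eq. 19)
W_g = nn.Linear(2 * d, d)       # gate of the LM context                      (Eq. 20)
w_c = nn.Linear(2 * d, 1)       # adaptive gate scores                        (Eq. 22)

# sememe context via soft attention (Eq. 18-19)
e = w_c_hat(torch.cat([h, z_prev.unsqueeze(1).expand(-1, N, -1)], dim=-1)).squeeze(-1)
alpha = F.softmax(e, dim=-1)
c_hat = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)

# LM context from the language-model state (Eq. 20)
g = torch.sigmoid(W_g(torch.cat([y_prev, z_prev], dim=-1)))
o = g * torch.tanh(z_prev)

# adaptive mixing: beta chooses between relying on the LM or on the sememes (Eq. 21-22)
scores = torch.cat([w_c(torch.cat([o, z_prev], dim=-1)),
                    w_c(torch.cat([c_hat, z_prev], dim=-1))], dim=-1)
beta = F.softmax(scores, dim=-1)[:, :1]
c_t = beta * o + (1 - beta) * c_hat
```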
Self- and Adaptive-Attention Model
Recent works demonstrate that an architecture entirely based on attention can obtain new state-of-the-art results in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . SAAM adopts a similar architecture and replaces the recurrent connections in AAM with self-attention. Such an architecture not only reduces the training time by allowing for more parallelization, but also better learns the dependencies between the word, the sememes and the tokens of the definition by reducing the path length between them.
Given the word to be defined $x$ and its corresponding ordered sememes $s=[s_1, \dots , s_{N}]$ , we combine them as the input sequence of embeddings for the encoder, i.e., $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ . The $n$ -th vector $\mathbf {v}_n$ is defined as:
$$\mathbf {v}_n = {\left\lbrace \begin{array}{ll} \mathbf {x}, &n=0 \cr \mathbf {s}_n, &n>0 \end{array}\right.}$$ (Eq. 25)
where $\mathbf {x}$ is the vector representation of the given word $x$ , and $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ .
Although the input sequence is not time ordered, position $n$ in the sequence carries some useful information. First, position 0 corresponds to the word to be defined, while other positions correspond to the sememes. Secondly, sememes are sorted into a logical order in HowNet. For example, as the first sememe of the word “旅馆” (hotel), the sememe “场所” (place) describes its most important aspect, namely, the definition of “旅馆” (hotel) should be “…… 的地方” (a place for ...). Therefore, we add learned position embedding to the input embeddings for the encoder:
$$\mathbf {v}_n = \mathbf {v}_n + \mathbf {p}_n$$ (Eq. 26)
where $\mathbf {p}_n$ is the position embedding that can be learned during training.
Then the vectors $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ are transformed by a stack of identical layers, where each layer consists of two sublayers: a multi-head self-attention layer and a position-wise fully connected feed-forward layer. Each sublayer is wrapped with a residual connection, followed by layer normalization BIBREF16 . We refer the readers to BIBREF10 for the details of the layers. The output of the encoder stack is a sequence of hidden states, denoted as $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ .
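A rough sketch of this encoder using PyTorch's built-in Transformer layers (rather than the authors' fairseq-py implementation) is given below; the random vectors stand in for the pretrained embeddings, and the maximum number of positions is an assumption.

```python
import torch
import torch.nn as nn

d_model, n_head, n_layer, max_pos = 300, 5, 6, 16
pos_embed = nn.Embedding(max_pos, d_model)               # learned position embeddings (Eq. 26)

word_vec = torch.randn(1, 1, d_model)                    # position 0: the word to be defined
sememe_vecs = torch.randn(1, 4, d_model)                 # positions 1..N: its sememes
v = torch.cat([word_vec, sememe_vecs], dim=1)
v = v + pos_embed(torch.arange(v.size(1))).unsqueeze(0)

layer = nn.TransformerEncoderLayer(d_model, n_head, dim_feedforward=2048, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=n_layer)
h = encoder(v)                                           # hidden states h_0 .. h_N
```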
The decoder is also composed of a stack of identical layers. In BIBREF10 , each layer includes three sublayers: a masked multi-head self-attention layer, a multi-head attention layer that attends over the output of the encoder stack, and a position-wise fully connected feed-forward layer. In our model, we replace the two multi-head attention layers with an adaptive multi-head attention layer. Similarly to the adaptive attention layer in AAM, the adaptive multi-head attention layer can adaptively decide which sememes to focus on and when to attend to sememes at each time step and each layer. Figure 2 shows the architecture of the decoder.
Different from the adaptive attention layer in AAM, which uses single-head attention to obtain the sememe context and a gated unit to obtain the LM context, the adaptive multi-head attention layer utilizes multi-head attention to obtain both contexts. Multi-head attention performs multiple single-head attentions in parallel with linearly projected keys, values and queries, and then combines the outputs of all heads to obtain the final attention result. We omit the details here and refer the readers to BIBREF10 . Formally, given the hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ of the decoder, we obtain the LM context with multi-head self-attention:
$$\mathbf {o}_t^l = \textit {MultiHead}(\mathbf {z}_t^{l-1},\mathbf {z}_{\le t}^{l-1},\mathbf {z}_{\le t}^{l-1})$$ (Eq. 28)
where the decoder's hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ is the query, and $\mathbf {z}_{\le t}^{l-1}=[\mathbf {z}_1^{l-1},...,\mathbf {z}_t^{l-1}]$ , the decoder's hidden states from time 1 to time $t$ at layer $l-1$ , are the keys and values. To obtain better LM context, we employ residual connection and layer normalization after the multi-head self-attention. Similarly, the sememe context can be computed by attending to the encoder's outputs with multi-head attention:
$$\hat{\mathbf {c}_t}^l = \textit {MultiHead}(\mathbf {o}_t^l,\mathbf {h},\mathbf {h})$$ (Eq. 29)
where $\mathbf {o}_t^l$ is the query, and the output from the encoder stack $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ , are the values and keys.
Once obtaining the sememe context vector $\hat{\mathbf {c}_t}^l$ and the LM context $\mathbf {o}_t^l$ , we compute the output from the adaptive attention layer with:
$$\mathbf {c}_t^l = \beta _t^l \mathbf {o}_t^l + (1-\beta _t^l)\hat{\mathbf {c}_t}^l, $$ (Eq. 30)
where
$$\beta _{t}^l &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to}^l &=& (\mathbf {w}_c^l)^T[\mathbf {o}_t^l;\mathbf {z}_t^{l-1}] \nonumber \\ e_{t\hat{c}}^l &=& (\mathbf {w}_c^l)^T[\hat{\mathbf {c}_t}^l;\mathbf {z}_t^{l-1}] $$ (Eq. 31)
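A simplified PyTorch sketch of one adaptive multi-head attention step is shown below; for brevity it only queries the newest decoder position (so the causal mask is implicit), drops the residual connections and layer normalization, and assumes a shared dimensionality between encoder and decoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, n_head = 300, 5
self_attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)   # LM context (Eq. 28)
enc_attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)    # sememe context (Eq. 29)
w_c = nn.Linear(2 * d_model, 1)                                        # gate scores (Eq. 31)

z = torch.randn(1, 7, d_model)          # decoder states z_{<=t} at layer l-1
h = torch.randn(1, 5, d_model)          # encoder outputs for the word and its sememes
z_t = z[:, -1:, :]

o, _ = self_attn(z_t, z, z)             # attend over the decoder's own history
c_hat, _ = enc_attn(o, h, h)            # attend over the word and sememes

scores = torch.cat([w_c(torch.cat([o, z_t], dim=-1)),
                    w_c(torch.cat([c_hat, z_t], dim=-1))], dim=-1)
beta = F.softmax(scores, dim=-1)[..., :1]
c = beta * o + (1 - beta) * c_hat       # adaptive mixing (Eq. 30)
```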
Experiments
In this section, we will first introduce the construction process of the CDM dataset, then the experimental results and analysis.
Dataset
To verify our proposed models, we construct the CDM dataset for the Chinese definition modeling task. Each entry in the dataset is a triple that consists of the interpreted word, the sememes and a definition for a specific word sense, where the sememes are annotated according to HowNet BIBREF5 and the definition is annotated according to the Chinese Concept Dictionary (CCD) BIBREF6 .
Concretely, for a word that is common to HowNet and CCD, we first align its definitions from CCD with its sememe groups from HowNet, where each group represents one word sense. We define the sememes of a definition as the combined sememes associated with any token of the definition. Then, for each definition of a word, we align it with the sememe group that has the largest number of overlapping sememes with the definition's sememes. With each aligned definition and sememe group, we add an entry that consists of the word, the sememes of the aligned sememe group, and the aligned definition. Each word can have multiple entries in the dataset, especially polysemous words. To improve the quality of the created dataset, we filter out entries where the definition contains the interpreted word, or where the interpreted word is a function word, numeral word, or proper noun. A sketch of this alignment procedure is given below.
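The Python sketch below mirrors this alignment step under simplifying assumptions: tokenize, token_sememes (a token-to-sememe lookup), and the inputs are hypothetical stand-ins for the HowNet and CCD resources and our segmenter.

```python
def build_entries(word, ccd_definitions, hownet_sense_sememes, tokenize, token_sememes,
                  function_words=frozenset()):
    """Align each CCD definition of `word` with the HowNet sememe group that overlaps most."""
    entries = []
    if word in function_words:                      # skip function words, numerals, proper nouns
        return entries
    for definition in ccd_definitions:
        def_sememes = set()
        for token in tokenize(definition):          # sememes of a definition: union over tokens
            def_sememes |= set(token_sememes.get(token, ()))
        best_group = max(hownet_sense_sememes,
                         key=lambda group: len(set(group) & def_sememes))
        if word not in definition:                  # filter: definition must not contain the word
            entries.append((word, tuple(best_group), definition))
    return entries
```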
After processing, we obtain the dataset that contains 104,517 entries with 30,052 unique interpreted words. We divide the dataset according to the unique interpreted words into training set, validation set and test set with a ratio of 18:1:1. Table 1 shows the detailed data statistics.
Settings
We show the effectiveness of all models on the CDM dataset. All the embeddings, including word and sememe embeddings, are fixed 300-dimensional word embeddings pretrained on the Chinese Gigaword corpus (LDC2011T13). All definitions are segmented with the Jieba Chinese text segmentation tool, and we use the resulting unique segments as the decoder vocabulary. To evaluate the difference between the generated results and the gold-standard definitions, we compute the BLEU score using a script provided by Moses, following BIBREF3 . We implement the Baseline and AAM by modifying the code of BIBREF9 , and SAAM with fairseq-py .
We use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$ . Since the morphemes of the word to be defined can benefit definition modeling, BIBREF3 obtain their best-performing model by adding a trainable embedding from a character-level CNN to the fixed word embedding. To obtain the state-of-the-art result as the baseline, we follow BIBREF3 and experiment with the character-level CNN with the same hyperparameters.
To be comparable with the baseline, we also use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for both the encoder and the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$ .
We use the same hyperparameters as BIBREF10 , set as $(d_{\text{model}}=300, d_{\text{hidden}}=2048, n_{\text{head}}=5, n_{\text{layer}}=6)$ . To be comparable with AAM, we use the same batch size of 128. We also employ the label smoothing technique BIBREF17 with a smoothing value of 0.1 during training.
Results
We report the experimental results on the CDM test set in Figure 3 . It shows that both of our proposed models, namely AAM and SAAM, achieve good results and outperform the baseline by a large margin. With sememes, AAM and SAAM improve over the baseline by +3.1 BLEU and +6.65 BLEU, respectively.
We also find that sememes are very useful for generating the definition. The incorporation of sememes improves the AAM by +3.32 BLEU and the SAAM by +3.53 BLEU. This can be explained by the fact that sememes help to disambiguate the word sense associated with the target definition.
Among all models, SAAM which incorporates sememes achieves the new state-of-the-art, with a BLEU score of 36.36 on the test set, demonstrating the effectiveness of sememes and the architecture of SAAM.
Table 2 lists some example definitions generated with the different models. For each word-sememes pair, the three generated definitions are listed in the order: Baseline, AAM and SAAM. For AAM and SAAM, we use the model that incorporates sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate a definition containing the token “旅行者” (tourists). However, by incorporating the sememes' information, especially the sememe “旅游” (tour), AAM and SAAM successfully generate “旅行者” (tourists). Manual inspection of other examples also supports our claim.
We also conduct an ablation study to evaluate the various choices we made for SAAM. We consider three key components: the position embedding, the adaptive attention layer, and the incorporated sememes. As illustrated in Table 3 , we remove one of these components at a time and report the performance of the resulting model on the validation set and the test set. We also list the performance of the baseline and AAM for reference.
It demonstrates that all components benefit the SAAM. Removing the position embedding costs 0.31 BLEU on the test set, and removing the adaptive attention layer costs 0.43 BLEU. The sememes matter the most: without incorporating sememes, the performance drops by 3.53 BLEU on the test set.
Definition Modeling
Distributed representations of words, or word embeddings BIBREF18 , have been widely used in the field of NLP in recent years. Since word embeddings have been shown to capture lexical semantics, BIBREF3 proposed the definition modeling task as a more transparent and direct representation of word embeddings. This work was followed by BIBREF4 , who studied the problem of word ambiguity in definition modeling by employing latent variable modeling and soft attention mechanisms. Both works focus on evaluating and interpreting word embeddings. In contrast, we incorporate sememes to generate word sense aware definitions for dictionary compilation.
Knowledge Bases
Recently, many knowledge bases (KBs) have been established in order to organize human knowledge in structured form. By providing human experiential knowledge, KBs are playing an increasingly important role as infrastructural facilities for natural language processing.
HowNet BIBREF19 is a knowledge base that annotates each concept in Chinese with one or more sememes. HowNet plays an important role in understanding the semantic meanings of concepts in human languages, and has been widely used in word representation learning BIBREF7 , word similarity computation BIBREF20 and sentiment analysis BIBREF21 . For example, BIBREF7 improved word representation learning by utilizing sememes to represent various senses of each word and selecting suitable senses in contexts with an attention mechanism.
Chinese Concept Dictionary (CCD) is a WordNet-like semantic lexicon BIBREF22 , BIBREF23 , where each concept is defined by a set of synonyms (SynSet). CCD has been widely used in many NLP tasks, such as word sense disambiguation BIBREF23 .
In this work, we annotate the word with aligned sememes from HowNet and definition from CCD.
Self-Attention
Self-attention is a special case of attention mechanism that relates different positions of a single sequence in order to compute a representation for the sequence. Self-attention has been successfully applied to many tasks recently BIBREF24 , BIBREF25 , BIBREF26 , BIBREF10 , BIBREF12 , BIBREF11 .
BIBREF10 introduced the first transduction model based on self-attention by replacing the recurrent layers commonly used in encoder-decoder architectures with multi-head self-attention. The proposed model called Transformer achieved the state-of-the-art performance on neural machine translation with reduced training time. After that, BIBREF12 demonstrated that self-attention can improve semantic role labeling by handling structural information and long range dependencies. BIBREF11 further extended self-attention to constituency parsing and showed that the use of self-attention helped to analyze the model by making explicit the manner in which information is propagated between different locations in the sentence.
Self-attention has many good properties. It reduces the computation complexity per layer, allows for more parallelization and reduces the path length between long-range dependencies in the network. In this paper, we use self-attention based architecture in SAAM to learn the relations of word, sememes and definition automatically.
Conclusion
We introduce the Chinese definition modeling task that generates a definition in Chinese for a given word and sememes of a specific word sense. This task is useful for dictionary compilation. To achieve this, we constructed the CDM dataset with word-sememes-definition triples. We propose two novel methods, AAM and SAAM, to generate word sense aware definition by utilizing sememes. In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method. | No |
9f6e877e3bde771595e8aee10c2656a0e7b9aeb2 | 9f6e877e3bde771595e8aee10c2656a0e7b9aeb2_0 | Q: Do they perform manual evaluation?
Text: Introduction
Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word. This task can benefit the compilation of dictionaries, especially dictionaries for Chinese as a foreign language (CFL) learners.
In recent years, the number of CFL learners has risen sharply. In 2017, 770,000 people took the Chinese Proficiency Test, an increase of 38% from 2016. However, most Chinese dictionaries are for native speakers. Since these dictionaries usually require a fairly high level of Chinese, it is necessary to build a dictionary specifically for CFL learners. Manually writing definitions relies on the knowledge of lexicographers and linguists, which is expensive and time-consuming BIBREF0 , BIBREF1 , BIBREF2 . Therefore, the study on writing definitions automatically is of practical significance.
Definition modeling was first proposed by BIBREF3 as a tool to evaluate different word embeddings. BIBREF4 extended the work by incorporating word sense disambiguation to generate context-aware word definition. Both methods are based on recurrent neural network encoder-decoder framework without attention. In contrast, this paper formulates the definition modeling task as an automatic way to accelerate dictionary compilation.
In this work, we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdm(CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling.
We propose two novel models to incorporate sememes into Chinese definition modeling: the Adaptive-Attention Model (AAM) and the Self- and Adaptive-Attention Model (SAAM). Both models are based on the encoder-decoder framework. The encoder maps word and sememes into a sequence of continuous representations, and the decoder then attends to the output of the encoder and generates the definition one word at a time. Different from the vanilla attention mechanism, the decoder of both models employs the adaptive attention mechanism to decide which sememes to focus on and when to pay attention to sememes at one time BIBREF9 . Following BIBREF3 , BIBREF4 , the AAM is built using recurrent neural networks (RNNs). However, recent works demonstrate that attention-based architecture that entirely eliminates recurrent connections can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . In the SAAM, we replace the LSTM-based encoder and decoder with an architecture based on self-attention. This fully attention-based model allows for more parallelization, reduces the path length between word, sememes and the definition, and can reach a new state-of-the-art on the definition modeling task. To the best of our knowledge, this is the first work to introduce the attention mechanism and utilize external resource for the definition modeling task.
In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method by +6.0 BLEU.
Methodology
The definition modeling task is to generate an explanatory sentence for the interpreted word. For example, given the word “旅馆” (hotel), a model should generate a sentence like this: “给旅行者提供食宿和其他服务的地方” (A place to provide residence and other services for tourists). Since distributed representations of words have been shown to capture lexical syntax and semantics, it is intuitive to employ word embeddings to generate natural language definitions.
Previously, BIBREF3 proposed several model architectures to generate a definition according to the distributed representation of a word. We briefly summarize their model with the best performance in Section "Experiments" and adopt it as our baseline model.
Inspired by the works that use sememes to improve word representation learning BIBREF7 and word similarity computation BIBREF8 , we propose the idea of incorporating sememes into definition modeling. Sememes can provide additional semantic information for the task. As shown in Figure 1 , sememes are highly correlated to the definition. For example, the sememe “场所” (place) is related with the word “地方” (place) of the definition, and the sememe “旅游” (tour) is correlated to the word “旅行者” (tourists) of the definition.
Therefore, to make full use of the sememes in CDM dataset, we propose AAM and SAAM for the task, in Section "Adaptive-Attention Model" and Section "Self- and Adaptive-Attention Model" , respectively.
Baseline Model
The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .
More concretely, given a word $x$ to be defined, the encoder reads the word and generates its word embedding $\mathbf {x}$ as the encoded information. Afterward, the decoder computes the conditional probability of each definition word $y_t$ depending on the previous definition words $y_{<t}$ , as well as the word being defined $x$ , i.e., $P(y_t|y_{<t},x)$ . $P(y_t|y_{<t},x)$ is given as:
$$& P(y_t|y_{<t},x) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {x})} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {x}) &$$ (Eq. 4)
where $\mathbf {z}_t$ is the decoder's hidden state at time $t$ , $f$ is a recurrent nonlinear function such as LSTM and GRU, and $\mathbf {x}$ is the embedding of the word being defined. Then the probability of $P(y | x)$ can be computed according to the probability chain rule:
$$P(y | x) = \prod _{t=1}^{T} P(y_t|y_{<t},x)$$ (Eq. 5)
We denote all the parameters in the model as $\theta $ and the definition corpus as $D_{x,y}$ , which is a set of word-definition pairs. Then the model parameters can be learned by maximizing the log-likelihood:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x, y \rangle \in D_{x,y}}\log P(y | x; \theta ) $$ (Eq. 6)
Adaptive-Attention Model
Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the probability of generating the definition $y=[y_1, \dots , y_t ]$ as:
$$P(y | x, s) = \prod _{t=1}^{T} P(y_t|y_{<t},x,s) $$ (Eq. 8)
Similar to Eq. 6 , we can maximize the log-likelihood with the definition corpus $D_{x,s,y}$ to learn model parameters:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x,s,y \rangle \in D_{x,s,y}}\log P(y | x, s; \theta ) $$ (Eq. 9)
The probability $P(y | x, s)$ can be implemented with an adaptive attention based encoder-decoder framework, which we call Adaptive-Attention Model (AAM). The new architecture consists of a bidirectional RNN as the encoder and a RNN decoder that adaptively attends to the sememes during decoding a definition.
Similar to BIBREF13 , the encoder is a bidirectional RNN, consisting of forward and backward RNNs. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the input sequence of vectors for the encoder as $\mathbf {v}=[\mathbf {v}_1,\dots ,\mathbf {v}_{N}]$ . The vector $\mathbf {v}_n$ is given as follows:
$$\mathbf {v}_n = [\mathbf {x}; \mathbf {s}_n ]$$ (Eq. 11)
where $\mathbf {x}$ is the vector representation of the word $x$ , $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ , and $[\mathbf {a};\mathbf {b}]$ denote concatenation of vector $\mathbf {a}$ and $\mathbf {b}$ .
The forward RNN $\overrightarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_1$ to $\mathbf {v}_N$ and calculates a forward hidden state for position $n$ as:
$$\overrightarrow{\mathbf {h}_{n}} &=& f(\mathbf {v}_n, \overrightarrow{\mathbf {h}_{n-1}})$$ (Eq. 12)
where $f$ is an LSTM or GRU. Similarly, the backward RNN $\overleftarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_N$ to $\mathbf {v}_1$ and obtain a backward hidden state for position $n$ as:
$$\overleftarrow{\mathbf {h}_{n}} &=& f(\mathbf {h}_n, \overleftarrow{\mathbf {h}_{n+1}})$$ (Eq. 13)
In this way, we obtain a sequence of encoder hidden states $\mathbf {h}=\left[\mathbf {h}_1,...,\mathbf {h}_N\right]$ , by concatenating the forward hidden state $\overrightarrow{\mathbf {h}_{n}}$ and the backward one $\overleftarrow{\mathbf {h}_{n}}$ at each position $n$ :
$$\mathbf {h}_n=\left[\overrightarrow{\mathbf {h}_{n}}, \overleftarrow{\mathbf {h}_{n}}\right]$$ (Eq. 14)
The hidden state $\mathbf {h}_n$ captures the sememe- and word-aware information of the $n$ -th sememe.
As attention-based neural encoder-decoder frameworks have shown great success in image captioning BIBREF14 , document summarization BIBREF15 and neural machine translation BIBREF13 , it is natural to adopt the attention-based recurrent decoder in BIBREF13 as our decoder. The vanilla attention attends to the sememes at every time step. However, not all words in the definition have corresponding sememes. For example, sememe “住下” (reside) could be useful when generating “食宿” (residence), but none of the sememes is useful when generating “提供” (provide). Besides, language correlations make the sememes unnecessary when generating words like “和” (and) and “给” (for).
Inspired by BIBREF9 , we introduce the adaptive attention mechanism for the decoder. At each time step $t$ , we summarize the time-varying sememes' information as sememe context, and the language model's information as LM context. Then, we use another attention to obtain the context vector, relying on either the sememe context or LM context.
More concretely, we define each conditional probability in Eq. 8 as:
$$& P(y_t|y_{<t},x,s) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {c}_t)} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {c}_t) & $$ (Eq. 17)
where $\mathbf {c}_t$ is the context vector from the output of the adaptive attention module at time $t$ , $\mathbf {z}_t$ is a decoder's hidden state at time $t$ .
To obtain the context vector $\mathbf {c}_t$ , we first compute the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ . Similar to the vanilla attention, the sememe context $\hat{\mathbf {c}_t}$ is obtained with a soft attention mechanism as:
$$\hat{\mathbf {c}_t} = \sum _{n=1}^{N} \alpha _{tn} \mathbf {h}_n,$$ (Eq. 18)
where
$$\alpha _{tn} &=& \frac{\mathrm {exp}(e_{tn})}{\sum _{i=1}^{N} \mathrm {exp}(e_{ti})} \nonumber \\ e_{tn} &=& \mathbf {w}_{\hat{c}}^T[\mathbf {h}_n; \mathbf {z}_{t-1}].$$ (Eq. 19)
Since the decoder's hidden states store syntax and semantic information for language modeling, we compute the LM context $\mathbf {o}_t$ with a gated unit, whose input is the definition word $y_t$ and the previous hidden state $\mathbf {z}_{t-1}$ :
$$\mathbf {g}_t &=& \sigma (\mathbf {W}_g [y_{t-1}; \mathbf {z}_{t-1}] + \mathbf {b}_g) \nonumber \\ \mathbf {o}_t &=& \mathbf {g}_t \odot \mathrm {tanh} (\mathbf {z}_{t-1}) $$ (Eq. 20)
Once the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ are ready, we can generate the context vector with an adaptive attention layer as:
$$\mathbf {c}_t = \beta _t \mathbf {o}_t + (1-\beta _t)\hat{\mathbf {c}_t}, $$ (Eq. 21)
where
$$\beta _{t} &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to} &=& (\mathbf {w}_c)^T[\mathbf {o}_t;\mathbf {z}_t] \nonumber \\ e_{t\hat{c}} &=& (\mathbf {w}_c)^T[\hat{\mathbf {c}_t};\mathbf {z}_t] $$ (Eq. 22)
$\beta _{t}$ is a scalar in range $[0,1]$ , which controls the relative importance of LM context and sememe context.
Once we obtain the context vector $\mathbf {c}_t$ , we can update the decoder's hidden state and generate the next word according to Eq. and Eq. 17 , respectively.
Self- and Adaptive-Attention Model
Recent works demonstrate that an architecture entirely based on attention can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . SAAM adopts similar architecture and replaces the recurrent connections in AAM with self-attention. Such architecture not only reduces the training time by allowing for more parallelization, but also learns better the dependency between word, sememes and tokens of the definition by reducing the path length between them.
Given the word to be defined $x$ and its corresponding ordered sememes $s=[s_1, \dots , s_{N}]$ , we combine them as the input sequence of embeddings for the encoder, i.e., $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ . The $n$ -th vector $\mathbf {v}_n$ is defined as:
$$\mathbf {v}_n = {\left\lbrace \begin{array}{ll} \mathbf {x}, &n=0 \cr \mathbf {s}_n, &n>0 \end{array}\right.}$$ (Eq. 25)
where $\mathbf {x}$ is the vector representation of the given word $x$ , and $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ .
Although the input sequence is not time ordered, position $n$ in the sequence carries some useful information. First, position 0 corresponds to the word to be defined, while other positions correspond to the sememes. Secondly, sememes are sorted into a logical order in HowNet. For example, as the first sememe of the word “旅馆” (hotel), the sememe “场所” (place) describes its most important aspect, namely, the definition of “旅馆” (hotel) should be “…… 的地方” (a place for ...). Therefore, we add learned position embedding to the input embeddings for the encoder:
$$\mathbf {v}_n = \mathbf {v}_n + \mathbf {p}_n$$ (Eq. 26)
where $\mathbf {p}_n$ is the position embedding that can be learned during training.
Then the vectors $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ are transformed by a stack of identical layers, where each layers consists of two sublayers: multi-head self-attention layer and position-wise fully connected feed-forward layer. Each of the layers are connected by residual connections, followed by layer normalization BIBREF16 . We refer the readers to BIBREF10 for the detail of the layers. The output of the encoder stack is a sequence of hidden states, denoted as $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ .
The decoder is also composed of a stack of identical layers. In BIBREF10 , each layer includes three sublayers: masked multi-head self-attention layer, multi-head attention layer that attends over the output of the encoder stack and position-wise fully connected feed-forward layer. In our model, we replace the two multi-head attention layers with an adaptive multi-head attention layer. Similarly to the adaptive attention layer in AAM, the adaptive multi-head attention layer can adaptivelly decide which sememes to focus on and when to attend to sememes at each time and each layer. Figure 2 shows the architecture of the decoder.
Different from the adaptive attention layer in AAM that uses single head attention to obtain the sememe context and gate unit to obtain the LM context, the adaptive multi-head attention layer utilizes multi-head attention to obtain both contexts. Multi-head attention performs multiple single head attentions in parallel with linearly projected keys, values and queries, and then combines the outputs of all heads to obtain the final attention result. We omit the detail here and refer the readers to BIBREF10 . Formally, given the hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ of the decoder, we obtain the LM context with multi-head self-attention:
$$\mathbf {o}_t^l = \textit {MultiHead}(\mathbf {z}_t^{l-1},\mathbf {z}_{\le t}^{l-1},\mathbf {z}_{\le t}^{l-1})$$ (Eq. 28)
where the decoder's hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ is the query, and $\mathbf {z}_{\le t}^{l-1}=[\mathbf {z}_1^{l-1},...,\mathbf {z}_t^{l-1}]$ , the decoder's hidden states from time 1 to time $t$ at layer $l-1$ , are the keys and values. To obtain better LM context, we employ residual connection and layer normalization after the multi-head self-attention. Similarly, the sememe context can be computed by attending to the encoder's outputs with multi-head attention:
$$\hat{\mathbf {c}_t}^l = \textit {MultiHead}(\mathbf {o}_t^l,\mathbf {h},\mathbf {h})$$ (Eq. 29)
where $\mathbf {o}_t^l$ is the query, and the output from the encoder stack $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ , are the values and keys.
Once obtaining the sememe context vector $\hat{\mathbf {c}_t}^l$ and the LM context $\mathbf {o}_t^l$ , we compute the output from the adaptive attention layer with:
$$\mathbf {c}_t^l = \beta _t^l \mathbf {o}_t^l + (1-\beta _t^l)\hat{\mathbf {c}_t}^l, $$ (Eq. 30)
where
$$\beta _{t}^l &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to}^l &=& (\mathbf {w}_c^l)^T[\mathbf {o}_t^l;\mathbf {z}_t^{l-1}] \nonumber \\ e_{t\hat{c}}^l &=& (\mathbf {w}_c^l)^T[\hat{\mathbf {c}_t}^l;\mathbf {z}_t^{l-1}] $$ (Eq. 31)
Experiments
In this section, we will first introduce the construction process of the CDM dataset, then the experimental results and analysis.
Dataset
To verify our proposed models, we construct the CDM dataset for the Chinese definition modeling task. cdmEach entry in the dataset is a triple that consists of: the interpreted word, sememes and a definition for a specific word sense, where the sememes are annotated with HowNet BIBREF5 , and the definition are annotated with Chinese Concept Dictionary (CCD) BIBREF6 .
Concretely, for a common word in HowNet and CCD, we first align its definitions from CCD and sememe groups from HowNet, where each group represents one word sense. We define the sememes of a definition as the combined sememes associated with any token of the definition. Then for each definition of a word, we align it with the sememe group that has the largest number of overlapping sememes with the definition's sememes. With such aligned definition and sememe group, we add an entry that consists of the word, the sememes of the aligned sememe group and the aligned definition. Each word can have multiple entries in the dataset, especially the polysemous word. To improve the quality of the created dataset, we filter out entries that the definition contains the interpreted word, or the interpreted word is among function words, numeral words and proper nouns.
After processing, we obtain the dataset that contains 104,517 entries with 30,052 unique interpreted words. We divide the dataset according to the unique interpreted words into training set, validation set and test set with a ratio of 18:1:1. Table 1 shows the detailed data statistics.
Settings
We show the effectiveness of all models on the CDM dataset. All the embeddings, including word and sememe embedding, are fixed 300 dimensional word embeddings pretrained on the Chinese Gigaword corpus (LDC2011T13). All definitions are segmented with Jiaba Chinese text segmentation tool and we use the resulting unique segments as the decoder vocabulary. To evaluate the difference between the generated results and the gold-standard definitions, we compute BLEU score using a script provided by Moses, following BIBREF3 . We implement the Baseline and AAM by modifying the code of BIBREF9 , and SAAM with fairseq-py .
For the baseline, we use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$. Since the morphemes of the word to be defined can benefit definition modeling, BIBREF3 obtain their best-performing model by adding a trainable embedding from a character-level CNN to the fixed word embedding. To obtain the state-of-the-art result as the baseline, we follow BIBREF3 and experiment with the character-level CNN with the same hyperparameters.
For the AAM, to be comparable with the baseline, we also use a two-layer LSTM network as the recurrent component. We set the batch size to 128 and the dimension of the hidden state to 300 for both the encoder and the decoder. The Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$.
For the SAAM, we follow the hyperparameter setup of BIBREF10 and set $(d_{\text{model}}=300, d_{\text{hidden}}=2048, n_{\text{head}}=5, n_{\text{layer}}=6)$. To be comparable with the AAM, we use the same batch size of 128. We also employ the label smoothing technique BIBREF17 with a smoothing value of 0.1 during training.
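For convenience, the settings above can be gathered in one place (a plain summary for reference, not a configuration file consumed by any of the toolkits mentioned):

```python
CONFIG = {
    "baseline_and_aam": {
        "recurrent_unit": "2-layer LSTM",
        "hidden_size": 300,
        "batch_size": 128,
        "optimizer": "Adam",
        "learning_rate": 1e-3,
        "embeddings": "fixed 300-d, Chinese Gigaword (LDC2011T13)",
    },
    "saam": {
        "d_model": 300,
        "d_hidden": 2048,
        "n_head": 5,
        "n_layer": 6,
        "batch_size": 128,
        "optimizer": "Adam",
        "label_smoothing": 0.1,
    },
}
```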
Results
We report the experimental results on the CDM test set in Figure 3. It shows that both of our proposed models, namely the AAM and the SAAM, achieve good results and outperform the baseline by a large margin. With sememes, the AAM and the SAAM improve over the baseline by +3.1 BLEU and +6.65 BLEU, respectively.
We also find that sememes are very useful for generating the definition. The incorporation of sememes improves the AAM by +3.32 BLEU and the SAAM by +3.53 BLEU. This can be explained by the fact that sememes help to disambiguate the word sense associated with the target definition.
Among all models, the SAAM that incorporates sememes achieves a new state of the art, with a BLEU score of 36.36 on the test set, demonstrating the effectiveness of both the sememes and the architecture of the SAAM.
Table 2 lists some example definitions generated with different models. For each word-sememes pair, the three generated definitions are listed in the order Baseline, AAM and SAAM. For the AAM and the SAAM, we use the models that incorporate sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate a definition containing the token “旅行者” (tourists). However, by incorporating the sememes' information, especially the sememe “旅游” (tour), the AAM and the SAAM successfully generate “旅行者” (tourists). Manual inspection of other examples also supports our claim.
We also conduct an ablation study to evaluate the various choices we made for the SAAM. We consider three key components: the position embedding, the adaptive attention layer, and the incorporated sememes. As illustrated in Table 3, we remove one of these components at a time and report the performance of the resulting model on the validation and test sets. We also list the performance of the baseline and the AAM for reference.
It demonstrates that all components benefit the SAAM. Removing the position embedding leaves the model 0.31 BLEU below the full SAAM on the test set, and removing the adaptive attention layer leaves it 0.43 BLEU below. Sememes matter the most: without incorporating sememes, the performance drops by 3.53 BLEU on the test set.
Definition Modeling
Distributed representations of words, or word embeddings BIBREF18, have been widely used in NLP in recent years. Since word embeddings have been shown to capture lexical semantics, BIBREF3 proposed the definition modeling task as a more transparent and direct representation of word embeddings. This work was followed by BIBREF4, who studied the problem of word ambiguities in definition modeling by employing latent variable modeling and soft attention mechanisms. Both works focus on evaluating and interpreting word embeddings. In contrast, we incorporate sememes to generate word sense aware definitions for dictionary compilation.
Knowledge Bases
Recently, many knowledge bases (KBs) have been established in order to organize human knowledge in structured forms. By providing human experiential knowledge, KBs play an increasingly important role as infrastructure for natural language processing.
HowNet BIBREF19 is a knowledge base that annotates each concept in Chinese with one or more sememes. HowNet plays an important role in understanding the semantic meanings of concepts in human languages, and has been widely used in word representation learning BIBREF7, word similarity computation BIBREF20 and sentiment analysis BIBREF21. For example, BIBREF7 improved word representation learning by utilizing sememes to represent various senses of each word and selecting suitable senses in contexts with an attention mechanism.
Chinese Concept Dictionary (CCD) is a WordNet-like semantic lexicon BIBREF22, BIBREF23, where each concept is defined by a set of synonyms (SynSet). CCD has been widely used in many NLP tasks, such as word sense disambiguation BIBREF23.
In this work, we annotate each word with aligned sememes from HowNet and a definition from CCD.
Self-Attention
Self-attention is a special case of the attention mechanism that relates different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been successfully applied to many tasks recently BIBREF24 , BIBREF25 , BIBREF26 , BIBREF10 , BIBREF12 , BIBREF11 .
BIBREF10 introduced the first transduction model based entirely on self-attention by replacing the recurrent layers commonly used in encoder-decoder architectures with multi-head self-attention. The proposed model, called the Transformer, achieved state-of-the-art performance on neural machine translation with reduced training time. After that, BIBREF12 demonstrated that self-attention can improve semantic role labeling by handling structural information and long-range dependencies. BIBREF11 further extended self-attention to constituency parsing and showed that the use of self-attention helps to analyze the model by making explicit the manner in which information is propagated between different locations in the sentence.
Self-attention has many good properties: it reduces the computational complexity per layer, allows for more parallelization, and reduces the path length between long-range dependencies in the network. In this paper, we use a self-attention based architecture in the SAAM to learn the relations among the word, its sememes and the definition automatically.
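To make the mechanism referred to above concrete, a bare-bones single-head scaled dot-product self-attention (with hypothetical projection matrices, unrelated to the multi-head layers used in our models) can be written as:

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over one sequence x: (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v                      # project the same sequence into queries, keys, values
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)   # every position scores every position
    return torch.softmax(scores, dim=-1) @ v                 # weighted sum of values
```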
Conclusion
We introduce the Chinese definition modeling task, which generates a definition in Chinese for a given word and the sememes of a specific word sense. This task is useful for dictionary compilation. To achieve this, we constructed the CDM dataset with word-sememes-definition triples. We propose two novel methods, the AAM and the SAAM, to generate word sense aware definitions by utilizing sememes. In experiments on the CDM dataset, we show that our proposed AAM and SAAM outperform the state-of-the-art approach by a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance, improving over the state-of-the-art method.
Text: Introduction
Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word. This task can benefit the compilation of dictionaries, especially dictionaries for Chinese as a foreign language (CFL) learners.
In recent years, the number of CFL learners has risen sharply. In 2017, 770,000 people took the Chinese Proficiency Test, an increase of 38% from 2016. However, most Chinese dictionaries are for native speakers. Since these dictionaries usually require a fairly high level of Chinese, it is necessary to build a dictionary specifically for CFL learners. Manually writing definitions relies on the knowledge of lexicographers and linguists, which is expensive and time-consuming BIBREF0 , BIBREF1 , BIBREF2 . Therefore, the study on writing definitions automatically is of practical significance.
Definition modeling was first proposed by BIBREF3 as a tool to evaluate different word embeddings. BIBREF4 extended the work by incorporating word sense disambiguation to generate context-aware word definition. Both methods are based on recurrent neural network encoder-decoder framework without attention. In contrast, this paper formulates the definition modeling task as an automatic way to accelerate dictionary compilation.
In this work, we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdm(CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling.
We propose two novel models to incorporate sememes into Chinese definition modeling: the Adaptive-Attention Model (AAM) and the Self- and Adaptive-Attention Model (SAAM). Both models are based on the encoder-decoder framework. The encoder maps word and sememes into a sequence of continuous representations, and the decoder then attends to the output of the encoder and generates the definition one word at a time. Different from the vanilla attention mechanism, the decoder of both models employs the adaptive attention mechanism to decide which sememes to focus on and when to pay attention to sememes at one time BIBREF9 . Following BIBREF3 , BIBREF4 , the AAM is built using recurrent neural networks (RNNs). However, recent works demonstrate that attention-based architecture that entirely eliminates recurrent connections can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . In the SAAM, we replace the LSTM-based encoder and decoder with an architecture based on self-attention. This fully attention-based model allows for more parallelization, reduces the path length between word, sememes and the definition, and can reach a new state-of-the-art on the definition modeling task. To the best of our knowledge, this is the first work to introduce the attention mechanism and utilize external resource for the definition modeling task.
In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method by +6.0 BLEU.
Methodology
The definition modeling task is to generate an explanatory sentence for the interpreted word. For example, given the word “旅馆” (hotel), a model should generate a sentence like this: “给旅行者提供食宿和其他服务的地方” (A place to provide residence and other services for tourists). Since distributed representations of words have been shown to capture lexical syntax and semantics, it is intuitive to employ word embeddings to generate natural language definitions.
Previously, BIBREF3 proposed several model architectures to generate a definition according to the distributed representation of a word. We briefly summarize their model with the best performance in Section "Experiments" and adopt it as our baseline model.
Inspired by the works that use sememes to improve word representation learning BIBREF7 and word similarity computation BIBREF8 , we propose the idea of incorporating sememes into definition modeling. Sememes can provide additional semantic information for the task. As shown in Figure 1 , sememes are highly correlated to the definition. For example, the sememe “场所” (place) is related with the word “地方” (place) of the definition, and the sememe “旅游” (tour) is correlated to the word “旅行者” (tourists) of the definition.
Therefore, to make full use of the sememes in CDM dataset, we propose AAM and SAAM for the task, in Section "Adaptive-Attention Model" and Section "Self- and Adaptive-Attention Model" , respectively.
Baseline Model
The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .
More concretely, given a word $x$ to be defined, the encoder reads the word and generates its word embedding $\mathbf {x}$ as the encoded information. Afterward, the decoder computes the conditional probability of each definition word $y_t$ depending on the previous definition words $y_{<t}$ , as well as the word being defined $x$ , i.e., $P(y_t|y_{<t},x)$ . $P(y_t|y_{<t},x)$ is given as:
$$& P(y_t|y_{<t},x) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {x})} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {x}) &$$ (Eq. 4)
where $\mathbf {z}_t$ is the decoder's hidden state at time $t$ , $f$ is a recurrent nonlinear function such as LSTM and GRU, and $\mathbf {x}$ is the embedding of the word being defined. Then the probability of $P(y | x)$ can be computed according to the probability chain rule:
$$P(y | x) = \prod _{t=1}^{T} P(y_t|y_{<t},x)$$ (Eq. 5)
We denote all the parameters in the model as $\theta $ and the definition corpus as $D_{x,y}$ , which is a set of word-definition pairs. Then the model parameters can be learned by maximizing the log-likelihood:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x, y \rangle \in D_{x,y}}\log P(y | x; \theta ) $$ (Eq. 6)
Adaptive-Attention Model
Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the probability of generating the definition $y=[y_1, \dots , y_t ]$ as:
$$P(y | x, s) = \prod _{t=1}^{T} P(y_t|y_{<t},x,s) $$ (Eq. 8)
Similar to Eq. 6 , we can maximize the log-likelihood with the definition corpus $D_{x,s,y}$ to learn model parameters:
$$\hat{\theta } = \mathop {\rm argmax}_{\theta } \sum _{\langle x,s,y \rangle \in D_{x,s,y}}\log P(y | x, s; \theta ) $$ (Eq. 9)
The probability $P(y | x, s)$ can be implemented with an adaptive attention based encoder-decoder framework, which we call Adaptive-Attention Model (AAM). The new architecture consists of a bidirectional RNN as the encoder and a RNN decoder that adaptively attends to the sememes during decoding a definition.
Similar to BIBREF13 , the encoder is a bidirectional RNN, consisting of forward and backward RNNs. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \dots , s_N ]$ , we define the input sequence of vectors for the encoder as $\mathbf {v}=[\mathbf {v}_1,\dots ,\mathbf {v}_{N}]$ . The vector $\mathbf {v}_n$ is given as follows:
$$\mathbf {v}_n = [\mathbf {x}; \mathbf {s}_n ]$$ (Eq. 11)
where $\mathbf {x}$ is the vector representation of the word $x$ , $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ , and $[\mathbf {a};\mathbf {b}]$ denote concatenation of vector $\mathbf {a}$ and $\mathbf {b}$ .
The forward RNN $\overrightarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_1$ to $\mathbf {v}_N$ and calculates a forward hidden state for position $n$ as:
$$\overrightarrow{\mathbf {h}_{n}} &=& f(\mathbf {v}_n, \overrightarrow{\mathbf {h}_{n-1}})$$ (Eq. 12)
where $f$ is an LSTM or GRU. Similarly, the backward RNN $\overleftarrow{f}$ reads the input sequence of vectors from $\mathbf {v}_N$ to $\mathbf {v}_1$ and obtain a backward hidden state for position $n$ as:
$$\overleftarrow{\mathbf {h}_{n}} &=& f(\mathbf {h}_n, \overleftarrow{\mathbf {h}_{n+1}})$$ (Eq. 13)
In this way, we obtain a sequence of encoder hidden states $\mathbf {h}=\left[\mathbf {h}_1,...,\mathbf {h}_N\right]$ , by concatenating the forward hidden state $\overrightarrow{\mathbf {h}_{n}}$ and the backward one $\overleftarrow{\mathbf {h}_{n}}$ at each position $n$ :
$$\mathbf {h}_n=\left[\overrightarrow{\mathbf {h}_{n}}, \overleftarrow{\mathbf {h}_{n}}\right]$$ (Eq. 14)
The hidden state $\mathbf {h}_n$ captures the sememe- and word-aware information of the $n$ -th sememe.
As attention-based neural encoder-decoder frameworks have shown great success in image captioning BIBREF14 , document summarization BIBREF15 and neural machine translation BIBREF13 , it is natural to adopt the attention-based recurrent decoder in BIBREF13 as our decoder. The vanilla attention attends to the sememes at every time step. However, not all words in the definition have corresponding sememes. For example, sememe “住下” (reside) could be useful when generating “食宿” (residence), but none of the sememes is useful when generating “提供” (provide). Besides, language correlations make the sememes unnecessary when generating words like “和” (and) and “给” (for).
Inspired by BIBREF9 , we introduce the adaptive attention mechanism for the decoder. At each time step $t$ , we summarize the time-varying sememes' information as sememe context, and the language model's information as LM context. Then, we use another attention to obtain the context vector, relying on either the sememe context or LM context.
More concretely, we define each conditional probability in Eq. 8 as:
$$& P(y_t|y_{<t},x,s) \propto \exp {(y_t;\mathbf {z}_t,\mathbf {c}_t)} & \\ & \mathbf {z}_t = f(\mathbf {z}_{t-1},y_{t-1},\mathbf {c}_t) & $$ (Eq. 17)
where $\mathbf {c}_t$ is the context vector from the output of the adaptive attention module at time $t$ , $\mathbf {z}_t$ is a decoder's hidden state at time $t$ .
To obtain the context vector $\mathbf {c}_t$ , we first compute the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ . Similar to the vanilla attention, the sememe context $\hat{\mathbf {c}_t}$ is obtained with a soft attention mechanism as:
$$\hat{\mathbf {c}_t} = \sum _{n=1}^{N} \alpha _{tn} \mathbf {h}_n,$$ (Eq. 18)
where
$$\alpha _{tn} &=& \frac{\mathrm {exp}(e_{tn})}{\sum _{i=1}^{N} \mathrm {exp}(e_{ti})} \nonumber \\ e_{tn} &=& \mathbf {w}_{\hat{c}}^T[\mathbf {h}_n; \mathbf {z}_{t-1}].$$ (Eq. 19)
Since the decoder's hidden states store syntax and semantic information for language modeling, we compute the LM context $\mathbf {o}_t$ with a gated unit, whose input is the definition word $y_t$ and the previous hidden state $\mathbf {z}_{t-1}$ :
$$\mathbf {g}_t &=& \sigma (\mathbf {W}_g [y_{t-1}; \mathbf {z}_{t-1}] + \mathbf {b}_g) \nonumber \\ \mathbf {o}_t &=& \mathbf {g}_t \odot \mathrm {tanh} (\mathbf {z}_{t-1}) $$ (Eq. 20)
Once the sememe context vector $\hat{\mathbf {c}_t}$ and the LM context $\mathbf {o}_t$ are ready, we can generate the context vector with an adaptive attention layer as:
$$\mathbf {c}_t = \beta _t \mathbf {o}_t + (1-\beta _t)\hat{\mathbf {c}_t}, $$ (Eq. 21)
where
$$\beta _{t} &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to} &=& (\mathbf {w}_c)^T[\mathbf {o}_t;\mathbf {z}_t] \nonumber \\ e_{t\hat{c}} &=& (\mathbf {w}_c)^T[\hat{\mathbf {c}_t};\mathbf {z}_t] $$ (Eq. 22)
$\beta _{t}$ is a scalar in range $[0,1]$ , which controls the relative importance of LM context and sememe context.
Once we obtain the context vector $\mathbf {c}_t$ , we can update the decoder's hidden state and generate the next word according to Eq. and Eq. 17 , respectively.
Self- and Adaptive-Attention Model
Recent works demonstrate that an architecture entirely based on attention can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . SAAM adopts similar architecture and replaces the recurrent connections in AAM with self-attention. Such architecture not only reduces the training time by allowing for more parallelization, but also learns better the dependency between word, sememes and tokens of the definition by reducing the path length between them.
Given the word to be defined $x$ and its corresponding ordered sememes $s=[s_1, \dots , s_{N}]$ , we combine them as the input sequence of embeddings for the encoder, i.e., $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ . The $n$ -th vector $\mathbf {v}_n$ is defined as:
$$\mathbf {v}_n = {\left\lbrace \begin{array}{ll} \mathbf {x}, &n=0 \cr \mathbf {s}_n, &n>0 \end{array}\right.}$$ (Eq. 25)
where $\mathbf {x}$ is the vector representation of the given word $x$ , and $\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ .
Although the input sequence is not time ordered, position $n$ in the sequence carries some useful information. First, position 0 corresponds to the word to be defined, while other positions correspond to the sememes. Secondly, sememes are sorted into a logical order in HowNet. For example, as the first sememe of the word “旅馆” (hotel), the sememe “场所” (place) describes its most important aspect, namely, the definition of “旅馆” (hotel) should be “…… 的地方” (a place for ...). Therefore, we add learned position embedding to the input embeddings for the encoder:
$$\mathbf {v}_n = \mathbf {v}_n + \mathbf {p}_n$$ (Eq. 26)
where $\mathbf {p}_n$ is the position embedding that can be learned during training.
Then the vectors $\mathbf {v}=[\mathbf {v}_0, \mathbf {v}_1, \dots , \mathbf {v}_{N}]$ are transformed by a stack of identical layers, where each layers consists of two sublayers: multi-head self-attention layer and position-wise fully connected feed-forward layer. Each of the layers are connected by residual connections, followed by layer normalization BIBREF16 . We refer the readers to BIBREF10 for the detail of the layers. The output of the encoder stack is a sequence of hidden states, denoted as $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ .
The decoder is also composed of a stack of identical layers. In BIBREF10 , each layer includes three sublayers: masked multi-head self-attention layer, multi-head attention layer that attends over the output of the encoder stack and position-wise fully connected feed-forward layer. In our model, we replace the two multi-head attention layers with an adaptive multi-head attention layer. Similarly to the adaptive attention layer in AAM, the adaptive multi-head attention layer can adaptivelly decide which sememes to focus on and when to attend to sememes at each time and each layer. Figure 2 shows the architecture of the decoder.
Different from the adaptive attention layer in AAM that uses single head attention to obtain the sememe context and gate unit to obtain the LM context, the adaptive multi-head attention layer utilizes multi-head attention to obtain both contexts. Multi-head attention performs multiple single head attentions in parallel with linearly projected keys, values and queries, and then combines the outputs of all heads to obtain the final attention result. We omit the detail here and refer the readers to BIBREF10 . Formally, given the hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ of the decoder, we obtain the LM context with multi-head self-attention:
$$\mathbf {o}_t^l = \textit {MultiHead}(\mathbf {z}_t^{l-1},\mathbf {z}_{\le t}^{l-1},\mathbf {z}_{\le t}^{l-1})$$ (Eq. 28)
where the decoder's hidden state $\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ is the query, and $\mathbf {z}_{\le t}^{l-1}=[\mathbf {z}_1^{l-1},...,\mathbf {z}_t^{l-1}]$ , the decoder's hidden states from time 1 to time $t$ at layer $l-1$ , are the keys and values. To obtain better LM context, we employ residual connection and layer normalization after the multi-head self-attention. Similarly, the sememe context can be computed by attending to the encoder's outputs with multi-head attention:
$$\hat{\mathbf {c}_t}^l = \textit {MultiHead}(\mathbf {o}_t^l,\mathbf {h},\mathbf {h})$$ (Eq. 29)
where $\mathbf {o}_t^l$ is the query, and the output from the encoder stack $\mathbf {h}=[\mathbf {h}_0, \mathbf {h}_1, \dots , \mathbf {h}_{N}]$ , are the values and keys.
Once obtaining the sememe context vector $\hat{\mathbf {c}_t}^l$ and the LM context $\mathbf {o}_t^l$ , we compute the output from the adaptive attention layer with:
$$\mathbf {c}_t^l = \beta _t^l \mathbf {o}_t^l + (1-\beta _t^l)\hat{\mathbf {c}_t}^l, $$ (Eq. 30)
where
$$\beta _{t}^l &=& \frac{\mathrm {exp}(e_{to})}{\mathrm {exp}(e_{to})+\mathrm {exp}(e_{t\hat{c}})} \nonumber \\ e_{to}^l &=& (\mathbf {w}_c^l)^T[\mathbf {o}_t^l;\mathbf {z}_t^{l-1}] \nonumber \\ e_{t\hat{c}}^l &=& (\mathbf {w}_c^l)^T[\hat{\mathbf {c}_t}^l;\mathbf {z}_t^{l-1}] $$ (Eq. 31)
Experiments
In this section, we will first introduce the construction process of the CDM dataset, then the experimental results and analysis.
Dataset
To verify our proposed models, we construct the CDM dataset for the Chinese definition modeling task. cdmEach entry in the dataset is a triple that consists of: the interpreted word, sememes and a definition for a specific word sense, where the sememes are annotated with HowNet BIBREF5 , and the definition are annotated with Chinese Concept Dictionary (CCD) BIBREF6 .
Concretely, for a common word in HowNet and CCD, we first align its definitions from CCD and sememe groups from HowNet, where each group represents one word sense. We define the sememes of a definition as the combined sememes associated with any token of the definition. Then for each definition of a word, we align it with the sememe group that has the largest number of overlapping sememes with the definition's sememes. With such aligned definition and sememe group, we add an entry that consists of the word, the sememes of the aligned sememe group and the aligned definition. Each word can have multiple entries in the dataset, especially the polysemous word. To improve the quality of the created dataset, we filter out entries that the definition contains the interpreted word, or the interpreted word is among function words, numeral words and proper nouns.
After processing, we obtain the dataset that contains 104,517 entries with 30,052 unique interpreted words. We divide the dataset according to the unique interpreted words into training set, validation set and test set with a ratio of 18:1:1. Table 1 shows the detailed data statistics.
Settings
We show the effectiveness of all models on the CDM dataset. All the embeddings, including word and sememe embedding, are fixed 300 dimensional word embeddings pretrained on the Chinese Gigaword corpus (LDC2011T13). All definitions are segmented with Jiaba Chinese text segmentation tool and we use the resulting unique segments as the decoder vocabulary. To evaluate the difference between the generated results and the gold-standard definitions, we compute BLEU score using a script provided by Moses, following BIBREF3 . We implement the Baseline and AAM by modifying the code of BIBREF9 , and SAAM with fairseq-py .
We use two-layer LSTM network as the recurrent component. We set batch size to 128, and the dimension of the hidden state to 300 for the decoder. Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$ . Since the morphemes of the word to be defined can benefit definition modeling, BIBREF3 obtain the model with the best performance by adding a trainable embedding from character-level CNN to the fixed word embedding. To obtain the state-of-the-art result as the baseline, we follow BIBREF3 and experiment with the character-level CNN with the same hyperparameters.
To be comparable with the baseline, we also use two-layer LSTM network as the recurrent component.We set batch size to 128, and the dimension of the hidden state to 300 for both the encoder and the decoder. Adam optimizer is employed with an initial learning rate of $1\times 10^{-3}$ .
We have the same hyperparameters as BIBREF10 , and set these hyperparameters as $(d_{\text{model}}=300, d_{\text{hidden}}=2048, n_{\text{head}}=5, n_{\text{layer}}=6)$ . To be comparable with AAM, we use the same batch size as 128. We also employ label smoothing technique BIBREF17 with a smoothing value of 0.1 during training.
Results
We report the experimental results on CDM test set in Figure 3 . It shows that both of our proposed models, namely AAM and SAAM, achieve good results and outperform the baseline by a large margin. With sememes, AAM and SAAM can improve over the baseline with +3.1 BLEU and +6.65 BLEU, respectively.
We also find that sememes are very useful for generating the definition. The incorporation of sememes improves the AAM with +3.32 BLEU and the SAAM with +3.53 BLEU. This can be explained by that sememes help to disambiguate the word sense associated with the target definition.
Among all models, SAAM which incorporates sememes achieves the new state-of-the-art, with a BLEU score of 36.36 on the test set, demonstrating the effectiveness of sememes and the architecture of SAAM.
Table 2 lists some example definitions generated with different models. For each word-sememes pair, the generated three definitions are ordered according to the order: Baseline, AAM and SAAM. For AAM and SAAM, we use the model that incorporates sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate definition containing the token “旅行者”(tourists). However, by incoporating sememes' information, especially the sememe “旅游” (tour), AAM and SAAM successfully generate “旅行者”(tourists). Manual inspection of others examples also supports our claim.
We also conduct an ablation study to evaluate the various choices we made for SAAM. We consider three key components: position embedding, the adaptive attention layer, and the incorporated sememes. As illustrated in Table 3 , we remove one of these components at a time and report the performance of the resulting model on the validation set and test set. We also list the performance of the baseline and AAM for reference.
It demonstrates that all components benefit the SAAM. Removing the position embedding is 0.31 BLEU below the full SAAM on the test set, and removing the adaptive attention layer is 0.43 BLEU below it. Sememes matter the most: without incorporating sememes, the performance drops by 3.53 BLEU on the test set.
Definition Modeling
Distributed representations of words, or word embeddings BIBREF18 , have been widely used in the field of NLP in recent years. Since word embeddings have been shown to capture lexical semantics, BIBREF3 proposed the definition modeling task as a more transparent and direct representation of word embeddings. This work was followed by BIBREF4 , who studied the problem of word ambiguity in definition modeling by employing latent variable modeling and soft attention mechanisms. Both works focus on evaluating and interpreting word embeddings. In contrast, we incorporate sememes to generate word sense aware definitions for dictionary compilation.
Knowledge Bases
Recently, many knowledge bases (KBs) have been established in order to organize human knowledge in structured form. By providing human experiential knowledge, KBs are playing an increasingly important role as infrastructure for natural language processing.
HowNet BIBREF19 is a knowledge base that annotates each concept in Chinese with one or more sememes. HowNet plays an important role in understanding the semantic meanings of concepts in human languages, and has been widely used in word representation learning BIBREF7 , word similarity computation BIBREF20 and sentiment analysis BIBREF21 . For example, BIBREF7 improved word representation learning by utilizing sememes to represent various senses of each word and selecting suitable senses in contexts with an attention mechanism.
Chinese Concept Dictionary (CCD) is a WordNet-like semantic lexicon BIBREF22 , BIBREF23 , where each concept is defined by a set of synonyms (SynSet). CCD has been widely used in many NLP tasks, such as word sense disambiguation BIBREF23 .
In this work, we annotate the word with aligned sememes from HowNet and definition from CCD.
Self-Attention
Self-attention is a special case of attention mechanism that relates different positions of a single sequence in order to compute a representation for the sequence. Self-attention has been successfully applied to many tasks recently BIBREF24 , BIBREF25 , BIBREF26 , BIBREF10 , BIBREF12 , BIBREF11 .
BIBREF10 introduced the first transduction model based on self-attention by replacing the recurrent layers commonly used in encoder-decoder architectures with multi-head self-attention. The proposed model called Transformer achieved the state-of-the-art performance on neural machine translation with reduced training time. After that, BIBREF12 demonstrated that self-attention can improve semantic role labeling by handling structural information and long range dependencies. BIBREF11 further extended self-attention to constituency parsing and showed that the use of self-attention helped to analyze the model by making explicit the manner in which information is propagated between different locations in the sentence.
Self-attention has many good properties. It reduces the computation complexity per layer, allows for more parallelization and reduces the path length between long-range dependencies in the network. In this paper, we use self-attention based architecture in SAAM to learn the relations of word, sememes and definition automatically.
Conclusion
We introduce the Chinese definition modeling task that generates a definition in Chinese for a given word and sememes of a specific word sense. This task is useful for dictionary compilation. To achieve this, we constructed the CDM dataset with word-sememes-definition triples. We propose two novel methods, AAM and SAAM, to generate word sense aware definition by utilizing sememes. In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method. | Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes |
589be705a5cc73a23f30decba23ce58ec39d313b | 589be705a5cc73a23f30decba23ce58ec39d313b_0 | Q: What data did they use?
Text: Introduction
The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. While recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) initially dominated the field, recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in so-called transformer models BIBREF0. This latter type of model caused a new revolution in NLP and led to popular language models like GPT-2 BIBREF1, BIBREF2 and ELMo BIBREF3. BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left-to-right or the other way around. This model was later re-implemented, critically evaluated and improved in the RoBERTa model BIBREF5.
These large-scale transformer models provide the advantage of being able to solve NLP tasks by having a common, expensive pre-training phase, followed by a smaller fine-tuning phase. The pre-training happens in an unsupervised way by providing large corpora of text in the desired language. The second phase only needs a relatively small annotated data set for fine-tuning to outperform previous popular approaches in one of a large number of possible language tasks.
While language models are usually trained on English data, some multilingual models also exist. These are usually trained on a large quantity of text in different languages. For example, Multilingual-BERT is trained on a collection of corpora in 104 different languages BIBREF4, and generalizes language components well across languages BIBREF6. However, models trained on data from one specific language usually improve the performance of multilingual models for this particular language BIBREF7, BIBREF8. Training a RoBERTa model BIBREF5 on a Dutch dataset thus has a lot of potential for increasing performance for many downstream Dutch NLP tasks. In this paper, we introduce RobBERT, a Dutch RoBERTa-based pre-trained language model, and critically test its performance using natural language tasks against other Dutch languages models.
Related Work
Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they vastly improved state-of-the-art results for English to German in an efficient manner BIBREF0. This transformer model architecture resulted in a new paradigm in NLP with the migration from sequence-to-sequence recurrent neural networks to transformer-based models by removing the recurrent component and only keeping attention. This cornerstone was used for BERT, a transformer model that obtained state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference BIBREF4. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is word masking (also called the Cloze task BIBREF9 or masked language model (MLM)), where the model has to guess which word is masked in certain position in the text. The second task is next sentence prediction. This is done by predicting if two sentences are subsequent in the corpus, or if they are randomly sampled from the corpus. These tasks allowed the model to create internal representations about a language, which could thereafter be reused for different language tasks. This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a very distinct range of tasks and still outperform previous architectures BIBREF4.
Transformer models are also capable of generating contextualized word embeddings. These contextualized embeddings were presented by BIBREF3 and addressed the well known issue with a word's meaning being defined by its context (e.g. “a stick” versus “let's stick to”). This lack of context is something that traditional word embeddings like word2vec BIBREF10 or GloVe BIBREF11 lack, whereas BERT automatically incorporates the context a word occurs in.
Another advantage of transformer models is that attention allows them to better resolve coreferences between words BIBREF12. A typical example of the importance of coreference resolution is “The trophy doesn’t fit in the brown suitcase because it’s too big.”, where the word “it” would refer to the suitcase instead of the trophy if the last word were changed to “small” BIBREF13. Being able to resolve these coreferences is for example important for translating to languages with grammatical gender, as suitcase and trophy have different genders in French.
Although BERT has been shown to be a useful language model, it has also received some scrutiny on the training and pre-processing of the language model. As mentioned before, BERT uses next sentence prediction (NSP) as one of its two training tasks. In NSP, the model has to predict whether two sentences follow each other in the training text, or are just randomly selected from the corpora. The authors of RoBERTa BIBREF5 showed that while this task made the model achieve a better performance, it was not due to its intended reason, as it might merely predict relatedness rather than subsequent sentences. That BIBREF4 trained a better model when using NSP than without NSP is likely due to the model learning long-range dependencies in text from its inputs, which are longer than just the single sentence on itself. As such, the RoBERTa model uses only the MLM task, and uses multiple full sentences in every input. Other research improved the NSP task by instead making the model predict the correct order of two sentences, where the model thus has to predict whether the sentences occur in the given order in the corpus, or occur in flipped order BIBREF14.
BIBREF4 also presented a multilingual model (mBERT) with the same architecture as BERT, but trained on Wikipedia corpora in 104 languages. Unfortunately, the quality of these multilingual embeddings is often considered worse than their monolingual counterparts. BIBREF15 illustrated this difference in quality for German and English models in a generative setting. The monolingual French CamemBERT model BIBREF7 also compared their model to mBERT, which performed poorer on all tasks. More recently, BIBREF8 also showed similar results for Dutch using their BERTje model, outperforming multilingual BERT in a wide range of tasks, such as sentiment analysis and part-of-speech tagging. Since this work is concurrent with ours, we compare our results with BERTje in this paper.
Pre-training RobBERT
This section describes the data and training regime we used to train our Dutch RoBERTa-based language model called RobBERT.
Pre-training RobBERT ::: Data
We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text.
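For readers who want to inspect this corpus, the snippet below streams the Dutch portion of OSCAR through the Hugging Face datasets library; the configuration name is an assumption about the hub release, and this is not how the corpus was obtained for pre-training.

```python
# Hedged sketch: stream the Dutch OSCAR split via the `datasets` library.
# The config name "unshuffled_deduplicated_nl" is an assumption about the hub
# release; the authors describe obtaining the corpus differently.
from datasets import load_dataset

oscar_nl = load_dataset("oscar", "unshuffled_deduplicated_nl",
                        split="train", streaming=True)

for i, record in enumerate(oscar_nl):
    print(record["text"][:80])   # each record is one (possibly multi-sentence) line
    if i == 2:
        break
```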
Our data differs in several ways from the data used to train BERTje, a BERT-based Dutch language model BIBREF8. Firstly, they trained the model on an assembly of multiple Dutch corpora totalling only 12 GB. Secondly, they used WordPiece as subword embeddings, since this is what the original BERT architecture uses. RobBERT on the other hand uses Byte Pair Encoding (BPE), which is also used by GPT-2 BIBREF2 and RoBERTa BIBREF5.
Pre-training RobBERT ::: Training
RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT BIBREF5. The architecture of our language model is thus equal to the original BERT model with 12 self-attention layers with 12 heads BIBREF4. One difference with the original BERT is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task. The training thus only uses word masking, where the model has to predict which words were masked in certain positions of a given line of text. The training process uses the Adam optimizer BIBREF17 with polynomial decay of the learning rate $l_r=10^{-6}$ and a ramp-up period of 1000 iterations, with parameters $\beta _1=0.9$ (a common default) and RoBERTa's default $\beta _2=0.98$. Additionally, we also used a weight decay of 0.1 as well as a small dropout of 0.1 to help prevent the model from overfitting BIBREF18.
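A rough PyTorch sketch of these optimizer settings is shown below; the stand-in model, peak learning rate, total step count, and the use of AdamW for decoupled weight decay are assumptions made only to keep the example self-contained.

```python
# Rough sketch of the training hyperparameters above (beta2=0.98, weight decay
# 0.1, 1000 warm-up steps, polynomial decay). The stand-in model, peak learning
# rate and total step count are placeholders, and AdamW is used here as a
# convenient way to get decoupled weight decay.
import torch

model = torch.nn.Linear(768, 768)            # stand-in for the transformer stack
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.98), weight_decay=0.1)

WARMUP_STEPS, TOTAL_STEPS = 1000, 16000

def lr_lambda(step: int, power: float = 1.0) -> float:
    if step < WARMUP_STEPS:                  # linear ramp-up
        return step / max(1, WARMUP_STEPS)
    remaining = max(0, TOTAL_STEPS - step) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return remaining ** power                # polynomial decay towards zero

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```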
We used a computing cluster in order to efficiently pre-train our model. More specifically, the pre-training was executed on a computing cluster with 20 nodes with 4 Nvidia Tesla P100 GPUs (16 GB VRAM each) and 2 nodes with 8 Nvidia V100 GPUs (having 32 GB VRAM each). This pre-training happened in fixed batches of 8192 sentences by rescaling each GPUs batch size depending on the number of GPUs available, in order to maximally utilize the cluster without blocking it entirely for other users. The model trained for two epochs, which is over 16k batches in total. With the large batch size of 8192, this equates to 0.5M updates for a traditional BERT model. At this point, the perplexity did not decrease any further.
Evaluation
We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning.
Evaluation ::: Sentiment Analysis
We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19.
We fine-tuned RobBERT on the first 10,000 training examples as well as on the full data set. While the ULMFiT model is first fine-tuned using the unlabeled reviews before training the classifier BIBREF19, it is unclear whether BERTje also first fine-tuned on the unlabeled reviews or only used the labeled data for fine-tuning the pretrained model. It is also unclear how it dealt with reviews being longer than the maximum number of tokens allowed as input in BERT models, as the average book review length is 547 tokens, with 40% of the documents being longer than our RobBERT model can handle. For a safe comparison, we thus decided to discard the unlabeled data and only use the labeled data for training and test purposes (20,028 and 2,224 examples respectively), and compare approaches for dealing with too long input sequences. We trained our model for 2000 iterations with a batch size of 128 and a warm-up of 500 iterations, reaching a learning rate of $10^{-5}$. We found that our model performed better when trained on the last part of the book reviews than on the first part. This is likely due to this part containing concluding remarks summarizing the overall sentiment. While BERTje was slightly outperformed by ULMFiT BIBREF8, BIBREF19, we can see that RobBERT achieves better performance than both on the test set, although the performance difference is not statistically significantly better than the ULMFiT model, as can be seen in Table TABREF4.
Evaluation ::: Die/Dat Disambiguation
Aside from classic natural language processing tasks in previous subsections, we also evaluated its performance on a task that is specific to Dutch, namely disambiguating “die” and “dat” (= “that” in English). In Dutch, depending on the sentence, both terms can be either demonstrative or relative pronouns; in addition they can also be used in a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word it refers to. Distinguishing these words is a task introduced by BIBREF20, who presented multiple models trained on the Europarl BIBREF21 and SoNaR corpora BIBREF22. The results ranged from an accuracy of 75.03% on Europarl to 84.56% on SoNaR.
For this task, we use the Dutch version of the Europarl corpus BIBREF21, which we split in 1.3M utterances for training, 319k for validation, and 399k for testing. We then process every sentence by checking if it contains “die” or “dat”, and if so, add a training example for every occurrence of this word in the sentence, where a single occurrence is masked. For the test set for example, this resulted in about 289k masked sentences. We then test two different approaches for solving this task on this dataset. The first approach is making the BERT models use their MLM task and guess which word should be filled in this spot, and check if it has more confidence in either “die” or “dat” (by checking the first 2,048 guesses at most, as this seemed sufficiently large). This allows us to compare the zero-shot BERT models, i.e. without any fine-tuning after pre-training, for which the results can be seen in Table TABREF7. The second approach uses the same data, but creates two sentences by filling in the mask with both “die” and “dat”, appending both with the [SEP] token and making the model predict which of the two sentences is correct. The fine-tuning was performed using 4 Nvidia GTX 1080 Ti GPUs and evaluated against the same test set of 399k utterances. As before, we fine-tuned the model twice: once with the full training set and once with a subset of 10k utterances from the training set for illustrating the benefits of pre-training on low-resource tasks.
RobBERT outperforms previous models as well as other BERT models both with and without fine-tuning (see Table TABREF4 and Table TABREF7). It is also able to reach similar performance using less data. The fact that zero-shot RobBERT outperforms other zero-shot BERT models is also an indication that the base model has internalised more knowledge about Dutch than the other two have. The reason RobBERT and other BERT models outperform the previous RNN-based approach is likely the transformers' ability to deal better with coreference resolution BIBREF12, and by extension to better decide which word the “die” or “dat” refers to.
Code
The training and evaluation code of this paper as well as the RobBERT model and the fine-tuned models are publicly available for download on https://github.com/iPieter/RobBERT.
Future Work
There are several possible improvements as well as interesting future directions for this research, for example in training similar models. First, as BERT-based models are a very active field of research, it is interesting to experiment with changing the pre-training tasks to new unsupervised tasks as they are discovered, such as sentence order prediction BIBREF14. Second, while RobBERT is trained on lines that contain multiple sentences, it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus BIBREF16. This is unlike RoBERTa, which does put full sentences next to each other if they fit, in order to learn the long-range dependencies between words that the original BERT learned using its controversial NSP task. It could be interesting to use the processor used to create OSCAR in order to create an unshuffled version to train on, such that this technique can be used on the data set. Third, RobBERT uses the same tokenizer as RoBERTa, meaning it uses a tokenizer built for the English language. Training a new model using a custom Dutch tokenizer, e.g. using the newly released HuggingFace tokenizers library BIBREF23, could increase the performance even further. On the same note, incorporating more Unicode glyphs as separate tokens can also be beneficial for example for tasks related to conversational agents BIBREF24.
RobBERT itself could also be used in new settings to help future research. First, RobBERT could be used in different settings thanks to the renewed interest of sequence-to-sequence models due to their results on a vast range of language tasks BIBREF25, BIBREF26. These models use a BERT-like transformer stack for the encoder and depending on the task a generative model as a decoder. These advances once again highlight the flexibility of the self-attention mechanism and it might be interesting to research the re-usability of RobBERT in these type of architectures. Second, there are many Dutch language tasks that we did not examine in this paper, for which it may also be possible to achieve state-of-the-art results when fine-tuned on this pre-trained model.
Conclusion
We introduced a new language model for Dutch based on RoBERTa, called RobBERT, and showed that it outperforms earlier approaches for Dutch language tasks, as well as other BERT-based language models. We thus hope this model can serve as a base for fine-tuning on other tasks, and thus help foster new models that might advance results for Dutch language tasks.
Acknowledgements
Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. Thomas Winters is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen). Most computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI. We are especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project. We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the die/dat task. We are also grateful to BIBREF27, BIBREF28, BIBREF29, BIBREF23 for their software packages. | the Dutch section of the OSCAR corpus |
6e962f1f23061f738f651177346b38fd440ff480 | 6e962f1f23061f738f651177346b38fd440ff480_0 | Q: What is the state of the art?
Text: Introduction
The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. While recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) initially dominated the field, recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in so-called transformer models BIBREF0. This latter type of model caused a new revolution in NLP and led to popular language models like GPT-2 BIBREF1, BIBREF2 and ELMo BIBREF3. BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left-to-right or the other way around. This model was later re-implemented, critically evaluated and improved in the RoBERTa model BIBREF5.
These large-scale transformer models provide the advantage of being able to solve NLP tasks by having a common, expensive pre-training phase, followed by a smaller fine-tuning phase. The pre-training happens in an unsupervised way by providing large corpora of text in the desired language. The second phase only needs a relatively small annotated data set for fine-tuning to outperform previous popular approaches in one of a large number of possible language tasks.
While language models are usually trained on English data, some multilingual models also exist. These are usually trained on a large quantity of text in different languages. For example, Multilingual-BERT is trained on a collection of corpora in 104 different languages BIBREF4, and generalizes language components well across languages BIBREF6. However, models trained on data from one specific language usually improve the performance of multilingual models for this particular language BIBREF7, BIBREF8. Training a RoBERTa model BIBREF5 on a Dutch dataset thus has a lot of potential for increasing performance for many downstream Dutch NLP tasks. In this paper, we introduce RobBERT, a Dutch RoBERTa-based pre-trained language model, and critically test its performance using natural language tasks against other Dutch languages models.
Related Work
Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they vastly improved state-of-the-art results for English to German in an efficient manner BIBREF0. This transformer model architecture resulted in a new paradigm in NLP with the migration from sequence-to-sequence recurrent neural networks to transformer-based models by removing the recurrent component and only keeping attention. This cornerstone was used for BERT, a transformer model that obtained state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference BIBREF4. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is word masking (also called the Cloze task BIBREF9 or masked language model (MLM)), where the model has to guess which word is masked in certain position in the text. The second task is next sentence prediction. This is done by predicting if two sentences are subsequent in the corpus, or if they are randomly sampled from the corpus. These tasks allowed the model to create internal representations about a language, which could thereafter be reused for different language tasks. This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a very distinct range of tasks and still outperform previous architectures BIBREF4.
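To make the word-masking objective concrete, the toy function below hides a fraction of tokens that the model would then have to recover; real BERT/RoBERTa pipelines operate on subword ids and also substitute random tokens part of the time, which is omitted here.

```python
# Toy illustration of the MLM corruption step: hide ~15% of tokens and keep the
# originals as prediction targets. Real pipelines work on subword ids and use
# the 80/10/10 mask/random/keep recipe, omitted here for brevity.
import random

def mask_tokens(tokens, mask_token="[MASK]", prob=0.15, seed=0):
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < prob:
            corrupted.append(mask_token)
            targets.append(tok)      # this position contributes to the MLM loss
        else:
            corrupted.append(tok)
            targets.append(None)     # no loss on unmasked positions
    return corrupted, targets

print(mask_tokens("het model moet het verborgen woord raden".split()))
```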
Transformer models are also capable of generating contextualized word embeddings. These contextualized embeddings were presented by BIBREF3 and addressed the well known issue with a word's meaning being defined by its context (e.g. “a stick” versus “let's stick to”). This lack of context is something that traditional word embeddings like word2vec BIBREF10 or GloVe BIBREF11 lack, whereas BERT automatically incorporates the context a word occurs in.
Another advantage of transformer models is that attention allows them to better resolve coreferences between words BIBREF12. A typical example of the importance of coreference resolution is “The trophy doesn’t fit in the brown suitcase because it’s too big.”, where the word “it” would refer to the suitcase instead of the trophy if the last word were changed to “small” BIBREF13. Being able to resolve these coreferences is for example important for translating to languages with grammatical gender, as suitcase and trophy have different genders in French.
Although BERT has been shown to be a useful language model, it has also received some scrutiny on the training and pre-processing of the language model. As mentioned before, BERT uses next sentence prediction (NSP) as one of its two training tasks. In NSP, the model has to predict whether two sentences follow each other in the training text, or are just randomly selected from the corpora. The authors of RoBERTa BIBREF5 showed that while this task made the model achieve a better performance, it was not due to its intended reason, as it might merely predict relatedness rather than subsequent sentences. That BIBREF4 trained a better model when using NSP than without NSP is likely due to the model learning long-range dependencies in text from its inputs, which are longer than just the single sentence on itself. As such, the RoBERTa model uses only the MLM task, and uses multiple full sentences in every input. Other research improved the NSP task by instead making the model predict the correct order of two sentences, where the model thus has to predict whether the sentences occur in the given order in the corpus, or occur in flipped order BIBREF14.
BIBREF4 also presented a multilingual model (mBERT) with the same architecture as BERT, but trained on Wikipedia corpora in 104 languages. Unfortunately, the quality of these multilingual embeddings is often considered worse than their monolingual counterparts. BIBREF15 illustrated this difference in quality for German and English models in a generative setting. The monolingual French CamemBERT model BIBREF7 also compared their model to mBERT, which performed poorer on all tasks. More recently, BIBREF8 also showed similar results for Dutch using their BERTje model, outperforming multilingual BERT in a wide range of tasks, such as sentiment analysis and part-of-speech tagging. Since this work is concurrent with ours, we compare our results with BERTje in this paper.
Pre-training RobBERT
This section describes the data and training regime we used to train our Dutch RoBERTa-based language model called RobBERT.
Pre-training RobBERT ::: Data
We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text.
Our data differs in several ways from the data used to train BERTje, a BERT-based Dutch language model BIBREF8. Firstly, they trained the model on an assembly of multiple Dutch corpora totalling only 12 GB. Secondly, they used WordPiece as subword embeddings, since this is what the original BERT architecture uses. RobBERT on the other hand uses Byte Pair Encoding (BPE), which is also used by GPT-2 BIBREF2 and RoBERTa BIBREF5.
Pre-training RobBERT ::: Training
RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT BIBREF5. The architecture of our language model is thus equal to the original BERT model with 12 self-attention layers with 12 heads BIBREF4. One difference with the original BERT is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task. The training thus only uses word masking, where the model has to predict which words were masked in certain positions of a given line of text. The training process uses the Adam optimizer BIBREF17 with polynomial decay of the learning rate $l_r=10^{-6}$ and a ramp-up period of 1000 iterations, with parameters $\beta _1=0.9$ (a common default) and RoBERTa's default $\beta _2=0.98$. Additionally, we also used a weight decay of 0.1 as well as a small dropout of 0.1 to help prevent the model from overfitting BIBREF18.
We used a computing cluster in order to efficiently pre-train our model. More specifically, the pre-training was executed on a computing cluster with 20 nodes with 4 Nvidia Tesla P100 GPUs (16 GB VRAM each) and 2 nodes with 8 Nvidia V100 GPUs (having 32 GB VRAM each). This pre-training happened in fixed batches of 8192 sentences by rescaling each GPUs batch size depending on the number of GPUs available, in order to maximally utilize the cluster without blocking it entirely for other users. The model trained for two epochs, which is over 16k batches in total. With the large batch size of 8192, this equates to 0.5M updates for a traditional BERT model. At this point, the perplexity did not decrease any further.
Evaluation
We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning.
Evaluation ::: Sentiment Analysis
We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19.
We fine-tuned RobBERT on the first 10,000 training examples as well as on the full data set. While the ULMFiT model is first fine-tuned using the unlabeled reviews before training the classifier BIBREF19, it is unclear whether BERTje also first fine-tuned on the unlabeled reviews or only used the labeled data for fine-tuning the pretrained model. It is also unclear how it dealt with reviews being longer than the maximum number of tokens allowed as input in BERT models, as the average book review length is 547 tokens, with 40% of the documents being longer than our RobBERT model can handle. For a safe comparison, we thus decided to discard the unlabeled data and only use the labeled data for training and test purposes (20,028 and 2,224 examples respectively), and compare approaches for dealing with too long input sequences. We trained our model for 2000 iterations with a batch size of 128 and a warm-up of 500 iterations, reaching a learning rate of $10^{-5}$. We found that our model performed better when trained on the last part of the book reviews than on the first part. This is likely due to this part containing concluding remarks summarizing the overall sentiment. While BERTje was slightly outperformed by ULMFiT BIBREF8, BIBREF19, we can see that RobBERT achieves better performance than both on the test set, although the performance difference is not statistically significantly better than the ULMFiT model, as can be seen in Table TABREF4.
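A hedged sketch of such a fine-tuning run with the Hugging Face Transformers library is given below; the checkpoint identifier is a placeholder for the released model, loading and tokenising DBRD itself is omitted, and truncating from the left reflects the observation above that the last part of a review works best.

```python
# Hedged fine-tuning sketch with Hugging Face Transformers. The checkpoint name
# is a placeholder for the released RobBERT model, and loading/tokenising DBRD
# itself is omitted; plug a tokenised dataset into the Trainer to run it.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "pdelobelle/robBERT-base"          # placeholder checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.truncation_side = "left"              # keep the end of long reviews (recent Transformers versions)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="robbert-dbrd",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=2000,
)
trainer = Trainer(model=model, args=args, train_dataset=None)  # supply a tokenised DBRD split here
```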
Evaluation ::: Die/Dat Disambiguation
Aside from classic natural language processing tasks in previous subsections, we also evaluated its performance on a task that is specific to Dutch, namely disambiguating “die” and “dat” (= “that” in English). In Dutch, depending on the sentence, both terms can be either demonstrative or relative pronouns; in addition they can also be used in a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word it refers to. Distinguishing these words is a task introduced by BIBREF20, who presented multiple models trained on the Europarl BIBREF21 and SoNaR corpora BIBREF22. The results ranged from an accuracy of 75.03% on Europarl to 84.56% on SoNaR.
For this task, we use the Dutch version of the Europarl corpus BIBREF21, which we split in 1.3M utterances for training, 319k for validation, and 399k for testing. We then process every sentence by checking if it contains “die” or “dat”, and if so, add a training example for every occurrence of this word in the sentence, where a single occurrence is masked. For the test set for example, this resulted in about 289k masked sentences. We then test two different approaches for solving this task on this dataset. The first approach is making the BERT models use their MLM task and guess which word should be filled in this spot, and check if it has more confidence in either “die” or “dat” (by checking the first 2,048 guesses at most, as this seemed sufficiently large). This allows us to compare the zero-shot BERT models, i.e. without any fine-tuning after pre-training, for which the results can be seen in Table TABREF7. The second approach uses the same data, but creates two sentences by filling in the mask with both “die” and “dat”, appending both with the [SEP] token and making the model predict which of the two sentences is correct. The fine-tuning was performed using 4 Nvidia GTX 1080 Ti GPUs and evaluated against the same test set of 399k utterances. As before, we fine-tuned the model twice: once with the full training set and once with a subset of 10k utterances from the training set for illustrating the benefits of pre-training on low-resource tasks.
RobBERT outperforms previous models as well as other BERT models both with and without fine-tuning (see Table TABREF4 and Table TABREF7). It is also able to reach similar performance using less data. The fact that zero-shot RobBERT outperforms other zero-shot BERT models is also an indication that the base model has internalised more knowledge about Dutch than the other two have. The reason RobBERT and other BERT models outperform the previous RNN-based approach is likely the transformers' ability to deal better with coreference resolution BIBREF12, and by extension to better decide which word the “die” or “dat” refers to.
Code
The training and evaluation code of this paper as well as the RobBERT model and the fine-tuned models are publicly available for download on https://github.com/iPieter/RobBERT.
Future Work
There are several possible improvements as well as interesting future directions for this research, for example in training similar models. First, as BERT-based models are a very active field of research, it is interesting to experiment with changing the pre-training tasks to new unsupervised tasks as they are discovered, such as sentence order prediction BIBREF14. Second, while RobBERT is trained on lines that contain multiple sentences, it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus BIBREF16. This is unlike RoBERTa, which does put full sentences next to each other if they fit, in order to learn the long-range dependencies between words that the original BERT learned using its controversial NSP task. It could be interesting to use the processor used to create OSCAR in order to create an unshuffled version to train on, such that this technique can be used on the data set. Third, RobBERT uses the same tokenizer as RoBERTa, meaning it uses a tokenizer built for the English language. Training a new model using a custom Dutch tokenizer, e.g. using the newly released HuggingFace tokenizers library BIBREF23, could increase the performance even further. On the same note, incorporating more Unicode glyphs as separate tokens can also be beneficial for example for tasks related to conversational agents BIBREF24.
RobBERT itself could also be used in new settings to help future research. First, RobBERT could be used in different settings thanks to the renewed interest of sequence-to-sequence models due to their results on a vast range of language tasks BIBREF25, BIBREF26. These models use a BERT-like transformer stack for the encoder and depending on the task a generative model as a decoder. These advances once again highlight the flexibility of the self-attention mechanism and it might be interesting to research the re-usability of RobBERT in these type of architectures. Second, there are many Dutch language tasks that we did not examine in this paper, for which it may also be possible to achieve state-of-the-art results when fine-tuned on this pre-trained model.
Conclusion
We introduced a new language model for Dutch based on RoBERTa, called RobBERT, and showed that it outperforms earlier approaches for Dutch language tasks, as well as other BERT-based language models. We thus hope this model can serve as a base for fine-tuning on other tasks, and thus help foster new models that might advance results for Dutch language tasks.
Acknowledgements
Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. Thomas Winters is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen). Most computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI. We are especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project. We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the die/dat task. We are also grateful to BIBREF27, BIBREF28, BIBREF29, BIBREF23 for their software packages. | BERTje BIBREF8, an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19., mBERT |
594a6bf37eab64a16c6a05c365acc100e38fcff1 | 594a6bf37eab64a16c6a05c365acc100e38fcff1_0 | Q: What language tasks did they experiment on?
Text: Introduction
The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. While recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) initially dominated the field, recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in so-called transformer models BIBREF0. This latter type of model caused a new revolution in NLP and led to popular language models like GPT-2 BIBREF1, BIBREF2 and ELMo BIBREF3. BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left-to-right or the other way around. This model was later re-implemented, critically evaluated and improved in the RoBERTa model BIBREF5.
These large-scale transformer models provide the advantage of being able to solve NLP tasks by having a common, expensive pre-training phase, followed by a smaller fine-tuning phase. The pre-training happens in an unsupervised way by providing large corpora of text in the desired language. The second phase only needs a relatively small annotated data set for fine-tuning to outperform previous popular approaches in one of a large number of possible language tasks.
While language models are usually trained on English data, some multilingual models also exist. These are usually trained on a large quantity of text in different languages. For example, Multilingual-BERT is trained on a collection of corpora in 104 different languages BIBREF4, and generalizes language components well across languages BIBREF6. However, models trained on data from one specific language usually improve the performance of multilingual models for this particular language BIBREF7, BIBREF8. Training a RoBERTa model BIBREF5 on a Dutch dataset thus has a lot of potential for increasing performance for many downstream Dutch NLP tasks. In this paper, we introduce RobBERT, a Dutch RoBERTa-based pre-trained language model, and critically test its performance using natural language tasks against other Dutch languages models.
Related Work
Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they vastly improved state-of-the-art results for English to German in an efficient manner BIBREF0. This transformer model architecture resulted in a new paradigm in NLP with the migration from sequence-to-sequence recurrent neural networks to transformer-based models by removing the recurrent component and only keeping attention. This cornerstone was used for BERT, a transformer model that obtained state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference BIBREF4. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is word masking (also called the Cloze task BIBREF9 or masked language model (MLM)), where the model has to guess which word is masked in certain position in the text. The second task is next sentence prediction. This is done by predicting if two sentences are subsequent in the corpus, or if they are randomly sampled from the corpus. These tasks allowed the model to create internal representations about a language, which could thereafter be reused for different language tasks. This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a very distinct range of tasks and still outperform previous architectures BIBREF4.
Transformer models are also capable of generating contextualized word embeddings. These contextualized embeddings were presented by BIBREF3 and addressed the well known issue with a word's meaning being defined by its context (e.g. “a stick” versus “let's stick to”). This lack of context is something that traditional word embeddings like word2vec BIBREF10 or GloVe BIBREF11 lack, whereas BERT automatically incorporates the context a word occurs in.
Another advantage of transformer models is that attention allows them to better resolve coreferences between words BIBREF12. A typical example of the importance of coreference resolution is “The trophy doesn’t fit in the brown suitcase because it’s too big.”, where the word “it” would refer to the suitcase instead of the trophy if the last word were changed to “small” BIBREF13. Being able to resolve these coreferences is for example important for translating to languages with grammatical gender, as suitcase and trophy have different genders in French.
Although BERT has been shown to be a useful language model, it has also received some scrutiny on the training and pre-processing of the language model. As mentioned before, BERT uses next sentence prediction (NSP) as one of its two training tasks. In NSP, the model has to predict whether two sentences follow each other in the training text, or are just randomly selected from the corpora. The authors of RoBERTa BIBREF5 showed that while this task made the model achieve a better performance, it was not due to its intended reason, as it might merely predict relatedness rather than subsequent sentences. That BIBREF4 trained a better model when using NSP than without NSP is likely due to the model learning long-range dependencies in text from its inputs, which are longer than just the single sentence on itself. As such, the RoBERTa model uses only the MLM task, and uses multiple full sentences in every input. Other research improved the NSP task by instead making the model predict the correct order of two sentences, where the model thus has to predict whether the sentences occur in the given order in the corpus, or occur in flipped order BIBREF14.
BIBREF4 also presented a multilingual model (mBERT) with the same architecture as BERT, but trained on Wikipedia corpora in 104 languages. Unfortunately, the quality of these multilingual embeddings is often considered worse than their monolingual counterparts. BIBREF15 illustrated this difference in quality for German and English models in a generative setting. The monolingual French CamemBERT model BIBREF7 also compared their model to mBERT, which performed poorer on all tasks. More recently, BIBREF8 also showed similar results for Dutch using their BERTje model, outperforming multilingual BERT in a wide range of tasks, such as sentiment analysis and part-of-speech tagging. Since this work is concurrent with ours, we compare our results with BERTje in this paper.
Pre-training RobBERT
This section describes the data and training regime we used to train our Dutch RoBERTa-based language model called RobBERT.
Pre-training RobBERT ::: Data
We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text.
Our data differs in several ways from the data used to train BERTje, a BERT-based Dutch language model BIBREF8. Firstly, they trained the model on an assembly of multiple Dutch corpora totalling only 12 GB. Secondly, they used WordPiece as subword embeddings, since this is what the original BERT architecture uses. RobBERT on the other hand uses Byte Pair Encoding (BPE), which is also used by GPT-2 BIBREF2 and RoBERTa BIBREF5.
Pre-training RobBERT ::: Training
RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT BIBREF5. The architecture of our language model is thus equal to the original BERT model with 12 self-attention layers with 12 heads BIBREF4. One difference with the original BERT is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task. The training thus only uses word masking, where the model has to predict which words were masked in certain positions of a given line of text. The training process uses the Adam optimizer BIBREF17 with polynomial decay of the learning rate $l_r=10^{-6}$ and a ramp-up period of 1000 iterations, with parameters $\beta _1=0.9$ (a common default) and RoBERTa's default $\beta _2=0.98$. Additionally, we also used a weight decay of 0.1 as well as a small dropout of 0.1 to help prevent the model from overfitting BIBREF18.
We used a computing cluster in order to efficiently pre-train our model. More specifically, the pre-training was executed on a computing cluster with 20 nodes with 4 Nvidia Tesla P100 GPUs (16 GB VRAM each) and 2 nodes with 8 Nvidia V100 GPUs (having 32 GB VRAM each). This pre-training happened in fixed batches of 8192 sentences by rescaling each GPUs batch size depending on the number of GPUs available, in order to maximally utilize the cluster without blocking it entirely for other users. The model trained for two epochs, which is over 16k batches in total. With the large batch size of 8192, this equates to 0.5M updates for a traditional BERT model. At this point, the perplexity did not decrease any further.
Evaluation
We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning.
Evaluation ::: Sentiment Analysis
We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19.
We fine-tuned RobBERT on the first 10,000 training examples as well as on the full data set. While the ULMFiT model is first fine-tuned using the unlabeled reviews before training the classifier BIBREF19, it is unclear whether BERTje also first fine-tuned on the unlabeled reviews or only used the labeled data for fine-tuning the pretrained model. It is also unclear how it dealt with reviews being longer than the maximum number of tokens allowed as input in BERT models, as the average book review length is 547 tokens, with 40% of the documents being longer than our RobBERT model can handle. For a safe comparison, we thus decided to discard the unlabeled data and only use the labeled data for training and test purposes (20,028 and 2,224 examples respectively), and compare approaches for dealing with too long input sequences. We trained our model for 2000 iterations with a batch size of 128 and a warm-up of 500 iterations, reaching a learning rate of $10^{-5}$. We found that our model performed better when trained on the last part of the book reviews than on the first part. This is likely due to this part containing concluding remarks summarizing the overall sentiment. While BERTje was slightly outperformed by ULMFiT BIBREF8, BIBREF19, we can see that RobBERT achieves better performance than both on the test set, although the performance difference is not statistically significantly better than the ULMFiT model, as can be seen in Table TABREF4.
Evaluation ::: Die/Dat Disambiguation
Aside from classic natural language processing tasks in previous subsections, we also evaluated its performance on a task that is specific to Dutch, namely disambiguating “die” and “dat” (= “that” in English). In Dutch, depending on the sentence, both terms can be either demonstrative or relative pronouns; in addition they can also be used in a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word it refers to. Distinguishing these words is a task introduced by BIBREF20, who presented multiple models trained on the Europarl BIBREF21 and SoNaR corpora BIBREF22. The results ranged from an accuracy of 75.03% on Europarl to 84.56% on SoNaR.
For this task, we use the Dutch version of the Europarl corpus BIBREF21, which we split in 1.3M utterances for training, 319k for validation, and 399k for testing. We then process every sentence by checking if it contains “die” or “dat”, and if so, add a training example for every occurrence of this word in the sentence, where a single occurrence is masked. For the test set for example, this resulted in about 289k masked sentences. We then test two different approaches for solving this task on this dataset. The first approach is making the BERT models use their MLM task and guess which word should be filled in this spot, and check if it has more confidence in either “die” or “dat” (by checking the first 2,048 guesses at most, as this seemed sufficiently large). This allows us to compare the zero-shot BERT models, i.e. without any fine-tuning after pre-training, for which the results can be seen in Table TABREF7. The second approach uses the same data, but creates two sentences by filling in the mask with both “die” and “dat”, appending both with the [SEP] token and making the model predict which of the two sentences is correct. The fine-tuning was performed using 4 Nvidia GTX 1080 Ti GPUs and evaluated against the same test set of 399k utterances. As before, we fine-tuned the model twice: once with the full training set and once with a subset of 10k utterances from the training set for illustrating the benefits of pre-training on low-resource tasks.
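The first, zero-shot approach can be sketched as below: mask the occurrence and compare the masked-LM scores of the two candidates. The checkpoint name is a placeholder for the released model, and with a BPE vocabulary one would in practice also score the leading-space variants of the candidate tokens.

```python
# Sketch of the zero-shot die/dat comparison: mask the pronoun and compare the
# masked-LM scores for the two candidates. The checkpoint name is a placeholder;
# with a BPE vocabulary the leading-space token variants (e.g. "Gdie") should
# also be considered, which is skipped here for clarity.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "pdelobelle/robBERT-base"          # placeholder checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

sentence = f"Dit is het boek {tokenizer.mask_token} ik gisteren las."
inputs = tokenizer(sentence, return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_index]

scores = {w: logits[tokenizer.convert_tokens_to_ids(w)].item() for w in ("die", "dat")}
print(max(scores, key=scores.get), scores)
```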
RobBERT outperforms previous models as well as the other BERT models, both with and without fine-tuning (see Table TABREF4 and Table TABREF7). It is also able to reach similar performance using less data. The fact that zero-shot RobBERT outperforms the other zero-shot BERT models is also an indication that the base model has internalised more knowledge about Dutch than the other two have. The reason RobBERT and the other BERT models outperform the previous RNN-based approach is likely the transformer's better ability to handle coreference resolution BIBREF12, and by extension to decide which word the “die” or “dat” refers to.
Code
The training and evaluation code of this paper as well as the RobBERT model and the fine-tuned models are publicly available for download on https://github.com/iPieter/RobBERT.
Future Work
There are several possible improvements as well as interesting future directions for this research, for example in training similar models. First, as BERT-based models are a very active field of research, it would be interesting to experiment with replacing the pre-training tasks with new unsupervised tasks as they are discovered, such as sentence order prediction BIBREF14. Second, while RobBERT is trained on lines that contain multiple sentences, it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus BIBREF16. This is unlike RoBERTa, which does put full sentences next to each other if they fit, in order to learn the long-range dependencies between words that the original BERT learned using its controversial NSP task. It could be interesting to use the processor that was used to create OSCAR in order to create an unshuffled version to train on, such that this technique can also be applied to this data set. Third, RobBERT uses the same tokenizer as RoBERTa, meaning it uses a tokenizer built for the English language. Training a new model using a custom Dutch tokenizer, e.g. using the newly released HuggingFace tokenizers library BIBREF23, could increase the performance even further. On the same note, incorporating more Unicode glyphs as separate tokens could also be beneficial, for example for tasks related to conversational agents BIBREF24.
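As a hedged illustration of the third point, training a Dutch byte-level BPE tokenizer with the HuggingFace tokenizers library could look roughly as follows; the corpus path, vocabulary size, and output directory are placeholders and not values from the paper.

```python
# Minimal sketch: training a Dutch byte-level BPE tokenizer with HuggingFace tokenizers.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["oscar_nl.txt"],          # placeholder: plain-text dump of the Dutch OSCAR corpus
    vocab_size=50_000,               # placeholder vocabulary size
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("dutch-tokenizer")   # writes vocab.json and merges.txt (directory must exist)
```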
RobBERT itself could also be used in new settings to help future research. First, given the renewed interest in sequence-to-sequence models due to their results on a vast range of language tasks BIBREF25, BIBREF26, RobBERT could serve as a component of such models, which use a BERT-like transformer stack for the encoder and, depending on the task, a generative model as the decoder. These advances once again highlight the flexibility of the self-attention mechanism, and it might be interesting to research the re-usability of RobBERT in these types of architectures. Second, there are many Dutch language tasks that we did not examine in this paper, for which it may also be possible to achieve state-of-the-art results by fine-tuning this pre-trained model.
Conclusion
We introduced a new language model for Dutch based on RoBERTa, called RobBERT, and showed that it outperforms earlier approaches for Dutch language tasks, as well as other BERT-based language models. We therefore hope this model can serve as a base for fine-tuning on other tasks, and thus help foster new models that advance results for Dutch language tasks.
Acknowledgements
Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. Thomas Winters is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen). Most computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI. We are especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project. We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the die/dat task. We are also grateful to BIBREF27, BIBREF28, BIBREF29, BIBREF23 for their software packages. | sentiment analysis, the disambiguation of demonstrative pronouns, |
d79d897f94e666d5a6fcda3b0c7e807c8fad109e | d79d897f94e666d5a6fcda3b0c7e807c8fad109e_0 | Q: What result from experiments suggest that natural language based agents are more robust?
Text: Introduction
“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations."
(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)
Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality".
The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest representing the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently to one another BIBREF5.
The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.
Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.
In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work.
Preliminaries ::: Reinforcement Learning
In Reinforcement Learning the goal is to learn a policy $\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal {A}$, with the objective to maximize a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.
Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely ${L(\theta ) = \mathbb {E}_{s, a, r, s^{\prime }} \left[ \left( r + \gamma \max _{a^{\prime }} Q(s^{\prime }, a^{\prime }; \theta _{\text{target}}) - Q(s, a; \theta ) \right)^2 \right]}$, where $\theta $ denotes the network parameters and $\theta _{\text{target}}$ the parameters of a periodically updated target network.
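A minimal PyTorch sketch of this Bellman-error loss is given below; the Q-networks and the replay-buffer tensors are assumed to be defined elsewhere, and this is an illustration rather than the authors' implementation.

```python
# Sketch of the DQN Bellman-error loss described above. `q_net` and `target_net` are
# torch.nn.Module Q-networks; tensors come from a replay buffer.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, states, actions, rewards, next_states, dones, gamma=0.99):
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values                # max_a' Q(s', a'; theta_target)
        target = rewards + gamma * (1.0 - dones) * next_q                 # Bellman target
    return F.mse_loss(q_values, target)
```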
Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with the addition of a trust-region update rule. The policy gradient theorem updates the policy by ${\nabla _{\theta } J(\theta ) = \mathbb {E}^{\pi _{\theta }} \left[ \nabla _{\theta } \log \pi _{\theta }(a | s) \, Q^{\pi _{\theta }}(s, a) \right]}$, and PPO additionally constrains each update by clipping the probability ratio between the new and old policies.
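A sketch of PPO's clipped surrogate objective, which implements the trust-region constraint mentioned above, could look as follows; the log-probabilities and advantage estimates are assumed to be computed elsewhere.

```python
# Sketch of PPO's clipped surrogate loss.
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    ratio = torch.exp(log_probs - old_log_probs)          # pi_theta(a|s) / pi_theta_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()          # maximize surrogate -> minimize negative
```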
Preliminaries ::: Deep Learning for NLP
A word embedding is a mapping from a word $w$ to a vector $\mathbf {w} \in \mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf {w} \in \mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings.
The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.
Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output.
Semantic Representation Methods
Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.
The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.
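As a small illustration of the conversion described above, the sketch below turns an (H, W) map of class ids into an N-channel binary array, one channel per class; the toy map and class count are placeholders.

```python
# Converting a (H, W) class-id map into an N-channel binary representation.
import numpy as np

def to_channels(class_map: np.ndarray, num_classes: int) -> np.ndarray:
    """class_map: (H, W) integer array of class ids -> (num_classes, H, W) binary array."""
    channels = np.zeros((num_classes, *class_map.shape), dtype=np.uint8)
    for c in range(num_classes):
        channels[c] = (class_map == c).astype(np.uint8)
    return channels

seg = np.array([[0, 1], [2, 2]])                  # toy 2x2 "segmentation map" with 3 classes
print(to_channels(seg, num_classes=3).shape)      # (3, 2, 2)
```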
In this paper we propose a fourth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich and flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representations in a first-person shooter environment.
Semantic State Representations in the Doom Environment
In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though the semantic segmentation and feature vector representations convey similar information about the state, the natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.
The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.
In order to incorporate a natural language representation into the VizDoom environment, we constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on the positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as the distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences for different states in the environment.
Semantic State Representations in the Doom Environment ::: Experiments
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left or shoot. Finally, in the “super” scenario both melee and fireball shooting monsters are repeatedly spawned all over the room. The room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.
Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.
In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super” scenario was positively biased toward image-based representations. This was done by adding a large amount of items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is in contrast to image-based representations, which did not change in dimension.
Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representation methods. It can be seen that the NLP representation outperforms the other methods. This is despite the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations renders inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large number of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.
In order to verify that the performance of the natural language representation was not due to extensive discretization of patches, we conducted experiments increasing the number of horizontal patches, ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the amount of discretization of patches did not affect the performance of the NLP agent, which remained a superior representation compared to the rest.
To conclude, our experiments suggest that NLP representations, though they describe the same raw information as the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we have only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents.
Related Work
Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning includes Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.
There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.
BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.
More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.
Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44.
Discussion and Future Work
Our results indicate that natural language can outperform, and sometimes even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract away notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:
Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.
Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.
Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.
An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.
Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal.
Appendix ::: VizDoom
VizDoom is a "Doom" based research environment that was developed at the Poznań University of Technology. It is based on the "ZDoom" game executable and includes a Python based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game needed to be implemented. ViZDoom emulates the "Doom" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations, etc. Each game frame contains "labels", which contain data on visible objects in the game (the player, enemies, medkits, etc.). We used "Doom Builder" in order to edit some of the scenarios and design a new one. Environment rewards are presented in doom-scenarios-table.
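A minimal sketch of querying this API is shown below. The scenario configuration file is a placeholder, and the exact set of attributes exposed on label objects depends on the ViZDoom version, so the attribute access is illustrative.

```python
# Sketch of the ViZDoom Python API loop described above.
from vizdoom import DoomGame

game = DoomGame()
game.load_config("basic.cfg")         # placeholder scenario configuration
game.set_labels_buffer_enabled(True)  # expose per-object "labels" in each state
game.init()

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()
    for label in state.labels:        # visible objects: player, enemies, medkits, ...
        print(label.object_name)      # attribute names may vary between ViZDoom versions
    reward = game.make_action([0, 0, 1])   # e.g. [turn_left, turn_right, attack]
game.close()
```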
Appendix ::: Natural language State Space
A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as “close” or “far”. However, objects that are outside the player's field of vision cannot be part of the state. Furthermore, a human would most likely refer to an object's location relative to themselves, using directions such as “right” or “left”.
Appendix ::: Language model implementation
To convert each frame to a natural language state representation, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of objects of each type inside every patch, and parse the result as a sentence. Whether an object is close or far is determined by calculating its distance to the player and comparing it against two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics (a code sketch of this parsing step is given after the list):
The screen can be divided into patches equally, or by predetermined ratios. Here, our main guideline was to keep the “front” patch narrow enough so that it can be used as “sights”.
Our initial experiment was with 3 patches, and later we added 2 more patches classified as “outer left” and “outer right”. In our experiments we have tested up to 51 patches, referred to as left or right patches with corresponding numbers.
We used 2 thresholds, which allowed us to classify the distance of an object from the player as “close”, “mid”, and “far”. Depending on the task, the threshold values can be changed, and more thresholds can be added.
Different states might generate sentences of different lengths. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game scenarios.
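The sketch below (not the authors' code) illustrates this patch-and-threshold parsing. Objects are assumed to be given as (name, screen x-coordinate, distance) tuples; the patch boundaries and distance thresholds are illustrative values, not the ones used in the experiments.

```python
# Sketch of the patch-based natural language parser described above.
from collections import Counter

PATCHES = [("far left", 0.0), ("left", 0.2), ("front", 0.45), ("right", 0.55), ("far right", 0.8)]
CLOSE, MID = 150.0, 400.0     # illustrative distance thresholds

def describe_frame(objects, screen_width=640):
    patches = {name: Counter() for name, _ in PATCHES}
    for name, screen_x, distance in objects:
        rel = screen_x / screen_width
        patch = next(p for p, start in reversed(PATCHES) if rel >= start)
        dist = "close" if distance < CLOSE else "mid" if distance < MID else "far"
        patches[patch][f"{dist} {name}"] += 1
    parts = []
    for patch, counts in patches.items():
        for desc, n in counts.items():
            parts.append(f"{n} {desc}{'s' if n > 1 else ''} to your {patch}")
    return "There is nothing of interest." if not parts else "You see " + ", ".join(parts) + "."

frame = [("monster", 300, 120.0), ("medkit", 610, 500.0), ("monster", 20, 380.0)]
print(describe_frame(frame))
# "You see 1 close monster to your front, 1 mid monster to your far left, 1 far medkit to your far right."
```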
After the sentence describing the state is generated, it is transformed into an embedded representation. Words that were not found in the vocabulary were replaced with an “OOV” vector. All word vectors were then concatenated into an NxDx1 matrix representing the state. We experimented with both Word2vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero-padded, while longer ones are trimmed.
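A small sketch of this step follows, with a toy stand-in for the pretrained GloVe table: out-of-vocabulary words map to a dedicated “OOV” vector, short sentences are zero-padded, and long ones are trimmed to the maximum length N. The values N = 200 and D = 50 match the 200x50 state matrix mentioned in the model description below.

```python
# Sketch: turning a state sentence into a fixed-size (N, D) embedding matrix.
import numpy as np

D = 50                                    # embedding dimension
N = 200                                   # maximum sentence length (hyperparameter)
glove = {"monster": np.random.randn(D), "close": np.random.randn(D)}   # toy embedding table
oov = np.random.randn(D)                  # dedicated vector for out-of-vocabulary words

def sentence_to_matrix(sentence: str) -> np.ndarray:
    tokens = sentence.lower().split()[:N]                 # trim overly long sentences
    vectors = [glove.get(tok, oov) for tok in tokens]     # OOV words -> "OOV" vector
    vectors += [np.zeros(D)] * (N - len(vectors))         # zero-pad short sentences
    return np.stack(vectors)                              # shape: (N, D)

print(sentence_to_matrix("There is a close monster in front of you").shape)   # (200, 50)
```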
Appendix ::: Model implementation
All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-Values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures:
A convolutional network was used for the raw image and semantic segmentation based agents (a PyTorch sketch of this network and of the TextCNN below is given after the list). VizDoom's raw output is a 640x480x3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640x480x1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consisted of two convolutional layers, two hidden linear layers, and an output layer. The first convolutional layer has 8 6x6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3x3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both followed by ReLU activation. The output layer's size is the number of actions the agent has available in the trained scenario.
A fully connected network was used for the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was built using features we extracted from the VizDoom API, and its dimension was 90x1. The network is made up of two fully connected layers, each followed by a ReLU activation. The first layer has 32 units, and the second one has 16 units. The output layer's size was the number of actions available to the agent.
A TextCNN was used for the natural language based agent. As previously mentioned, each natural language state is transformed into a 200x50x1 matrix. The first layers of the TextCNN are convolutional layers with 8 filters each, designed to scan the input sentence and return convolution outputs for sequences of varying lengths. The filters vary in width, such that each of them learns to identify word sequences of a different length; longer filters are better able to extract features from longer word sequences. The filters we have chosen have the following dimensions: 3x50x1, 4x50x1, 5x50x1, 8x50x1, 11x50x1. Following the convolution layer there is a ReLU activation and a max-pool layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second one has 16 units. Both of them are followed by ReLU activation.
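The following PyTorch sketch shows how these architectures could be laid out (a simplified reconstruction, not the authors' code); the flattened dimension of the image network depends on the chosen input resolution and is left as a parameter.

```python
# Sketch of the image/segmentation CNN and the TextCNN described in the list above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageNet(nn.Module):
    def __init__(self, in_channels: int, num_actions: int, flat_dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=6, stride=3), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat_dim, 32), nn.ReLU(),   # flat_dim depends on input resolution
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, num_actions),
        )

    def forward(self, x):
        return self.head(self.conv(x))

class TextCNN(nn.Module):
    def __init__(self, num_actions: int, embed_dim: int = 50,
                 widths=(3, 4, 5, 8, 11), n_filters: int = 8):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(w, embed_dim)) for w in widths
        )
        self.head = nn.Sequential(
            nn.Linear(n_filters * len(widths), 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, num_actions),
        )

    def forward(self, x):
        # x: (batch, 1, sentence_length, embed_dim), e.g. (B, 1, 200, 50)
        pooled = [F.relu(conv(x)).squeeze(3).max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))
```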
All architectures have the same output, regardless of the input type. The DQN network is a regression network whose output size is the number of available actions. The PPO agent has 2 networks, an actor and a critic. The actor network has a Softmax output with size equal to the number of available actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. | Average reward across 5 seeds shows that NLP representations are robust to changes in the environment as well as to task-nuisances |
599d9ca21bbe2dbe95b08cf44dfc7537bde06f98 | 599d9ca21bbe2dbe95b08cf44dfc7537bde06f98_0 | Q: How better is performance of natural language based agents in experiments?
Text: Introduction
“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations."
(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)
Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality".
The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.
The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.
Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.
In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work.
Preliminaries ::: Reinforcement Learning
In Reinforcement Learning the goal is to learn a policy $\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal {A}$, with the objective to maximize a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.
Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach, to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely
Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with an addition of a trust-region update rule. The policy gradient theorem updates the policy by
Preliminaries ::: Deep Learning for NLP
A word embedding is a mapping from a word $w$ to a vector $\mathbf {w} \in \mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf {w} \in \mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings.
The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.
Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output.
Semantic Representation Methods
Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.
The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.
In this paper we propose a forth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich as well as flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter enviornment.
Semantic State Representations in the Doom Environment
In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.
The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.
In order to incorporate natural language representation to the VizDoom environment we've constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the enviornment.
Semantic State Representations in the Doom Environment ::: Experiments
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left or shoot. Finally, in the “super" scenario both melee and fireball shooting monsters are repeatably spawned all over the room. the room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.
Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.
In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super" scenario was positively biased toward image-based representations. This was done by adding a large amount items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is contrary to image-based representations, which did not change in dimension.
Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.
In order to verify the performance of the natural language representation was not due to extensive discretization of patches, we've conducted experiments increasing the number of horizontal patches - ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the amount of discretization of patches did not affect the performance of the NLP agent, remaining a superior representation compared to the rest.
To conclude, our experiments suggest that NLP representations, though they describe the same raw information of the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we've only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents.
Related Work
Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning include Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.
There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.
BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.
More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.
Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44.
Discussion and Future Work
Our results indicate that natural language can outperform, and sometime even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:
Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.
Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.
Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.
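As a minimal illustration of this last point, the sketch below converts an unordered dictionary of raw vital-sign features into a short natural language description. The feature names, thresholds, and phrasing are hypothetical and serve only to show how permutation-sensitive raw features can be folded into text.

```python
def vitals_to_text(vitals):
    """Describe a dictionary of raw patient features in natural language.

    Feature names and thresholds are illustrative placeholders, not
    clinically validated ranges.
    """
    phrases = []
    hr = vitals.get("heart_rate")
    if hr is not None:
        level = "elevated" if hr > 100 else "low" if hr < 60 else "normal"
        phrases.append(f"heart rate is {level} at {hr} bpm")
    temp = vitals.get("temperature")
    if temp is not None:
        phrases.append(f"temperature is {'febrile' if temp > 38.0 else 'normal'} at {temp:.1f} degrees")
    spo2 = vitals.get("spo2")
    if spo2 is not None:
        phrases.append(f"oxygen saturation is {'low' if spo2 < 94 else 'normal'} at {spo2} percent")
    if not phrases:
        return "No vitals recorded."
    return "The patient's " + ", ".join(phrases) + "."

# The output does not depend on the order of the input features.
print(vitals_to_text({"spo2": 91, "heart_rate": 112, "temperature": 38.4}))
```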
An orthogonal line of research considers automating the process of image annotation. The noise introduced by such a supervised or unsupervised process poses a significant challenge for natural language representations. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.
Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal.
Appendix ::: VizDoom
VizDoom is a "Doom" based research environment that was developed at the Poznań University of Technology. It is based on "ZDoom" game executable, and includes a Python based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game was needed to be implemented. ViZDoom emulates the "Doom" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations etc. Each game frame contains "labels", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used "Doom Builder" in order to edit some of the scenarios and design a new one. Enviroment rewards are presented in doom-scenarios-table.
Appendix ::: Natural language State Space
A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as "close" or "far". However, objects that are outside the player's field of vision cannot be a part of the state. Furthermore, a human would most likely refer to an object's location relative to themselves, using directions such as "right" or "left".
Appendix ::: Language model implementation
To convert each frame to a natural language state representation, the list of available labels is iterated over, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of objects of each type inside every patch, and parse the result as a sentence. Whether an object is close or far is determined by calculating its distance to the player and comparing it against two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics (a minimal sketch of the resulting parser is given after this list):
The screen can be divided into patches of equal width, or according to predetermined ratios. Here, our main guideline was to keep the "front" patch narrow enough so it can be used as "sights".
Our initial experiment used 3 patches, and we later added 2 more patches classified as "outer left" and "outer right". In our experiments we tested up to 51 patches, referred to as left or right patches with corresponding numbers.
We used 2 thresholds, which allowed us to classify the distance of an object from the player as "close", "mid", and "far". Depending on the task, the threshold values can be changed, and more thresholds can be added.
Different states might generate sentences of different lengths. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game scenarios.
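The following is a minimal sketch of the parsing logic described above, assuming three vertical patches and two distance thresholds; the threshold values, patch phrasing, and object names are illustrative placeholders rather than the exact values used in our experiments.

```python
CLOSE, MID = 150.0, 400.0  # illustrative distance thresholds (game units)

def distance_bucket(dist):
    return "close" if dist < CLOSE else "mid" if dist < MID else "far"

def patch_index(x, screen_width, n_patches=3):
    """Map a screen x-coordinate to a vertical patch index."""
    return min(int(x / (screen_width / n_patches)), n_patches - 1)

def frame_to_sentence(labels, screen_width):
    """labels: iterable of (object_name, screen_x, distance_to_player)."""
    patch_phrases = ["to your left", "in front of you", "to your right"]
    counts = {}  # (patch, distance bucket, object name) -> count
    for name, x, dist in labels:
        key = (patch_index(x, screen_width), distance_bucket(dist), name)
        counts[key] = counts.get(key, 0) + 1
    parts = []
    for (patch, bucket, name), n in sorted(counts.items()):
        noun = name if n == 1 else name + "s"
        parts.append(f"{n} {bucket} {noun} {patch_phrases[patch]}")
    return "You see " + ", ".join(parts) + "." if parts else "You see nothing."

print(frame_to_sentence([("monster", 320, 120.0), ("medkit", 40, 500.0)], 640))
```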
After the sentence describing the state is generated, it is transformed into a sequence of embedding vectors. Words that were not found in the vocabulary were replaced with an "OOV" vector. All word vectors were then concatenated into an NxDx1 matrix representing the state. We experimented with both Word2vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero-padded, while longer ones are trimmed.
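A sketch of this embedding step is shown below, assuming the pretrained GloVe vectors have already been loaded into a Python dictionary named glove; the 200-word maximum length and 50-dimensional embeddings follow the setting described in the next subsection, and a toy vocabulary stands in for the real vectors.

```python
import numpy as np

MAX_WORDS, EMB_DIM = 200, 50

def sentence_to_matrix(sentence, glove, oov_vector):
    """Embed a state sentence into a fixed-size MAX_WORDS x EMB_DIM matrix."""
    vectors = [glove.get(w, oov_vector) for w in sentence.lower().split()]
    vectors = vectors[:MAX_WORDS]                            # trim long sentences
    pad = [np.zeros(EMB_DIM)] * (MAX_WORDS - len(vectors))   # zero-pad short ones
    return np.stack(vectors + pad).astype(np.float32)        # shape: (200, 50)

# Toy vocabulary standing in for real GloVe vectors.
glove = {"monster": np.ones(EMB_DIM), "close": np.full(EMB_DIM, 0.5)}
oov = np.full(EMB_DIM, 0.1)
state = sentence_to_matrix("one close monster in front of you", glove, oov)
print(state.shape)  # (200, 50); an extra channel dimension can be added for the CNN
```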
Appendix ::: Model implementation
All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures:
Used for the raw image and semantic segmentation based agents. VizDoom's raw output is a 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image has a resolution of 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consisted of two convolutional layers, two hidden linear layers, and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both followed by ReLU activations. The output layer's size is the number of actions the agent has available in the trained scenario.
Used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was built from features we extracted from the VizDoom API, and its dimension was 90 X 1. The network is made up of two fully connected layers, each followed by a ReLU activation. The first layer has 32 units, and the second has 16 units. The output layer's size is the number of actions available to the agent.
Used in the natural language based agent. As previously mentioned, each natural language state is transformed into a 200X50X1 matrix (up to 200 words, each embedded into 50 dimensions). The first layers of the TextCNN are convolutional layers with 8 filters each, designed to scan the input sentence and return convolution outputs for word sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of word sequences; longer filters are better able to extract features from longer word sequences. The filters we chose have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1, 11X50X1. Following the convolution layers there is a ReLU activation and a max pooling layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second has 16 units. Both are followed by ReLU activations. A PyTorch sketch of this architecture is given below.
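Below is a minimal PyTorch sketch of a TextCNN of this kind, using the filter widths listed above; the number of actions and other details are simplified assumptions rather than the exact training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Multi-width convolutions over a (batch, words, emb_dim) state matrix."""
    def __init__(self, emb_dim=50, n_filters=8, widths=(3, 4, 5, 8, 11), n_actions=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(w, emb_dim)) for w in widths
        )
        self.fc1 = nn.Linear(n_filters * len(widths), 32)
        self.fc2 = nn.Linear(32, 16)
        self.out = nn.Linear(16, n_actions)

    def forward(self, x):                      # x: (batch, words, emb_dim)
        x = x.unsqueeze(1)                     # add channel dim: (batch, 1, words, emb_dim)
        feats = []
        for conv in self.convs:
            h = F.relu(conv(x)).squeeze(3)     # (batch, n_filters, words - width + 1)
            feats.append(F.max_pool1d(h, h.size(2)).squeeze(2))  # max over time
        h = torch.cat(feats, dim=1)            # (batch, n_filters * len(widths))
        h = F.relu(self.fc1(h))
        h = F.relu(self.fc2(h))
        return self.out(h)                     # Q-values (DQN) or logits (PPO actor)

q_values = TextCNN()(torch.randn(4, 200, 50))
print(q_values.shape)  # torch.Size([4, 3])
```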
All architectures produce the same type of output, regardless of the input type. The DQN network is a regression network whose output size is the number of available actions. The PPO agent has two networks, an actor and a critic. The actor network has a Softmax activation with size equal to the number of available actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. | Unanswerable
827464c79f33e69959de619958ade2df6f65fdee | 827464c79f33e69959de619958ade2df6f65fdee_0 | Q: How much faster natural language agents converge in performed experiments?
Text: Introduction
“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations."
(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)
Deep Learning based algorithms use neural networks in order to learn feature representations that are suitable for solving high-dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the "curse of dimensionality".
The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest representing the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one person to another BIBREF5.
The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.
Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.
In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work.
Preliminaries ::: Reinforcement Learning
In Reinforcement Learning the goal is to learn a policy $\pi(s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal{A}$, with the objective of maximizing a reward $r(s)$ provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate performance in MDPs are the value $v(s)$ and action-value $Q(s, a)$ functions, defined as $v(s) = \mathbb{E}^{\pi}[\sum_t \gamma^t r_t \mid s_0 = s]$ and $Q(s, a) = \mathbb{E}^{\pi}[\sum_t \gamma^t r_t \mid s_0 = s, a_0 = a]$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.
Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely $L(\theta) = \mathbb{E}\big[\big(r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta)\big)^{2}\big]$, where $\theta^{-}$ denotes the parameters of a periodically updated target network.
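A PyTorch sketch of this loss is given below, assuming q_net and target_net are two copies of one of the networks described in the Appendix and that transitions are sampled from a replay buffer; it only spells out the Bellman error above, not the full training loop.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Squared Bellman error on a batch of (s, a, r, s_next, done) transitions."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a; theta)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values          # max_a' Q(s', a'; theta^-)
        target = r + gamma * (1.0 - done) * q_next             # Bellman target
    return F.mse_loss(q_sa, target)
```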
Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with the addition of a trust-region update rule. The policy gradient theorem updates the policy by $\nabla_{\theta} J(\theta) = \mathbb{E}^{\pi}\big[\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi}(s, a)\big]$, and PPO constrains each update by clipping the probability ratio between the new and old policies.
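For completeness, the sketch below shows the clipped surrogate objective commonly used for PPO's trust-region-style update; the advantage estimates are assumed to be computed elsewhere (e.g., with GAE), and the clipping parameter is the usual default rather than a value taken from our experiments.

```python
import torch

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective: the policy ratio is kept inside [1 - eps, 1 + eps]."""
    ratio = torch.exp(new_log_probs - old_log_probs)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()                # maximize surrogate => minimize its negation
```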
Preliminaries ::: Deep Learning for NLP
A word embedding is a mapping from a word $w$ to a vector $\mathbf{w} \in \mathbb{R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf{w} \in \mathbb{N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings.
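As a small concrete example, the sketch below contrasts a 1-hot BoW representation with a low-dimensional embedding lookup; the toy dictionary and dimensions are arbitrary.

```python
import numpy as np

dictionary = ["monster", "medkit", "close", "far"]        # |D| = 4
word_to_id = {w: i for i, w in enumerate(dictionary)}

def one_hot(word):
    v = np.zeros(len(dictionary))                         # w in N^{|D|}
    v[word_to_id[word]] = 1.0
    return v

# A distributional embedding maps each word to a dense vector with d << |D|.
d = 2
embedding_table = np.random.randn(len(dictionary), d)

def embed(word):
    return embedding_table[word_to_id[word]]              # w in R^d

print(one_hot("medkit"))   # [0. 1. 0. 0.]
print(embed("medkit"))     # a dense 2-dimensional vector
```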
The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.
Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output.
Semantic Representation Methods
Contemporary methods for semantic representation of states follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.
The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.
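The conversion from a class-index map to an N-channel binary representation can be written in a few lines; the sketch below assumes the segmentation map stores one integer class id per pixel, as in the ViZDoom labels buffer used later in the paper.

```python
import numpy as np

def to_channel_masks(class_map, n_classes):
    """Convert an (H, W) map of class ids into an (n_classes, H, W) binary tensor."""
    masks = np.zeros((n_classes,) + class_map.shape, dtype=np.uint8)
    for c in range(n_classes):
        masks[c] = (class_map == c).astype(np.uint8)      # 1 where class c is present
    return masks

class_map = np.array([[0, 2], [1, 2]])                    # toy 2x2 "image" with 3 classes
print(to_channel_masks(class_map, 3).shape)               # (3, 2, 2)
```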
In this paper we propose a fourth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich as well as flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the sidewalk.” or compactly by “There is a car two meters in front of you, a pedestrian on the sidewalk to your right, and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter environment.
Semantic State Representations in the Doom Environment
In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though the semantic segmentation and feature vector representation techniques convey similar statistics of the state, the natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.
The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.
In order to incorporate natural language representation into the VizDoom environment we constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the environment.
Semantic State Representations in the Doom Environment ::: Experiments
We tested the natural language representation against the visual-based and feature representations on several tasks of varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room, and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left, or shoot. Finally, in the "super" scenario both melee and fireball shooting monsters are repeatedly spawned all over the room. The room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition, and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars, and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.
Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.
In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the "super" scenario was positively biased toward image-based representations. This was done by adding a large amount of items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (an average of over 250 words). This is contrary to image-based representations, which did not change in dimension.
Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representation methods. It can be seen that the NLP representation outperforms the other methods, despite the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations renders inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.
In order to verify that the performance of the natural language representation was not due to extensive discretization of patches, we conducted experiments increasing the number of horizontal patches, ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the amount of discretization of patches did not affect the performance of the NLP agent, which remained a superior representation compared to the rest.
| Unanswerable
8e857e44e4233193c7b2d538e520d37be3ae1552 | 8e857e44e4233193c7b2d538e520d37be3ae1552_0 | Q: What experiments authors perform?
Text: Introduction
“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations."
(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)
Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality".
The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.
The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.
Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.
In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work.
Preliminaries ::: Reinforcement Learning
In Reinforcement Learning the goal is to learn a policy $\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal {A}$, with the objective to maximize a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.
Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach, to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely
Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with an addition of a trust-region update rule. The policy gradient theorem updates the policy by
Preliminaries ::: Deep Learning for NLP
A word embedding is a mapping from a word $w$ to a vector $\mathbf {w} \in \mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf {w} \in \mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings.
The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.
Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output.
Semantic Representation Methods
Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.
The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.
In this paper we propose a forth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich as well as flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter enviornment.
Semantic State Representations in the Doom Environment
In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.
The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.
In order to incorporate natural language representation to the VizDoom environment we've constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the enviornment.
Semantic State Representations in the Doom Environment ::: Experiments
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, left or shoot. Finally, in the “super" scenario both melee and fireball shooting monsters are repeatably spawned all over the room. the room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.
Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.
In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super" scenario was positively biased toward image-based representations. This was done by adding a large amount items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is contrary to image-based representations, which did not change in dimension.
Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.
In order to verify that the performance of the natural language representation was not due to extensive discretization of patches, we conducted experiments increasing the number of horizontal patches, ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the degree of patch discretization did not affect the performance of the NLP agent, which remained a superior representation compared to the rest.
To conclude, our experiments suggest that NLP representations, though they describe the same raw information as the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we have only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents.
Related Work
Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning includes Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.
There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.
BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.
More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.
Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44.
Discussion and Future Work
Our results indicate that natural language can outperform, and sometimes even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:
Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.
Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.
Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.
An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.
Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal.
Appendix ::: VizDoom
VizDoom is a "Doom" based research environment that was developed at the Poznań University of Technology. It is based on "ZDoom" game executable, and includes a Python based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game was needed to be implemented. ViZDoom emulates the "Doom" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations etc. Each game frame contains "labels", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used "Doom Builder" in order to edit some of the scenarios and design a new one. Enviroment rewards are presented in doom-scenarios-table.
Appendix ::: Natural language State Space
A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as "close" or "far". However, objects that are outside the player's field of vision cannot be a part of the state. Furthermore, a human would most likely refer to an object's location relative to themselves, using directions such as "right" or "left".
Appendix ::: Language model implementation
To convert each frame to a natural language state representation, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of objects of each type inside them, and parse the result as a sentence. The decision as to whether an object is close or far is determined by calculating its distance to the player and using two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics (a short sketch of the resulting parsing procedure is given after the list):
the screen can be divided into patches equally, or by predetermined ratios. Here, our main guideline was to keep the "front" patch narrow enough so it can be used as "sights".
our initial experiment was with 3 patches, and later we added 2 more patches classified as "outer left" and "outer right". In our experiments we have tested up to 51 patches, referred to as left or right patch with corresponding numbers.
we used 2 thresholds, which allowed us to classify the distance of an object from the player as "close", "mid", and "far". Depending on the task, the values of the thresholds can be changed, and more thresholds can be added.
different states might generate sentences of different lengths. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game scenarios.
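The sketch below illustrates the parsing procedure referenced above. The function names, patch ratios and distance thresholds are illustrative placeholders rather than the exact values used in our experiments, and the input is assumed to be a list of (object name, screen x-coordinate, distance) tuples extracted from the labels buffer.

```python
from collections import Counter

# Illustrative values only: the paper states that two distance thresholds were used,
# but not their exact magnitudes or the exact patch ratios.
CLOSE_THRESHOLD, MID_THRESHOLD = 200.0, 500.0
PATCH_NAMES = ["outer left", "left", "front", "right", "outer right"]

def distance_band(dist):
    if dist < CLOSE_THRESHOLD:
        return "close"
    if dist < MID_THRESHOLD:
        return "mid"
    return "far"

def frame_to_sentence(objects, screen_width, patch_ratios=(0.15, 0.25, 0.2, 0.25, 0.15)):
    """objects: list of (name, screen_x, distance) tuples for the visible labels."""
    # Patch boundaries derived from (possibly unequal) width ratios.
    bounds, acc = [], 0.0
    for r in patch_ratios:
        acc += r
        bounds.append(acc * screen_width)

    clauses = []
    for p, patch in enumerate(PATCH_NAMES):
        lo = bounds[p - 1] if p > 0 else 0.0
        hi = bounds[p]
        # Count objects of each (type, distance band) inside this vertical patch.
        counts = Counter((name, distance_band(d)) for name, x, d in objects if lo <= x < hi)
        for (name, band), n in counts.items():
            clauses.append(f"{n} {name} {band} to your {patch}")
    return "You see " + ", ".join(clauses) + "." if clauses else "You see nothing."

# Example with three labelled objects on a 640-pixel-wide screen:
print(frame_to_sentence([("monster", 320, 150), ("medikit", 90, 600), ("monster", 610, 450)], 640))
```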
After the sentence describing the state is generated, it is transformed into an embedding representation. Words that were not found in the vocabulary were replaced with an “OOV" vector. All words were then concatenated into an NxDx1 matrix, representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero padded, while longer ones are trimmed.
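A minimal sketch of this conversion is given below; it assumes the pretrained GloVe vectors have already been loaded into a Python dictionary, and the value of the OOV vector is an arbitrary placeholder.

```python
import numpy as np

def sentence_to_matrix(sentence, glove, max_len=200, dim=50):
    """Map a state sentence to a fixed-size (max_len, dim, 1) embedding tensor.

    glove: dict mapping a word to a length-`dim` numpy vector.
    Unknown words receive a shared OOV vector; short sentences are zero padded
    and long ones are trimmed to `max_len` words.
    """
    oov = np.full(dim, 0.1, dtype=np.float32)         # shared out-of-vocabulary vector (placeholder value)
    words = sentence.lower().split()[:max_len]        # trim long sentences
    mat = np.zeros((max_len, dim), dtype=np.float32)  # zero padding for short sentences
    for i, w in enumerate(words):
        mat[i] = glove.get(w, oov)
    return mat[..., None]                             # shape (max_len, dim, 1), as consumed by the TextCNN

# Example with a toy two-word vocabulary:
toy_glove = {"monster": np.ones(50, dtype=np.float32), "close": -np.ones(50, dtype=np.float32)}
print(sentence_to_matrix("You see 1 monster close to your front .", toy_glove).shape)  # (200, 50, 1)
```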
Appendix ::: Model implementation
All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-Values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures (PyTorch sketches of these architectures are given after the three descriptions below):
Used for the raw image and semantic segmentation based agents. VizDoom's raw output is a 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consists of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both followed by a ReLU activation. The output layer's size is the number of actions the agent has available in the trained scenario.
Used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was built using features we extracted from the VizDoom API, and its dimension was 90 X 1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units, and the second one has 16 units. The output layer's size was the number of actions available to the agent.
Used in the natural language based agent. As previously mentioned, each natural language state is transformed into a 200X50X1 matrix of word embeddings. The first layers of the TextCNN are convolutional layers with 8 filters each, designed to scan the input sentence and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of word sequences. Longer filters have a higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1, 11X50X1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second one has 16 units. Both are followed by a ReLU activation.
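The following PyTorch sketch mirrors the three layer specifications above. Details that are not fully stated in the text (the flattened size of the convolutional features, the absence of padding, weight initialization) are assumptions; `nn.LazyLinear` is used only to infer the flattened size from the chosen input resolution.

```python
import torch
import torch.nn as nn

class VisionDQN(nn.Module):
    """CNN for the raw-image / semantic-segmentation agents: 6x6/stride-3 and 3x3/stride-2 convolutions."""
    def __init__(self, in_channels, n_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=6, stride=3), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers the flattened size, which depends on the (possibly down-sampled) resolution.
        self.head = nn.Sequential(nn.LazyLinear(32), nn.ReLU(),
                                  nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, n_actions))

    def forward(self, x):
        return self.head(self.features(x))

class FeatureDQN(nn.Module):
    """Two fully connected layers for the 90-dimensional feature-vector agent."""
    def __init__(self, n_actions, in_dim=90):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, n_actions))

    def forward(self, x):
        return self.net(x)

class TextCNNDQN(nn.Module):
    """TextCNN for the natural-language agent: parallel convolutions of widths 3, 4, 5, 8 and 11 words."""
    def __init__(self, n_actions, emb_dim=50, n_filters=8, widths=(3, 4, 5, 8, 11)):
        super().__init__()
        # Each filter spans the full embedding dimension, i.e. a (w x emb_dim) kernel on a 1-channel input.
        self.convs = nn.ModuleList(nn.Conv2d(1, n_filters, kernel_size=(w, emb_dim)) for w in widths)
        self.head = nn.Sequential(nn.Linear(n_filters * len(widths), 32), nn.ReLU(),
                                  nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, n_actions))

    def forward(self, x):                      # x: (batch, max_len, emb_dim, 1)
        x = x.permute(0, 3, 1, 2)              # -> (batch, 1, max_len, emb_dim)
        pooled = [torch.relu(conv(x)).squeeze(3).max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))

# Quick shape check for the natural-language agent (batch of 2 states, 3 available actions):
print(TextCNNDQN(n_actions=3)(torch.zeros(2, 200, 50, 1)).shape)  # torch.Size([2, 3])
```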
All architectures have the same output, regardless of the input type. The DQN network is a regression network, with its output size the number of available actions. The PPO agent has 2 networks; actor and critic. The actor network has a Softmax activation with size equal to the available amount of actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. | a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios |
084fb7c80a24b341093d4bf968120e3aff56f693 | 084fb7c80a24b341093d4bf968120e3aff56f693_0 | Q: How is state to learn and complete tasks represented via natural language?
Text: Introduction
“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations."
(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)
Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality".
The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.
The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.
Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.
In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work.
Preliminaries ::: Reinforcement Learning
In Reinforcement Learning the goal is to learn a policy $\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\mathcal {A}$, with the objective to maximize a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \mathbb {E}^{\pi } [\sum _t \gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.
Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach to a deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely ${L(\theta ) = \mathbb {E} [ ( r + \gamma \max _{a^{\prime }} Q_{\theta ^{-}}(s^{\prime }, a^{\prime }) - Q_{\theta }(s, a) )^2 ]}$, where $\theta ^{-}$ denotes the parameters of a periodically updated target network.
Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with the addition of a trust-region update rule. The policy gradient theorem updates the policy by ascending the gradient of the expected return, ${\nabla _{\theta } J(\theta ) = \mathbb {E}^{\pi } [ \nabla _{\theta } \log \pi _{\theta }(a | s) Q^{\pi }(s, a) ]}$.
Preliminaries ::: Deep Learning for NLP
A word embedding is a mapping from a word $w$ to a vector $\mathbf {w} \in \mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\mathbf {w} \in \mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \ll |D|$. These methods are also known as distributional embeddings.
The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences. Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving efficiency of state-of-the-art language models.
Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector, a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output.
Semantic Representation Methods
Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.
The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain less task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.
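As a small illustration of the difference between the two encodings, the sketch below converts an integer class map (one class index per pixel, as produced by a segmentation model or a labelling API) into an $N$-channel binary map; the toy class names are of course only an example.

```python
import numpy as np

def to_channel_map(class_map, n_classes):
    """Convert an (H, W) integer class map into an (N, H, W) binary map, one channel per class."""
    channels = np.zeros((n_classes, *class_map.shape), dtype=np.uint8)
    for c in range(n_classes):
        channels[c] = (class_map == c)
    return channels

# Toy 2x3 "image" with 3 classes (0 = road, 1 = car, 2 = pedestrian):
toy = np.array([[0, 1, 1],
                [0, 2, 0]])
print(to_channel_map(toy, 3)[1])  # binary mask of the "car" class
```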
In this paper we propose a fourth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich and flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the side walk.” or compactly by “There is a car two meters in front of you a pedestrian on the sidewalk to your right and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”. In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first person shooter environment.
Semantic State Representations in the Doom Environment
In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express a similar statistic of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.
The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions of where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.
In order to incorporate a natural language representation into the VizDoom environment, we constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the environment.
Semantic State Representations in the Doom Environment ::: Experiments
We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.
More specifically, in the basic scenario, a single monster is spawned in front of the agent. The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball shooting monsters are spawned near the opposing wall. The agent can only step right, step left, or shoot. Finally, in the “super" scenario both melee and fireball shooting monsters are repeatedly spawned all over the room. The room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.
Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.
In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super" scenario was positively biased toward image-based representations. This was done by adding a large amount of items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became extensively longer (average of over 250 words). This is in contrast to image-based representations, which did not change in dimension.
Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representation methods. It can be seen that the NLP representation outperforms the other methods, despite the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations renders inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.
In order to verify that the performance of the natural language representation was not due to extensive discretization of patches, we conducted experiments increasing the number of horizontal patches, ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the degree of patch discretization did not affect the performance of the NLP agent, which remained a superior representation compared to the rest.
To conclude, our experiments suggest that NLP representations, though they describe the same raw information as the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we have only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents.
Related Work
Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning includes Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.
There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.
BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.
More recently, the structure and compositionality of natural language has been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions. Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.
Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44.
Discussion and Future Work
Our results indicate that natural language can outperform, and sometimes even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. For one, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:
Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.
Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.
Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.
An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process serves as a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.
Natural language representations help abstract information and interpret the state of an agent, improving its overall performance. Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal.
Appendix ::: VizDoom
VizDoom is a "Doom" based research environment that was developed at the Poznań University of Technology. It is based on "ZDoom" game executable, and includes a Python based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision based reinforcement learning. Thus, a natural language representation for the game was needed to be implemented. ViZDoom emulates the "Doom" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations etc. Each game frame contains "labels", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used "Doom Builder" in order to edit some of the scenarios and design a new one. Enviroment rewards are presented in doom-scenarios-table.
Appendix ::: Natural language State Space
A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as "close" or "far". However, objects that are outside the player's field of vision cannot be a part of the state. Furthermore, a human would most likely refer to an object's location relative to themselves, using directions such as "right" or "left".
Appendix ::: Language model implementation
To convert each frame to a natural language state representation, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of objects of each type inside them, and parse the result as a sentence. The decision as to whether an object is close or far is determined by calculating its distance to the player and using two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics:
the screen can be divided into patches equally, or by predetermined ratios. Here, our main guideline was to keep the "front" patch narrow enough so it can be used as "sights".
our initial experiment was with 3 patches, and later we added 2 more patches classified as "outer left" and "outer right". In our experiments we have tested up to 51 patches, referred to as left or right patch with corresponding numbers.
we used 2 thresholds, which allowed us to classify the distance of an object from the player as "close", "mid", and "far". Depending on the task, the values of the thresholds can be changed, and more thresholds can be added.
different states might generate sentences of different lengths. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game scenarios.
After the sentence describing the state is generated, it is transformed into an embedding representation. Words that were not found in the vocabulary were replaced with an “OOV" vector. All words were then concatenated into an NxDx1 matrix, representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero padded, while longer ones are trimmed.
Appendix ::: Model implementation
All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-Values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures:
Used for the raw image and semantic segmentation based agents. VizDoom's raw output is a 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consists of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both followed by a ReLU activation. The output layer's size is the number of actions the agent has available in the trained scenario.
Used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was built using features we extracted from the VizDoom API, and its dimension was 90 X 1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units, and the second one has 16 units. The output layer's size was the number of actions available to the agent.
Used in the natural language based agent. As previously mentioned, each natural language state is transformed into a 200X50X1 matrix of word embeddings. The first layers of the TextCNN are convolutional layers with 8 filters each, designed to scan the input sentence and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of word sequences. Longer filters have a higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1, 11X50X1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second one has 16 units. Both are followed by a ReLU activation.
All architectures have the same output, regardless of the input type. The DQN network is a regression network, with its output size the number of available actions. The PPO agent has 2 networks; actor and critic. The actor network has a Softmax activation with size equal to the available amount of actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47. | represent the state using natural language |
babe72f0491e65beff0e5889380e8e32d7a81f78 | babe72f0491e65beff0e5889380e8e32d7a81f78_0 | Q: How does the model compare with the MMR baseline?
Text: Introduction
The development of automatic tools for the summarization of large corpora of documents has attracted a widespread interest in recent years. With fields of application ranging from medical sciences to finance and legal science, these summarization systems considerably reduce the time required for knowledge acquisition and decision making, by identifying and formatting the relevant information from a collection of documents. Since most applications involve large corpora rather than single documents, summarization systems developed recently are meant to produce summaries of multiple documents. Similarly, the interest has shifted from generic towards query-oriented summarization, in which a query expresses the user's needs. Moreover, existing summarizers are generally extractive, namely they produce summaries by extracting relevant sentences from the original corpus.
Among the existing extractive approaches for text summarization, graph-based methods are considered very effective due to their ability to capture the global patterns of connection between the sentences of the corpus. These systems generally define a graph in which the nodes are the sentences and the edges denote relationships of lexical similarities between the sentences. The sentences are then scored using graph ranking algorithms such as the PageRank BIBREF0 or HITS BIBREF1 algorithms, which can also be adapted for the purpose of query-oriented summarization BIBREF2 . A key step of graph-based summarizers is the way the graph is constructed, since it has a strong impact on the sentence scores. As pointed out in BIBREF3 , a critical issue of traditional graph-based summarizers is their inability to capture group relationships among sentences since each edge of a graph only connects a pair of nodes.
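As a brief illustration of this family of methods (and not of the model proposed in this paper), the following sketch scores sentences with a query-personalized PageRank over a cosine-similarity sentence graph; it assumes the networkx and scikit-learn packages are available, and the similarity threshold is an arbitrary choice.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def query_biased_scores(sentences, query, sim_threshold=0.1, damping=0.85):
    """Score sentences with a query-personalized PageRank over a lexical-similarity graph."""
    tfidf = TfidfVectorizer().fit(sentences + [query])
    S = tfidf.transform(sentences)
    sim = cosine_similarity(S)                                    # pairwise sentence similarities
    rel = cosine_similarity(S, tfidf.transform([query])).ravel()  # similarity of each sentence to the query

    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > sim_threshold:
                g.add_edge(i, j, weight=float(sim[i, j]))

    # The personalization vector biases the random walk toward query-relevant sentences.
    personalization = {i: float(rel[i]) + 1e-8 for i in range(len(sentences))}
    return nx.pagerank(g, alpha=damping, personalization=personalization, weight="weight")

scores = query_biased_scores(
    ["Graph methods rank sentences.",
     "Hypergraphs capture group relations among sentences.",
     "The weather is nice today."],
    query="How do graph methods rank sentences?",
)
print(max(scores, key=scores.get))  # index of the top-ranked sentence
```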
Following the idea that each topic of a corpus connects a group of multiple sentences covering that topic, hypergraph models were proposed in BIBREF3 and BIBREF4, in which the hyperedges represent similarity relationships among groups of sentences. These group relationships are formed by detecting clusters of lexically similar sentences, which we refer to as themes or theme-based hyperedges. Each theme is believed to cover a specific topic of the corpus. However, since the models of BIBREF3 and BIBREF4 define the themes as groups of lexically similar sentences, the underlying topics are not explicitly discovered. Moreover, their themes do not overlap, which contradicts the fact that each sentence carries multiple pieces of information and may thus belong to multiple themes, as can be seen from the following example of a sentence.
Two topics are covered by the sentence above: the topics of studies and leisure. Hence, the sentence should belong to multiple themes simultaneously, which is not allowed in existing hypergraph models of BIBREF3 and BIBREF4 .
The hypergraph model proposed in this paper alleviates these issues by first extracting topics, i.e. groups of semantically related terms, using a new topic model referred to as SEMCOT. Then, a theme is associated with each topic, such that each theme is defined as the group of sentences covering the associated topic. Finally, a hypergraph is formed with sentences as nodes, themes as hyperedges and hyperedge weights reflecting the prominence of each theme and its relevance to the query. In such a way, our model alleviates the weaknesses of existing hypergraph models since each theme-based hyperedge is associated with a specific topic and each sentence may belong to multiple themes.
Furthermore, a common drawback of existing graph- and hypergraph-based summarizers is that they select sentences based on the computation of an individual relevance score for each sentence. This approach fails to capture the information jointly carried by the sentences which results in redundant summaries missing important topics of the corpus. To alleviate this issue, we propose a new approach of sentence selection using our theme-based hypergraph. A minimal hypergraph transversal is the smallest subset of nodes covering all hyperedges of a hypergraph BIBREF5 . The concept of hypergraph transversal is used in computational biology BIBREF6 and data mining BIBREF5 for identifying a subset of relevant agents in a hypergraph. In the context of our theme-based hypergraph, a hypergraph transversal can be viewed as the smallest subset of sentences covering all themes of the corpus. We extend the notion of transversal to take the theme weights into account and we propose two extensions called minimal soft hypergraph transversal and maximal budgeted hypergraph transversal. The former corresponds to finding a subset of sentences of minimal aggregated length and achieving a target coverage of the topics of the corpus (in a sense that will be clarified). The latter seeks a subset of sentences maximizing the total weight of covered hyperedges while not exceeding a target summary length. As the associated discrete optimization problems are NP-hard, we propose two approximation algorithms building on the theory of submodular functions. Our transversal-based approach for sentence selection alleviates the drawback of methods of individual sentence scoring, since it selects a set of sentences that are jointly covering a maximal number of relevant themes and produces informative and non-redundant summaries. As demonstrated in the paper, the time complexity of the method is equivalent to that of early graph-based summarization systems such as LexRank BIBREF0 , which makes it more efficient than existing hypergraph-based summarizers BIBREF3 , BIBREF4 . The scalability of summarization algorithms is essential, especially in applications involving large corpora such as the summarization of news reports BIBREF7 or the summarization of legal texts BIBREF8 .
The method of BIBREF9 proposes to select sentences by using a maximum coverage approach, which shares some similarities with our model. However, they attempt to select a subset of sentences maximizing the number of relevant terms covered by the sentences. Hence, they fail to capture the topical relationships among sentences, which are, in contrast, included in our theme-based hypergraph.
A thorough comparative analysis with state-of-the-art summarization systems is included in the paper. Our model is shown to outperform other models on a benchmark dataset produced by the Document Understanding Conference. The main contributions of this paper are (1) a new topic model extracting groups of semantically related terms based on patterns of term co-occurrences, (2) a natural hypergraph model representing nodes as sentences and each hyperedge as a theme, namely a group of sentences sharing a topic, and (3) a new sentence selection approach based on hypergraph transversals for the extraction of a subset of jointly relevant sentences.
The structure of the paper is as follows. In section "Background and related work" , we present work related to our method. In section "Problem statement and system overview" , we present an overview of our system which is described in further details in section "Summarization based on hypergraph transversals" . Then, in section "Experiments and evaluation" , we present experimental results. Finally, section "Conclusion" presents a discussion and concluding remarks.
Background and related work
While early models focused on the task of single document summarization, recent systems generally produce summaries of corpora of documents BIBREF10 . Similarly, the focus has shifted from generic summarization to the more realistic task of query-oriented summarization, in which a summary is produced with the essential information contained in a corpus that is also relevant to a user-defined query BIBREF11 .
Summarization systems are further divided into two classes, namely abstractive and extractive models. Extractive summarizers identify relevant sentences in the original corpus and produce summaries by aggregating these sentences BIBREF10 . In contrast, an abstractive summarizer identifies conceptual information in the corpus and reformulates a summary from scratch BIBREF11 . Since abstractive approaches require advanced natural language processing, the majority of existing summarization systems consist of extractive models.
Extractive summarizers differ in the method used to identify relevant sentences, which leads to a classification of models as either feature-based or graph-based approaches. Feature-based methods represent the sentences with a set of predefined features such as the sentence position, the sentence length or the presence of cue phrases BIBREF12 . Then, they train a model to compute relevance scores for the sentences based on their features. Since feature-based approaches generally require datasets with labelled sentences which are hard to produce BIBREF11 , unsupervised graph-based methods have attracted growing interest in recent years.
Graph-based summarizers represent the sentences of a corpus as the nodes of a graph with the edges modelling relationships of similarity between the sentences BIBREF0 . Then, graph-based algorithms are applied to identify relevant sentences. The models generally differ in the type of relationship captured by the graph or in the sentence selection approach. Most graph-based models define the edges connecting sentences based on the co-occurrence of terms in pairs of sentences BIBREF0 , BIBREF2 , BIBREF3 . Then, important sentences are identified either based on node ranking algorithms, or using a global optimization approach. Methods based on node ranking compute individual relevance scores for the sentences and build summaries with highly scored sentences. The earliest such summarizer, LexRank BIBREF0 , applies the PageRank algorithm to compute sentence scores. Introducing a query bias in the node ranking algorithm, this method can be adapted for query-oriented summarization as in BIBREF2 . A different graph model was proposed in BIBREF13 , where sentences and key phrases form the two classes of nodes of a bipartite graph. The sentences and the key phrases are then scored simultaneously by applying a mutual reinforcement algorithm. An extended bipartite graph ranking algorithm is also proposed in BIBREF1 in which the sentences represent one class of nodes and clusters of similar sentences represent the other class. The hubs and authorities algorithm is then applied to compute sentence scores. Adding terms as a third class of nodes, BIBREF14 propose to score terms, sentences and sentence clusters simultaneously, based on a mutual reinforcement algorithm which propagates the scores across the three node classes. A common drawback of the approaches based on node ranking is that they compute individual relevance scores for the sentences and they fail to model the information jointly carried by the sentences, which may result in redundant summaries. Hence, global optimization approaches were proposed to select a set of jointly relevant and non-redundant sentences as in BIBREF15 and BIBREF16 . For instance, BIBREF17 propose a greedy algorithm to find a dominating set of nodes in the sentence graph. A summary is then formed with the corresponding set of sentences. Similarly, BIBREF15 extract a set of sentences with a maximal similarity with the entire corpus and a minimal pairwise lexical similarity, which is modelled as a multi-objective optimization problem. In contrast, BIBREF9 propose a coverage approach in which a set of sentences maximizing the number of distinct relevant terms is selected. Finally, BIBREF16 propose a two step approach in which individual sentence relevance scores are computed first. Then a set of sentences with a maximal total relevance and a minimal joint redundancy is selected. All three methods attempt to solve NP-hard problems. Hence, they propose approximation algorithms based on the theory of submodular functions.
Going beyond pairwise lexical similarities between sentences and relations based on the co-occurrence of terms, hypergraph models were proposed, in which nodes are sentences and hyperedges model group relationships between sentences BIBREF3 . The hyperedges of the hypergraph capture topical relationships among groups of sentences. Existing hypergraph-based systems BIBREF3 , BIBREF4 combine pairwise lexical similarities and clusters of lexically similar sentences to form the hyperedges of the hypergraph. Hypergraph ranking algorithms are then applied to identify important and query-relevant sentences. However, they do not provide any interpretation for the clusters of sentences discovered by their method. Moreover, these clusters do not overlap, which is inconsistent with the fact that each sentence carries multiple pieces of information and hence belongs to multiple semantic groups of sentences. In contrast, each hyperedge in our proposed hypergraph connects sentences covering the same topic, and these hyperedges do overlap.
A minimal hypergraph transversal is a subset of the nodes of a hypergraph of minimum cardinality such that each hyperedge of the hypergraph is incident to at least one node in the subset BIBREF5 . Theoretically equivalent to the minimum hitting set problem, the problem of finding a minimum hypergraph transversal can be viewed as finding a subset of representative nodes covering the essential information carried by each hyperedge. Hence, hypergraph transversals find applications in various areas such as computational biology, Boolean algebra and data mining BIBREF18 . Extensions of hypergraph transversals to include hyperedge and node weights were also proposed in BIBREF19 . Since the associated optimization problems are generally NP-hard, various approximation algorithms were proposed, including greedy algorithms BIBREF20 and LP relaxations BIBREF21 . The problem of finding a hypergraph transversal is conceptually similar to that of finding a summarizing subset of a set of objects modelled as a hypergraph. However, to the best of our knowledge, there has been no attempt to use hypergraph transversals for text summarization in the past. Since it seeks a set of jointly relevant sentences, our method shares some similarities with existing graph-based models that apply global optimization strategies for sentence selection BIBREF9 , BIBREF15 , BIBREF16 . However, our hypergraph better captures topical relationships among sentences than the simple graphs based on lexical similarities between sentences.
Problem statement and system overview
Given a corpus of $N_d$ documents and a user-defined query $q$ , we intend to produce a summary of the documents with the information that is considered both central in the corpus and relevant to the query. Since we limit ourselves to the production of extracts, our task is to extract a set $S$ of relevant sentences from the corpus and to aggregate them to build a summary. Let $N_s$ be the total number of sentences in the corpus. We further split the task into two subtasks: either extracting a set $S$ of sentences of maximal importance whose total length does not exceed a given target length, or extracting a set $S$ of sentences of minimal total length that achieves a given target coverage of the themes of the corpus.
The sentences in the set $S$ are then aggregated to form the final summary. Figure 1 summarizes the steps of our proposed method. After some preprocessing steps, the themes are detected based on a topic detection algorithm which tags each sentence with multiple topics. A theme-based hypergraph is then built with the weight of each theme reflecting both its importance in the corpus and its similarity with the query. Finally, depending on the task at hand, one of two types of hypergraph transversal is generated. If the summary must not exceed a target summary length, then a maximal budgeted hypergraph transversal is generated. If the summary must achieve a target coverage, then a minimal soft hypergraph transversal is generated. Finally the sentences corresponding to the generated transversal are selected for the summary.
Summarization based on hypergraph transversals
In this section, we present the key steps of our algorithm: after some standard preprocessing steps, topics of semantically related terms are detected from which themes grouping topically similar sentences are extracted. A hypergraph is then formed based on the sentence themes and sentences are selected based on the detection of a hypergraph transversal.
Preprocessing and similarity computation
As in the majority of extractive summarization approaches, our model is based on the representation of sentences as vectors. To reduce the size of the vocabulary, we remove stopwords that do not contribute to the meaning of sentences, such as "the" or "a", using a publicly available list of 667 stopwords. The words are also stemmed using the Porter Stemmer BIBREF22 . Let $N_t$ be the resulting number of distinct terms after these two preprocessing steps are performed. We define the inverse sentence frequency $\text{isf}(t)$ BIBREF23 as
$$\text{isf}(t)=\log \left(\frac{N_s}{N_s^t}\right)$$ (Eq. 7)
where $N_s^t$ is the number of sentences containing term $t$ . This weighting scheme yields higher weights for rare terms which are assumed to contribute more to the semantics of sentences BIBREF23 . Sentence $i$ is then represented by a vector $s_i=[\text{tfisf}(i,1),...,\text{tfisf}(i,N_t)]$ where
$$\text{tfisf}(i,t)=\text{tf}(i,t)\text{isf}(t)$$ (Eq. 8)
and $\text{tf}(i,t)$ is the frequency of term $t$ in sentence $i$ . Finally, to denote the similarity between two text fragments $a$ and $b$ (which can be sentences, groups of sentences or the query), we use the cosine similarity between the $\text{tfisf}$ representations of $a$ and $b$ , as suggested in BIBREF2 :
$$\text{sim}(a,b)=\frac{\sum _t \text{tfisf}(a,t)\text{tfisf}(b,t)}{\sqrt{\sum _t\text{tfisf}(a,t)^2}\sqrt{\sum _t\text{tfisf}(b,t)^2}}$$ (Eq. 9)
where $\text{tfisf}(a,t)$ is also defined as the frequency of term $t$ in fragment $a$ multiplied by $\text{isf}(t)$ . This similarity measure will be used in section "Sentence hypergraph construction" to compute the similarity with the query $q$ .
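To make these definitions concrete, the following minimal Python sketch computes tfisf vectors and the cosine similarity of equation 9 ; it assumes the sentences are already tokenized, stopword-free and stemmed, and the helper names are ours rather than part of the original implementation.

import math
from collections import Counter

def tfisf_vectors(sentences):
    # sentences: list of lists of preprocessed terms, one list per sentence
    n_s = len(sentences)
    sent_freq = Counter(t for s in sentences for t in set(s))   # N_s^t per term
    isf = {t: math.log(n_s / f) for t, f in sent_freq.items()}  # isf(t) = log(N_s / N_s^t)
    vectors = []
    for s in sentences:
        tf = Counter(s)
        vectors.append({t: tf[t] * isf[t] for t in tf})         # tfisf(i, t)
    return vectors, isf

def cosine(a, b):
    # cosine similarity between two sparse tfisf vectors stored as dicts (equation 9)
    num = sum(w * b.get(t, 0.0) for t, w in a.items())
    den = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
    return num / den if den > 0 else 0.0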
Sentence theme detection based on topic tagging
As mentioned in section "Introduction" , our hypergraph model is based on the detection of themes. A theme is defined as a group of sentences covering the same topic. Hence, our theme detection algorithm is based on a 3-step approach: the extraction of topics, the process of tagging each sentence with multiple topics and the detection of themes based on topic tags.
A topic is viewed as a set of semantically similar terms, namely terms that refer to the same subject or the same piece of information. In the context of a specific corpus of related documents, a topic can be defined as a set of terms that are likely to occur close to each other in a document BIBREF24 . In order to extract topics, we make use of a clustering approach based on the definition of a semantic dissimilarity between terms. For terms $u$ and $v$ , we first define the joint $\text{isf}$ weight $\text{isf}(u,v)$ as
$$\text{isf}(u,v)=\log \left(\frac{N_s}{N_s^{uv}}\right)$$ (Eq. 11)
where $N_s^{uv}$ is the number of sentences in which both terms $u$ and $v$ occur together. Then, the semantic dissimilarity $d_{\text{sem}}(u,v)$ between the two terms is defined as
$$d_{\text{sem}}(u,v)=\frac{\text{isf}(u,v)-\min (\text{isf}(u),\text{isf}(v))}{\max (\text{isf}(u),\text{isf}(v))}$$ (Eq. 12)
which can be viewed as a special case of the so-called Google distance which was already successfully applied to learn semantic similarities between terms on webpages BIBREF25 . Using concepts from information theory, $\text{isf}(u)$ represents the number of bits required to express the occurrence of term $u$ in a sentence using an optimally efficient code. Then, $\text{isf}(u,v)-\text{isf}(u)$ can be viewed as the number of bits of information in $v$ relative to $u$ . Assuming $\text{isf}(v)\ge \text{isf}(u)$ , $d_{\text{sem}}(u,v)$ can be viewed as the improvement obtained by compressing $v$ using a previously compressed code for $u$ rather than compressing $v$ from scratch BIBREF26 . More details can be found in BIBREF25 . In practice, two terms $u$ and $v$ with a low value of $d_{\text{sem}}(u,v)$ are expected to consistently occur together in the same context, and they are thus considered to be semantically related in the context of the corpus.
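A direct sketch of equations 11 and 12 follows; it reuses the isf weights from the preprocessing step, and the treatment of terms that never co-occur (assigned a large constant dissimilarity here) is a choice of this sketch rather than a detail specified by the model.

import math

def semantic_dissimilarities(sentences, isf, never_cooccur=10.0):
    # sentences: list of sets of terms; isf: dict mapping each term to its isf weight
    n_s = len(sentences)
    terms = sorted(isf)
    d_sem = {}
    for i, u in enumerate(terms):
        for v in terms[i + 1:]:
            n_uv = sum(1 for s in sentences if u in s and v in s)   # N_s^{uv}
            if n_uv == 0:
                d = never_cooccur               # never co-occurring terms: treated as maximally dissimilar
            elif max(isf[u], isf[v]) == 0:
                d = 0.0                         # both terms occur in every sentence
            else:
                isf_uv = math.log(n_s / n_uv)   # joint isf of equation 11
                d = (isf_uv - min(isf[u], isf[v])) / max(isf[u], isf[v])
            d_sem[(u, v)] = d_sem[(v, u)] = d
    return d_sem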
Based on the semantic dissimilarity measure between terms, we define a topic as a group of terms with a high semantic density, namely a group of terms such that each term of the group is semantically related to a sufficiently high number of terms in the group. The DBSCAN algorithm is a method of density-based clustering that achieves this result by iteratively growing cohesive groups of agents, with the condition that each member of a group should contain a sufficient number of other members in an $\epsilon $ -neighborhood around it BIBREF27 . Using the semantic dissimilarity as a distance measure, DBSCAN extracts groups of semantically related terms which are considered as topics. The advantages offered by DBSCAN over other clustering algorithms are threefold. First, DBSCAN is capable of detecting the number of clusters automatically. Second, although the semantic dissimilarity is symmetric and nonnegative, it does not satisfy the triangle inequality. This prevents the use of various clustering algorithms such as the agglomerative clustering with complete linkage BIBREF28 . However, DBSCAN does not explicitly require the triangle inequality to be satisfied. Finally, it is able to detect noisy samples in low density region, that do not belong to any other cluster.
Given a set of pairwise dissimilarity measures, a density threshold $\epsilon $ and a minimum neighborhood size $m$ , DBSCAN returns a number $K$ of clusters and a set of labels $\lbrace c(i)\in \lbrace -1,1,...,K\rbrace :1\le i\le N_t\rbrace $ such that $c(i)=-1$ if term $i$ is considered a noisy term. While it is easy to determine a natural value for $m$ , choosing a value for $\epsilon $ is not straightforward. Hence, we adapt the DBSCAN algorithm to build our topic model, referred to as the Semantic Clustering Of Terms (SEMCOT) algorithm. It iteratively applies DBSCAN and decreases the parameter $\epsilon $ until the size of each cluster does not exceed a predefined value. Algorithm "Sentence theme detection based on topic tagging" summarizes the process. Apart from $m$ , the algorithm also takes parameters $\epsilon _0$ (the initial value of $\epsilon $ ), $M$ (the maximum number of points allowed in a cluster) and $\beta $ (a factor close to 1 by which $\epsilon $ is multiplied until all clusters have sizes lower than $M$ ). Experiments on real-world data suggest suitable empirical values for $m$ , $\epsilon _0$ , $M$ and $\beta $ . Additionally, we observe that, among the terms considered as noisy by DBSCAN, some could be highly infrequent terms with a high $\text{isf}$ value that nevertheless have a strong impact on the meaning of sentences. Hence, we include them as topics consisting of single terms if their $\text{isf}$ value exceeds a threshold $\mu $ whose value is determined by cross-validation, as explained in section "Experiments and evaluation" .
INPUT: Semantic Dissimilarities $\lbrace d_{\text{sem}}(u,v):1\le u,v\le N_t\rbrace $ ,
PARAMETERS: $\epsilon _0$ , $M$ , $m$ , $\beta \le 1$ , $\mu $
OUTPUT: Number $K$ of topics, topic tags $\lbrace c(i):1\le i\le N_t\rbrace $
$\epsilon \leftarrow \epsilon _0$ , $\text{minTerms}\leftarrow m$ , $\text{proceed}\leftarrow \text{True}$
while $\text{proceed}$ :
$[c,K]\leftarrow DBSCAN(d_{\text{sem}},\epsilon ,\text{minTerms})$
if $\underset{1\le k\le K}{\max }(|\lbrace i:c(i)=k\rbrace |)<M$ : $\text{proceed}\leftarrow \text{False}$
else: $\epsilon \leftarrow \beta \epsilon $
for each $t$ s.t. $c(t)=-1$ (noisy terms):
if $\text{isf}(t)\ge \mu $ :
$c(t)\leftarrow K+1$ , $K\leftarrow K+1$
SEMCOT
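The SEMCOT loop can be sketched with scikit-learn's DBSCAN applied to a precomputed dissimilarity matrix; the default parameter values below are illustrative placeholders only, not the empirical values used in the experiments, and cluster labels are 0-based here instead of the 1-based labels of the pseudocode.

import numpy as np
from sklearn.cluster import DBSCAN

def semcot(d_matrix, isf, eps0=1.0, M=50, m=4, beta=0.9, mu=2.0):
    # d_matrix: (N_t, N_t) array of semantic dissimilarities; isf: (N_t,) array of isf weights
    eps = eps0
    while True:
        labels = DBSCAN(eps=eps, min_samples=m, metric="precomputed").fit_predict(d_matrix)
        sizes = np.bincount(labels[labels >= 0]) if (labels >= 0).any() else np.array([0])
        if sizes.max() < M:
            break
        eps *= beta                         # shrink the neighborhood until no cluster exceeds M terms
    k = int(labels.max()) + 1               # number of detected topics; -1 marks noisy terms
    for t in np.where(labels == -1)[0]:     # promote rare but informative noisy terms to singleton topics
        if isf[t] >= mu:
            labels[t] = k
            k += 1
    return labels, k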
Once the topics are obtained based on algorithm "Sentence theme detection based on topic tagging" , a theme is associated to each topic, namely a group of sentences covering the same topic. The sentences are first tagged with multiple topics based on a scoring function. The score of the $l$ -th topic in the $i$ -th sentence is given by
$$\sigma _{il}=\underset{t:c(t)=l}{\sum }\text{tfisf}(i,t)$$ (Eq. 13)
and the sentence is tagged with topic $l$ whenever $\sigma _{il}\ge \delta $ , in which $\delta $ is a parameter whose value is tuned as explained in section "Experiments and evaluation" (ensuring that each sentence is tagged with at least one topic). The scores are intentionally not normalized to avoid tagging short sentences with an excessive number of topics. The $l$ -th theme is then defined as the set of sentences
$$T_l=\lbrace i:\sigma _{il}\ge \delta ,1\le i\le N_s\rbrace .$$ (Eq. 14)
While there exist other summarization models based on the detection of clusters or groups of similar sentence, the novelty of our theme model is twofold. First, each theme is easily interpretable as the set of sentences associated to a specific topic. As such, our themes can be considered as groups of semantically related sentences. Second, it is clear that the themes discovered by our approach do overlap since a single sentence may be tagged with multiple topics. To the best of our knowledge, none of the previous cluster-based summarizers involved overlapping groups of sentences. Our model is thus more realistic since it better captures the multiplicity of the information covered by each sentence.
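The tagging step of equations 13 and 14 then reduces to a few lines; term_topic maps each non-noisy term to its topic label and is assumed to come from the topic detection step above.

from collections import defaultdict

def build_themes(tfisf_vecs, term_topic, k, delta):
    # tfisf_vecs: list of {term: tfisf weight} dicts, one per sentence
    # term_topic: {term: topic label in 0..k-1}; noisy terms are simply absent
    themes = [set() for _ in range(k)]
    for i, vec in enumerate(tfisf_vecs):
        scores = defaultdict(float)          # sigma_{il} of equation 13
        for t, w in vec.items():
            if t in term_topic:
                scores[term_topic[t]] += w
        for l, sigma in scores.items():
            if sigma >= delta:               # tag sentence i with topic l
                themes[l].add(i)
    return themes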
Sentence hypergraph construction
A hypergraph is a generalization of a graph in which the hyperedges may contain any number of nodes, as expressed in definition UID16 BIBREF3 . Our hypergraph model moreover includes both hyperedge and node weights.
Definition 1 (Hypergraph) A node- and hyperedge-weighted hypergraph is defined as a quadruplet $H=(V,E,\phi ,w)$ in which $V$ is a set of nodes, $E\subseteq 2^{V}$ is a set of hyperedges, $\phi \in \mathbb {R}_+^{|V|}$ is a vector of positive node weights and $w\in \mathbb {R}_+^{|E|}$ is a vector of positive hyperedge weights.
For convenience, we will refer to a hypergraph by its weight vectors $\phi $ and $w$ , its hyperedges represented by a set $E\subseteq 2^V$ and its incidence lists $\text{inc}(i)=\lbrace e\in E:i\in e\rbrace $ for each $i\in V$ .
As mentioned in section "Introduction" , our system relies on the definition of a theme-based hypergraph which models groups of semantically related sentences as hyperedges. Hence, compared to traditional graph-based summarizers, the hypergraph is able to capture more complex group relationships between sentences instead of being restricted to pairwise relationships.
In our sentence-based hypergraph, the sentences are the nodes and each theme defines a hyperedge connecting the associated sentences. The weight $\phi _i$ of node $i$ is the length of the $i$ -th sentence, namely:
$$\begin{array}{l} V = \lbrace 1,...,N_s\rbrace \text{ and }\phi _i=L_i\text{, }\text{ }1\le i\le N_s\\ E = \lbrace e_1,...,e_K\rbrace \subseteq 2^V\\ e_l=T_l\text{ i.e. }e_l\in \text{inc}(i)\leftrightarrow i\in T_l \end{array}$$ (Eq. 17)
Finally, the weights of the hyperedges are computed based on the centrality of the associated theme and its similarity with the query:
$$w_l=(1-\lambda )\text{sim}(T_l,D)+\lambda \text{sim}(T_l,q)$$ (Eq. 18)
where $\lambda \in [0,1]$ is a parameter and $D$ represents the entire corpus. $\text{sim}(T_l,D)$ denotes the similarity of the set of sentences in theme $T_l$ with the entire corpus (using the tfisf-based similarity of equation 9 ) which measures the centrality of the theme in the corpus. $\text{sim}(T_l,q)$ refers to the similarity of the theme with the user-defined query $q$ .
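Equations 17 and 18 translate directly into the construction below; sim is assumed to implement the tfisf-based similarity of equation 9 between a group of sentences and another text fragment, and lam stands for the parameter $\lambda $ .

def build_hypergraph(themes, sent_lengths, sim, corpus, query, lam=0.4):
    # nodes are sentence indices, with node weight phi_i equal to the sentence length L_i
    phi = list(sent_lengths)
    # one hyperedge per theme, weighted by corpus centrality and query relevance (equation 18)
    E = [set(T) for T in themes]
    w = [(1 - lam) * sim(T, corpus) + lam * sim(T, query) for T in themes]
    # incidence lists inc(i) = {e : i in e}
    inc = {i: [e for e, T in enumerate(E) if i in T] for i in range(len(phi))}
    return phi, E, w, inc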
Detection of hypergraph transversals for text summarization
The sentences to be included in the query-oriented summary should contain the essential information in the corpus, they should be relevant to the query and, whenever required, they should either not exceed a target length or jointly achieve a target coverage (as mentioned in section "Problem statement and system overview" ). Existing systems of graph-based summarization generally solve the problem by ranking sentences in terms of their individual relevance BIBREF0 , BIBREF2 , BIBREF3 . Then, they extract a set of sentences with a maximal total relevance and pairwise similarities not exceeding a predefined threshold. However, we argue that the joint relevance of a group of sentences is not reflected by the individual relevance of each sentence. And limiting the redundancy of selected sentences as done in BIBREF3 does not guarantee that the sentences jointly cover the relevant themes of the corpus.
Considering each topic as a distinct piece of information in the corpus, an alternative approach is to select the smallest subset of sentences covering each of the topics. The latter condition can be reformulated as ensuring that each theme has at least one of its sentences appearing in the summary. Using our sentence hypergraph representation, this corresponds to the detection of a minimal hypergraph transversal as defined below BIBREF5 .
Definition 2 Given an unweighted hypergraph $H=(V,E)$ , a minimal hypergraph transversal is a subset $S^*\subseteq V$ of nodes satisfying
$$\begin{array}{rcl} S^*&=&\underset{S\subseteq V}{\text{argmin}}|S|\\ && \text{s.t. }\underset{i\in S}{\bigcup }\text{inc}(i)=E \end{array}$$ (Eq. 21)
where $\text{inc}(i)=\lbrace e:i\in e\rbrace $ denotes the set of hyperedges incident to node $i$ .
Figure 2 shows an example of hypergraph and a minimal hypergraph transversal of it (star-shaped nodes). In this case, since the nodes and the hyperedges are unweighted, the minimal transversal is not unique.
The problem of finding a minimal transversal in a hypergraph is NP-hard BIBREF29 . However, greedy algorithms or LP relaxations provide good approximate solutions in practice BIBREF21 . As intended, the definition of transversal includes the notion of joint coverage of the themes by the sentences. However, it neglects node and hyperedge weights and it is unable to identify query-relevant themes. Since both the sentence lengths and the relevance of themes should be taken into account in the summary generation, we introduce two extensions of transversal, namely the minimal soft hypergraph transversal and the maximal budgeted hypergraph transversal. A minimal soft transversal of a hypergraph is obtained by minimizing the total weights of selected nodes while ensuring that the total weight of covered hyperedges exceeds a given threshold.
Definition 3 (minimal soft hypergraph transversal) Given a node and hyperedge weighted hypergraph $H=(V,E,\phi ,w)$ and a parameter $\gamma \in [0,1]$ , a minimal soft hypergraph transversal is a subset $S^*\subseteq V$ of nodes satisfying
$$\begin{array}{rcl} S^*&=&\underset{S\subseteq V}{\text{argmin}}\underset{i\in S}{\sum }\phi _i\\ && \text{s.t. }\underset{e\in \text{inc}(S)}{\sum }w_e\ge \gamma W \end{array}$$ (Eq. 24)
in which $\text{inc}(S)=\underset{i\in S}{\bigcup }\text{inc}(i)$ and $W=\sum _ew_e$ .
The extraction of a minimal soft hypergraph transversal of the sentence hypergraph produces a summary of minimal length achieving a target coverage expressed by parameter $\gamma \in [0,1]$ . As mentioned in section "Problem statement and system overview" , applications of text summarization may also involve a hard constraint on the total summary length $L$ . For that purpose, we introduce the notion of maximal budgeted hypergraph transversal which maximizes the volume of covered hyperedges while not exceeding the target length.
Definition 4 (maximal budgeted hypergraph transversal) Given a node and hyperedge weighted hypergraph $H=(V,E,\phi ,w)$ and a parameter $L>0$ , a maximal budgeted hypergraph transversal is a subset $S^*\subseteq V$ of nodes satisfying
$$\begin{array}{rcl} S^*&=&\underset{S\subseteq V}{\text{argmax}}\underset{e\in \text{inc}(S)}{\sum }w_e\\ && \text{s.t. }\underset{i\in S}{\sum }\phi _i\le L. \end{array}$$ (Eq. 26)
We refer to the function $\underset{e\in \text{inc}(S)}{\sum }w_e$ as the hyperedge coverage of set $S$ . We observe that both weighted transversals defined above include the notion of joint coverage of the hyperedges by the selected nodes. As a result and from the definition of hyperedge weights (equation 18 ), the resulting summary covers themes that are both central in the corpus and relevant to the query. This approach also implies that the resulting summary does not contain redundant sentences covering the exact same themes. As a result, selected sentences are expected to cover different themes and to be semantically diverse. Both the problem of finding a minimal soft transversal and that of finding a maximal budgeted transversal are NP-hard, as proved by theorem UID27 .
Theorem 1 (NP-hardness) The problems of finding a minimal soft hypergraph transversal or a maximal budgeted hypergraph transversal in a weighted hypergraph are NP-hard.
Regarding the minimal soft hypergraph transversal problem, with parameter $\gamma =1$ and unit node weights, the problem is equivalent to the classical set cover problem (definition UID20 ) which is NP-complete BIBREF29 . The maximal budgeted hypergraph transversal problem can be shown to be equivalent to the maximum coverage problem with knapsack constraint which was shown to be NP-complete in BIBREF29 .
Since both problems are NP-hard, we formulate polynomial time algorithms to find approximate solutions to them and we provide the associated approximation factors. The algorithms build on the submodularity and the non-decreasing properties of the hyperedge coverage function, which are defined below.
Definition 5 (Submodular and non-decreasing set functions) Given a finite set $A$ , a function $f:2^{A}\rightarrow \mathbb {R}$ is monotonically non-decreasing if $\forall S\subset A$ and $\forall u\in A\setminus S$ ,
$$f(S\cup \lbrace u\rbrace )\ge f(S)$$ (Eq. 29)
and it is submodular if $\forall S,T$ with $S\subseteq T\subset A$ , and $\forall u\in A\setminus T$ ,
$$f(T\cup \lbrace u\rbrace )-f(T)\le f(S\cup \lbrace u\rbrace )-f(S).$$ (Eq. 30)
Based on definition UID28 , we prove in theorem UID31 that the hyperedge coverage function is submodular and monotonically non-decreasing, which provides the basis of our algorithms.
Theorem 2 Given a hypergraph $H=(V,E,\phi ,w)$ , the hyperedge coverage function $f:2^V\rightarrow \mathbb {R}$ defined by
$$f(S)=\underset{e\in \text{inc}(S)}{\sum }w_e$$ (Eq. 32)
is submodular and monotonically non-decreasing.
The hyperedge coverage function $f$ is clearly monotonically non-decreasing and it is submodular since $\forall S\subseteq T\subset V$ , and $s\in V\setminus T$ ,
$$\begin{array}{l} (f(S\cup \lbrace s\rbrace )-f(S))-(f(T\cup \lbrace s\rbrace )-f(T))\\ =\left[\underset{e\in \text{inc}(S\cup \lbrace s\rbrace )}{\sum }w_e-\underset{e\in \text{inc}(S)}{\sum }w_e\right]-\left[\underset{e\in \text{inc}(T\cup \lbrace s\rbrace )}{\sum }w_e-\underset{e\in \text{inc}(T)}{\sum }w_e\right]\\ = \left[ \underset{e\in \text{inc}(\lbrace s\rbrace )\setminus \text{inc}(S)}{\sum }w_e\right]-\left[ \underset{e\in \text{inc}(\lbrace s\rbrace )\setminus \text{inc}(T)}{\sum }w_e\right]\\ = \underset{e\in (\text{inc}(T)\cap \text{inc}(\lbrace s\rbrace ))\setminus \text{inc}(S)}{\sum }w_e\ge 0 \end{array}$$ (Eq. 33)
where $\text{inc}(R)=\lbrace e:e\cap R\ne \emptyset \rbrace $ for $R\subseteq V$ . The last equality follows from $\text{inc}(S)\subseteq \text{inc}(T)$ and $\text{inc}(\lbrace s\rbrace )\setminus \text{inc}(T)\subseteq \text{inc}(\lbrace s\rbrace )\setminus \text{inc}(S)$ .
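As a quick numerical illustration of Theorem 2, the toy example below evaluates the hyperedge coverage function on a small weighted hypergraph and checks the diminishing-returns inequality for one particular choice of $S\subseteq T$ and $s$ ; the hyperedges and weights are arbitrary.

def coverage(S, E, w):
    # f(S): total weight of hyperedges touched by at least one node of S
    return sum(w[e] for e, members in enumerate(E) if members & S)

E = [{0, 1}, {1, 2}, {2, 3}]        # toy hyperedges over nodes 0..3
w = [1.0, 2.0, 0.5]
S, T, s = {0}, {0, 1}, 2
gain_S = coverage(S | {s}, E, w) - coverage(S, E, w)
gain_T = coverage(T | {s}, E, w) - coverage(T, E, w)
assert gain_T <= gain_S             # submodularity: adding s helps the smaller set at least as much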
Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. Algorithm "Detection of hypergraph transversals for text summarization" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knapsack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm "Detection of hypergraph transversals for text summarization" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. The main difference between algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" is the stopping criterion: in algorithm "Detection of hypergraph transversals for text summarization" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm "Detection of hypergraph transversals for text summarization" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ .
INPUT: Sentence Hypergraph $H=(V,E,\phi ,w)$ , target length $L$ .
OUTPUT: Set $S$ of sentences to be included in the summary.
for each $i\in \lbrace 1,...,N_s\rbrace $ : $r_i\leftarrow \frac{1}{\phi _i}\underset{e\in \text{inc}(i)}{\sum }w_e$
$R\leftarrow \emptyset $ , $Q\leftarrow V$ , $f\leftarrow 0$
while $Q\ne \emptyset $ :
$s^*\leftarrow \underset{i\in Q}{\text{argmax}}\text{ }r_i$ , $Q\leftarrow Q\setminus \lbrace s^*\rbrace $
if $\phi _{s^*}+f\le L$ :
$R\leftarrow R\cup \lbrace s^*\rbrace $ , $f\leftarrow f+\phi _{s^*}$
for each $i\in \lbrace 1,...,N_s\rbrace $ : $r_i\leftarrow r_i-\frac{\underset{e\in \text{inc}(s^*)\cap \text{inc}(i)}{\sum } w_e}{\phi _i}$
Let $G\leftarrow \lbrace \lbrace i\rbrace \text{ : }i\in V,\phi _i\le L\rbrace $
$S\leftarrow \underset{S\in \lbrace R\rbrace \cup G}{\text{argmax}}\text{ }\text{ }\text{ }\underset{e\in \text{inc}(S)}{\sum }w_e$
return $S$
Transversal Summarization with Target Length (TL-TranSum)
INPUT: Sentence Hypergraph $H=(V,E,\phi ,w)$ , parameter $\gamma \in [0,1]$ .
OUTPUT: Set $S$ of sentences to be included in the summary.
for each $i\in \lbrace 1,...,N_s\rbrace $ : $r_i\leftarrow \frac{1}{\phi _i}\underset{e\in \text{inc}(i)}{\sum }w_e$
$S\leftarrow \emptyset $ , $Q\leftarrow V$ , $\tilde{W}\leftarrow 0$ , $W\leftarrow \sum _ew_e$
while $Q\ne \emptyset $ and $\tilde{W}<\gamma W$ :
$s^*\leftarrow \underset{i\in Q}{\text{argmax}}\text{ }r_i$ , $Q\leftarrow Q\setminus \lbrace s^*\rbrace $
$S\leftarrow S\cup \lbrace s^*\rbrace $ , $\tilde{W}\leftarrow \tilde{W}+\phi _{s^*}r_{s^*}$
for each $i\in \lbrace 1,...,N_s\rbrace $ : $r_i\leftarrow r_i-\frac{\underset{e\in \text{inc}(s^*)\cap \text{inc}(i)}{\sum } w_e}{\phi _i}$
return $S$
Transversal Summarization with Target Coverage (TC-TranSum)
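For readers who prefer an executable form, the Python sketch below mirrors both greedy procedures; it recomputes the coverage function from scratch at every step for clarity, whereas the pseudocode above maintains the incremental scores $r_i$ , and the data structures follow the hypergraph construction sketch given earlier.

def coverage(S, inc, w):
    covered = set()
    for i in S:
        covered.update(inc[i])
    return sum(w[e] for e in covered)

def tl_transum(phi, inc, w, L):
    # greedy maximal budgeted transversal: maximize coverage without exceeding total length L
    S, rest = set(), set(range(len(phi)))
    while rest:
        gains = {i: (coverage(S | {i}, inc, w) - coverage(S, inc, w)) / phi[i] for i in rest}
        s = max(gains, key=gains.get)
        rest.discard(s)
        if sum(phi[i] for i in S) + phi[s] <= L:
            S.add(s)
    singles = [{i} for i in range(len(phi)) if phi[i] <= L]   # compare with the best feasible singleton
    return max([S] + singles, key=lambda c: coverage(c, inc, w))

def tc_transum(phi, inc, w, gamma):
    # greedy minimal soft transversal: cover a fraction gamma of the total hyperedge weight
    S, rest, W = set(), set(range(len(phi))), sum(w)
    while rest and coverage(S, inc, w) < gamma * W:
        gains = {i: (coverage(S | {i}, inc, w) - coverage(S, inc, w)) / phi[i] for i in rest}
        s = max(gains, key=gains.get)
        S.add(s)
        rest.discard(s)
    return S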
We next provide theoretical guarantees that support the formulation of algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" as approximation algorithms for our hypergraph transversals. Theorem UID34 provides a constant approximation factor for the output of algorithm "Detection of hypergraph transversals for text summarization" for the detection of maximal budgeted hypergraph transversals. It builds on the submodularity and the non-decreasing property of the hyperedge coverage function.
Theorem 3 Let $S^L$ be the summary produced by our TL-TranSum algorithm "Detection of hypergraph transversals for text summarization" , and $S^*$ be a maximal budgeted transversal associated to the sentence hypergraph, then
$$\underset{e\in \text{inc}(S^L)}{\sum }w_e \ge \frac{1}{2}\left(1-\frac{1}{e}\right)\underset{e\in \text{inc}(S^*)}{\sum }w_e.$$ (Eq. 35)
Since the hyperedge coverage function is submodular and monotonically non-decreasing, the extraction of a maximal budgeted transversal is a problem of maximization of a submodular and monotonically non-decreasing function under a Knapsack constraint, namely
$$\underset{S\subseteq V}{\max }f(S)\text{ s.t. }\underset{i\in S}{\sum }\phi _i\le L$$ (Eq. 36)
where $f(S)=\underset{e\in \text{inc}(S)}{\sum }w_e$ . Hence, by theorem 2 in BIBREF30 , the algorithm forming a transversal $S^F$ by iteratively growing a set $S_t$ of sentences according to
$$S_{t+1}=S_t\cup \left\lbrace \underset{s\in V\setminus S_t}{\text{argmax}}\left\lbrace \frac{f(S\cup \lbrace s\rbrace )-f(S)}{\phi _s}, \phi _s+\underset{i\in S_t}{\sum }\phi _i\le L\right\rbrace \right\rbrace $$ (Eq. 37)
produces a final summary $S^F$ satisfying
$$f(S^F)\ge f(S^*)\frac{1}{2}\left(1-\frac{1}{e}\right).$$ (Eq. 38)
As algorithm "Detection of hypergraph transversals for text summarization" implements the iterations expressed by equation 37 , it achieves a constant approximation factor of $\frac{1}{2}\left(1-\frac{1}{e}\right)$ .
Similarly, theorem UID39 provides a data-dependent approximation factor for the output of algorithm "Detection of hypergraph transversals for text summarization" for the detection of minimal soft hypergraph transversals. It also builds on the submodularity and the non-decreasing property of the hyperedge coverage function.
Theorem 4 Let $S^P$ be the summary produced by our TC-TranSum algorithm "Detection of hypergraph transversals for text summarization" and let $S^*$ be a minimal soft hypergraph transversal, then
$$\underset{i\in S^P}{\sum }\phi _i\le \underset{i\in S^*}{\sum }\phi _i \left(1+\log \left(\frac{\gamma W}{\gamma W-\underset{e\in \text{inc}(S^{T-1})}{\sum }w_e}\right)\right)$$ (Eq. 40)
where $S_1,...,S_T$ represent the consecutive sets of sentences produced by algorithm "Detection of hypergraph transversals for text summarization" .
Consider the function $g(S)=\min (\gamma W,\underset{e\in \text{inc}(S)}{\sum }w_e)$ . Then the problem of finding a minimal soft hypergraph transversal can be reformulated as
$$S^*=\underset{S\subseteq V}{\text{argmin}} \underset{s\in S}{\sum }\phi _s\text{ s.t. }g(S)\ge g(V)$$ (Eq. 41)
As $g$ is submodular and monotonically non-decreasing, theorem 1 in BIBREF20 shows that the summary $S^G$ produced by iteratively growing a set $S_t$ of sentences such that
$$S_{t+1}=S_t\cup \left\lbrace \underset{s\in V\setminus S_t}{\text{argmax}}\left\lbrace \frac{f(S\cup \lbrace s\rbrace )-f(S)}{\phi _s}\right\rbrace \right\rbrace $$ (Eq. 42)
produces a summary $S^G$ satisfying
$$\underset{i\in S^G}{\sum }\phi _i\le \underset{i\in S^*}{\sum }\phi _i \left(1+\log \left(\frac{g(V)}{g(V)-g(S^{T-1})}\right)\right).$$ (Eq. 43)
which can be rewritten as
$$\underset{i\in S^G}{\sum }\phi _i\le \underset{i\in S^*}{\sum }\phi _i \left(1+\log \left(\frac{\gamma W}{\gamma W-\underset{e\in \text{inc}(S^{T-1})}{\sum }w_e}\right)\right).$$ (Eq. 44)
As algorithm "Detection of hypergraph transversals for text summarization" implements the iterations expressed by equation 42 , the summary $S^S$ produced by our algorithm "Detection of hypergraph transversals for text summarization" satisfies the same inequality.
In practice, the result of theorem UID39 suggests that the quality of the output depends on the relative increase in the hyperedge coverage induced by the last sentence to be appended to the summary. In particular, if each sentence that is appended to the summary in the iterations of algorithm "Detection of hypergraph transversals for text summarization" covers a sufficient number of new themes that are not covered already by the summary, the approximation factor is low.
Complexity analysis
We analyse the worst case time complexity of each step of our method. The time complexity of the DBSCAN algorithm BIBREF27 is $O(N_t\log (N_t))$ . Hence, the theme detection algorithm "Sentence theme detection based on topic tagging" takes $O(N_cN_t\log (N_t))$ steps, where $N_c$ is the number of iterations of algorithm "Sentence theme detection based on topic tagging" which is generally low compared to the number of terms. The time complexity for the hypergraph construction is $O(K(N_s+N_t))$ where $K$ is the number of topics, or $O(N_t^2)$ if $N_t\ge N_s$ . The time complexities of the sentence selection algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" are bounded by $O(N_sKC^{\max }L^{\max })$ where $C^{\max }$ is the number of sentences in the largest theme and $L^{\max }$ is the length of the longest sentence. Combining these steps, the overall time complexity of the method is $O(N_cN_t\log (N_t)+K(N_s+N_t)+N_sKC^{\max }L^{\max })$ in the worst case. Hence the method is essentially equivalent to early graph-based models for text summarization in terms of computational burden, such as the LexRank-based systems BIBREF0 , BIBREF2 or greedy approaches based on global optimization BIBREF17 , BIBREF15 , BIBREF16 . However, it is computationally more efficient than traditional hypergraph-based summarizers such as the one in BIBREF4 which involves a Markov Chain Monte Carlo inference for its topic model or the one in BIBREF3 which is based on an iterative computation of scores involving costly matrix multiplications at each step.
Experiments and evaluation
We present experimental results obtained with a Python implementation of algorithms "Detection of hypergraph transversals for text summarization" and "Detection of hypergraph transversals for text summarization" on a standard computer with a $2.5GHz$ processor and 8GB of memory.
Dataset and metrics for evaluation
We test our algorithms on the DUC2005 BIBREF32 , DUC2006 BIBREF33 and DUC2007 BIBREF34 datasets which were produced by the Document Understanding Conference (DUC) and are widely used as benchmark datasets for the evaluation of query-oriented summarizers. The datasets consist respectively of 50, 50 and 45 corpora, each consisting of 25 documents of approximately 1000 words, on average. A query is associated with each corpus. For evaluation purposes, each corpus is associated with a set of query-relevant summaries written by humans called reference summaries. In each of our experiments, a candidate summary is produced for each corpus by one of our algorithms and it is compared with the reference summaries using the metrics described below. Moreover, in experiments involving algorithm "Detection of hypergraph transversals for text summarization" , the target summary length is set to 250 words as required in DUC evaluations.
In order to evaluate the similarity of a candidate summary with a set of reference summaries, we make use of the ROUGE toolkit of BIBREF35 , and more specifically of ROUGE-2 and ROUGE-SU4 metrics, which were adopted by DUC for summary evaluation. ROUGE-2 measures the number of bigrams found both in the candidate summary and the set of reference summaries. ROUGE-SU4 extends this approach by counting the number of unigrams and the number of 4-skip-bigrams appearing in the candidate and the reference summaries, where a 4-skip-bigram is a pair of words that are separated by no more than 4 words in a text. We refer to ROUGE toolkit BIBREF35 for more details about the evaluation metrics. ROUGE-2 and ROUGE-SU4 metrics are computed following the same setting as in DUC evaluations, namely with word stemming and jackknife resampling but without stopword removal.
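As a simplified illustration of what ROUGE-2 measures (the official toolkit additionally applies word stemming and jackknife resampling, which are omitted here), a clipped bigram recall against a set of reference summaries can be computed as follows.

from collections import Counter

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def rouge2_recall(candidate, references):
    # candidate: list of tokens; references: list of token lists
    cand = bigrams(candidate)
    matched = total = 0
    for ref in references:
        ref_bg = bigrams(ref)
        matched += sum(min(count, cand[b]) for b, count in ref_bg.items())
        total += sum(ref_bg.values())
    return matched / total if total else 0.0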
Parameter tuning
Besides the parameters of SEMCOT algorithm for which empirical values were given in section "Sentence theme detection based on topic tagging" , there are three parameters of our system that need to be tuned: parameters $\mu $ (threshold on isf value to include a noisy term as a single topic in SEMCOT), $\delta $ (threshold on the topic score for tagging a sentence with a given topic) and $\lambda $ (balance between the query relevance and the centrality in hyperedge weights). The values of all three parameters are determined by an alternating maximization strategy of ROUGE-SU4 score in which the values of two parameters are fixed and the value of the third parameter is tuned to maximize the ROUGE-SU4 score produced by algorithm "Detection of hypergraph transversals for text summarization" with a target summary length of 250 words, in an iterative fashion. The ROUGE-SU4 scores are evaluated by cross-validation using a leave-one-out process on a validation dataset consisting of $70\%$ of DUC2007 dataset, which yields $\mu =1.98$ , $\delta =0.85$ and $\lambda =0.4$ .
Additionally, we display the evolution of ROUGE-SU4 and ROUGE-2 scores as a function of $\delta $ and $\lambda $ . For parameter $\delta $ , we observe in graphs UID49 and UID50 that the quality of the summary is low for $\delta $ close to 0 since it encourages our theme detection algorithm to tag the sentences with irrelevant topics with low associated tfisf values. In contrast, when $\delta $ exceeds $0.9$ , some relevant topics are overlooked and the quality of the summaries drops severely. Regarding parameter $\lambda $ , we observe in graphs UID52 and UID53 that $\lambda =0.4$ yields the highest score since it combines both the relevance of themes to the query and their centrality within the corpus for the computation of hyperedge weights. In contrast, with $\lambda =1$ , the algorithm focuses on the lexical similarity of themes with the query but it neglects the prominence of each theme.
Testing the TC-TranSum algorithm
In order to test our soft transversal-based summarizer, we display the evolution of the summary length and the ROUGE-SU4 score as a function of parameter $\gamma $ of algorithm "Detection of hypergraph transversals for text summarization" . In figure UID57 , we observe that the summary length grows linearly with the value of parameter $\gamma $ which confirms that our system does not favor longer sentences for low values of $\gamma $ . The ROUGE-SU4 curve of figure UID56 has a concave shape, with a low score when $\gamma $ is close to 0 (due to a poor recall) or when $\gamma $ is close to 1 (due to a poor precision). The overall concave shape of the ROUGE-SU4 curve also demonstrates the efficiency of our TC-TranSum algorithm: based on our hyperedge weighting scheme and our hyperedge coverage function, highly relevant sentences inducing a significant increase in the ROUGE-SU4 score are identified and included first in the summary.
In the subsequent experiments, we focus on TL-TranSum algorithm "Detection of hypergraph transversals for text summarization" which includes a target summary length and can thus be compared with other summarization systems which generally include a length constraint.
Testing the hypergraph structure
To justify our theme-based hypergraph definition, we test other hypergraph models. We only change the hyperedge model which determines the kind of relationship between sentences that is captured by the hypergraph. The sentence selection is performed by applying algorithm "Detection of hypergraph transversals for text summarization" to the resulting hypergraph. We test three alternative hyperedge models. First a model based on agglomerative clustering instead of SEMCOT: the same definition of semantic dissimilarity (equation 12 ) is used, then topics are detected as clusters of terms obtained by agglomerative clustering with single linkage with the semantic dissimilarity as a distance measure. The themes are detected and the hypergraph is constructed in the same way as in our model. Second, Overlap model defines hyperedges as overlapping clusters of sentences obtained by applying an algorithm of overlapping cluster detection BIBREF36 and using the cosine distance between tfisf representations of sentences as a distance metric. Finally, we test a hypergraph model already proposed in HyperSum system by BIBREF3 which combines pairwise hyperedges joining any two sentences having terms in common and hyperedges formed by non-overlapping clusters of sentences obtained by DBSCAN algorithm. Table 1 displays the ROUGE-2 and ROUGE-SU4 scores and the corresponding $95\%$ confidence intervals for each model. We observe that our model outperforms both HyperSum and Overlap models by at least $4\%$ and $15\%$ of ROUGE-SU4 score, respectively, which confirms that a two-step process extracting consistent topics first and then defining theme-based hyperedges from topic tags outperforms approaches based on sentence clustering, even when these clusters do overlap. Our model also outperforms the Agglomerative model by $10\%$ of ROUGE-SU4 score, due to its ability to identify noisy terms and to detect the number of topics automatically.
Comparison with related systems
We compare the performance of our TL-TranSum algorithm "Detection of hypergraph transversals for text summarization" with that of five related summarization systems. Topic-sensitive LexRank of BIBREF2 (TS-LexRank) and HITS algorithms of BIBREF1 are early graph-based summarizers. TS-LexRank builds a sentence graph based on term co-occurrences in sentences, and it applies a query-biased PageRank algorithm for sentence scoring. HITS method additionally extracts clusters of sentences and it applies the hubs and authorities algorithm for sentence scoring, with the sentences as authorities and the clusters as hubs. As suggested in BIBREF3 , in order to extract query relevant sentences, only the top $5\%$ of sentences that are most relevant to the query are considered. HyperSum extends early graph-based summarizers by defining a cluster-based hypergraph with the sentences as nodes and hyperedges as sentence clusters, as described in section "Testing the hypergraph structure" . The sentences are then scored using an iterative label propagation algorithm over the hypergraph, starting with the lexical similarity of each sentence with the query as initial labels. In all three methods, the sentences with highest scores and pairwise lexical similarity not exceeding a threshold are included in the summary. Finally, we test two methods that also build on the theory of submodular functions. First, the MaxCover approach BIBREF9 seeks a summary by maximizing the number of distinct relevant terms appearing in the summary while not exceeding the target summary length (using equation 18 to compute the term relevance scores). While the objective function of the method is similar to that of the problem of finding a maximal budgeted hypergraph transversal (equation 26 ) of BIBREF16 , they overlook the semantic similarities between terms which are captured by our SEMCOT algorithm and our hypergraph model. Similarly, the Maximal Relevance Minimal Redundancy (MRMR) first computes relevance scores of sentences as in equation 18 , then it seeks a summary with a maximal total relevance score and a minimal redundancy while not exceeding the target summary length. The problem is solved by an iterative algorithm building on the submodularity and non-decreasing property of the objective function.
Table 2 displays the ROUGE-2 and ROUGE-SU4 scores with the corresponding $95\%$ confidence intervals for all six systems, including our TL-TranSum method. We observe that our system outperforms other graph and hypergraph-based summarizers involving the computation of individual sentence scores: LexRank by $6\%$ , HITS by $13\%$ and HyperSum by $6\%$ of ROUGE-SU4 score; which confirms both the relevance of our theme-based hypergraph model and the capacity of our transversal-based summarizer to identify jointly relevant sentences as opposed to methods based on the computation of individual sentence scores. Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ( $5\%$ ) and MRMR ( $7\%$ ). These methods are also based on a submodular and non-decreasing function expressing the information coverage of the summary, but they are limited to lexical similarities between sentences and fail to detect topics and themes to measure the information coverage of the summary.
Comparison with DUC systems
As a final experiment, we compare our TL-TranSum approach to other summarizers presented at DUC contests. Table 3 displays the ROUGE-2 and ROUGE-SU4 scores for the worst summary produced by a human, for the top four systems submitted for the contests, for the baseline proposed by NIST (a summary consisting of the leading sentences of randomly selected documents) and the average score of all methods submitted, respectively for DUC2005, DUC2006 and DUC2007 contests. Regarding DUC2007, our method outperforms the best system by $2\%$ and the average ROUGE-SU4 score by $21\%$ . It also performs significantly better than the baseline of NIST. However, it is outperformed by the human summarizer since our system produces extracts, while humans naturally reformulate the original sentences to compress their content and produce more informative summaries. Tests on the DUC2006 dataset lead to similar conclusions, with our TL-TranSum algorithm outperforming the best other system and the average ROUGE-SU4 score by $2\%$ and $22\%$ , respectively. On the DUC2005 dataset, however, our TL-TranSum method is outperformed by the best system, which is due to the use of advanced NLP techniques (such as sentence trimming BIBREF37 ) which tend to increase the ROUGE-SU4 score. Nevertheless, the ROUGE-SU4 score produced by our TL-TranSum algorithm is still $15\%$ higher than the average score for the DUC2005 contest.
Conclusion
In this paper, a new hypergraph-based summarization model was proposed, in which the nodes are the sentences of the corpus and the hyperedges are themes grouping sentences covering the same topics. Going beyond existing methods based on simple graphs and pairwise lexical similarities, our hypergraph model captures groups of semantically related sentences. Moreover, two new methods of sentence selection based on the detection of hypergraph transversals were proposed: one to generate summaries of minimal length achieving a target coverage, and the other to generate a summary achieving a maximal coverage of relevant themes while not exceeding a target length. The approach generates informative summaries by extracting a subset of sentences jointly covering the relevant themes of the corpus. Experiments on a real-world dataset demonstrate the effectiveness of the approach. The hypergraph model itself is shown to produce more accurate summaries than other models based on term or sentence clustering. The overall system also outperforms related graph- or hypergraph-based approaches by at least $10\%$ of ROUGE-SU4 score.
As a future research direction, we may analyse the performance of other algorithms for the detection of hypergraph transversals, such as methods based on LP relaxations. We may also further extend our topic model to take the polysemy of terms into account: since each term may carry multiple meanings, a given term could refer to different topics depending on its context. Finally, we intend to adapt our model for solving related problems, such as community question answering. | Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ( $5\%$ ) and MRMR ( $7\%$ ) |
31ee92e521be110b6a5a8d08cc9e6f90a3a97aae | 31ee92e521be110b6a5a8d08cc9e6f90a3a97aae_0 | Q: Does the paper discuss previous models which have been applied to the same task?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in infancy from the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
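The construction of the morally irrelevant seed set can be sketched in a few lines; taking the midpoint of the valence scale as the neutral point is an assumption of this sketch rather than a detail stated above.

def neutral_seed_words(valence, mfd_words, n, scale_midpoint=5.0):
    # valence: {word: mean valence rating}; mfd_words: set of MFD seed words
    candidates = [w for w in valence if w not in mfd_words]
    candidates.sort(key=lambda w: abs(valence[w] - scale_midpoint))
    return candidates[:n]    # the n most valence-neutral words outside the MFD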
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
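As a minimal sketch of how two of these classifiers could be realized (the exact formulations are those of Table TABREF2; all names below are ours, and `seed_sets` is assumed to map each class label to a list of seed-word embedding vectors):

```python
import numpy as np

def centroid_model(query_vec, seed_sets):
    """Centroid model: summarize each class by the mean of its seed embeddings and
    score classes with a softmax over negative Euclidean distances to the query."""
    query_vec = np.asarray(query_vec)
    classes = list(seed_sets)
    centroids = np.stack([np.mean(seed_sets[c], axis=0) for c in classes])
    logits = -np.linalg.norm(centroids - query_vec, axis=1)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(classes, probs))

def knn_model(query_vec, seed_sets, k=5):
    """kNN model: majority vote among the k seed words closest to the query."""
    query_vec = np.asarray(query_vec)
    labelled = [(c, np.linalg.norm(np.asarray(v) - query_vec))
                for c, vecs in seed_sets.items() for v in vecs]
    labelled.sort(key=lambda cd: cd[1])
    votes = [c for c, _ in labelled[:k]]
    return max(set(votes), key=votes.count)
```

The Naïve Bayes and KDE variants would differ only in how each class's seed set is turned into a likelihood for the query vector.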
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from English-language books (Google Books, all genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
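The rotational alignment can be illustrated with the orthogonal Procrustes solution, as sketched below; the exact pipeline follows BIBREF30, and the function and variable names here are our own.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_to_reference(emb, ref, vocab_emb, vocab_ref):
    """Rotate one decade's embedding matrix onto a reference decade's space.

    emb, ref: (|V|, d) embedding matrices for the two decades
    vocab_emb, vocab_ref: dicts mapping word -> row index in each matrix
    """
    shared = sorted(set(vocab_emb) & set(vocab_ref))
    A = np.stack([emb[vocab_emb[w]] for w in shared])
    B = np.stack([ref[vocab_ref[w]] for w in shared])
    R, _ = orthogonal_procrustes(A, B)  # orthogonal R minimizing ||AR - B||_F
    return emb @ R                      # all rows of emb, expressed in the reference space
```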
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
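A compact sketch of this leave-one-out procedure, reusing the illustrative `centroid_model` function from the earlier sketch (names are ours):

```python
def leave_one_out_accuracy(seed_sets, classify=centroid_model):
    """Hold out each seed word in turn and test whether it is re-assigned to its own class."""
    correct, total = 0, 0
    for c, vecs in seed_sets.items():
        for i, held_out in enumerate(vecs):
            train = {cls: [v for j, v in enumerate(vs) if not (cls == c and j == i)]
                     for cls, vs in seed_sets.items()}
            probs = classify(held_out, train)
            correct += int(max(probs, key=probs.get) == c)
            total += 1
    return correct / total
```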
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $p(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable", “unacceptable", and “not a moral issue".
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with the highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,20$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
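An illustrative sketch of this retrieval step is given below; `relevance_by_decade` is a hypothetical helper assumed to return the per-decade scores $R_1, \ldots , R_{20}$ for a word, and all other names are ours.

```python
import numpy as np

def moral_relevance_slope(scores):
    """Fit R ~ T over the decades and return the slope as the rate of change."""
    decades = np.arange(1, len(scores) + 1)
    slope, _intercept = np.polyfit(decades, scores, deg=1)
    return slope

def retrieve_changing_words(words, relevance_by_decade, min_avg_relevance=0.5):
    """Rank words by fitted slope, keeping only words that are morally relevant on average."""
    results = []
    for w in words:
        scores = relevance_by_decade(w)           # R_1, ..., R_20 for word w
        if np.mean(scores) >= min_avg_relevance:  # drop words of low average relevance
            results.append((w, moral_relevance_slope(scores)))
    return sorted(results, key=lambda ws: ws[1], reverse=True)
```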
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model:

$\rho (w) = \beta _f f(w) + \beta _l l(w) + \beta _c c(w) + \beta _0 + \epsilon $

Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.

Table TABREF27 shows the results of the multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under a partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
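The permutation analysis can be sketched as follows; this is an illustration under our own naming that uses ordinary least squares, with the exact procedure described in the Supplementary Material.

```python
import numpy as np

def permutation_coefficients(scores_by_word, predictors, n_permutations=1000, seed=0):
    """Re-fit the regression on slopes computed from decade-shuffled time courses.

    scores_by_word: (n_words, 20) array of per-decade moral relevance scores
    predictors:     (n_words, 3) array of [frequency, length, concreteness] per word
    Returns an (n_permutations, 4) array of fitted weights (3 factors + intercept).
    """
    rng = np.random.default_rng(seed)
    decades = np.arange(1, scores_by_word.shape[1] + 1)
    X = np.column_stack([predictors, np.ones(len(predictors))])  # add intercept column
    coefficients = []
    for _ in range(n_permutations):
        order = rng.permutation(scores_by_word.shape[1])          # shuffle the decades
        slopes = np.polyfit(decades, scores_by_word[:, order].T, deg=1)[0]
        beta, *_ = np.linalg.lstsq(X, slopes, rcond=None)
        coefficients.append(beta)
    return np.asarray(coefficients)
```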
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | Yes |
737397f66751624bcf4ef891a10b29cfc46b0520 | 737397f66751624bcf4ef891a10b29cfc46b0520_0 | Q: Which datasets are used in the paper?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has thus far been considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in its infancy in the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with the most neutral valence ratings that do not occur in the MFD as our set of morally irrelevant seed words, yielding an equal total number of morally relevant and morally irrelevant words.
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from English-language books (Google Books, all genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $p(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable", “unacceptable", and “not a moral issue".
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with the highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,20$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model:

$\rho (w) = \beta _f f(w) + \beta _l l(w) + \beta _c c(w) + \beta _0 + \epsilon $

Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.

Table TABREF27 shows the results of the multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under a partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | Google N-grams
COHA
Moral Foundations Dictionary (MFD)
|
87cb19e453cf7e248f24b5f7d1ff9f02d87fc261 | 87cb19e453cf7e248f24b5f7d1ff9f02d87fc261_0 | Q: How does the parameter-free model work?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has thus far been considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in its infancy in the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with the most neutral valence ratings that do not occur in the MFD as our set of morally irrelevant seed words, yielding an equal total number of morally relevant and morally irrelevant words.
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from English-language books (Google Books, all genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $p(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable", “unacceptable", and “not a moral issue".
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with the highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,20$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model:

$\rho (w) = \beta _f f(w) + \beta _l l(w) + \beta _c c(w) + \beta _0 + \epsilon $

Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.

Table TABREF27 shows the results of the multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under a partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;, A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class; |
5fb6a21d10adf4e81482bb5c1ec1787dc9de260d | 5fb6a21d10adf4e81482bb5c1ec1787dc9de260d_0 | Q: How do they quantify moral relevance?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has thus far been considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in its infancy in the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
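As a concrete reading of the Centroid model, a short sketch follows; the function name, the NumPy data layout, and the unit temperature in the softmax are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def centroid_posterior(query_vec, seed_sets):
    """seed_sets maps each class label to an array of seed-word vectors (n_seeds, dim)."""
    labels = list(seed_sets)
    centroids = np.stack([seed_sets[c].mean(axis=0) for c in labels])  # expected vector per class
    dists = np.linalg.norm(centroids - query_vec, axis=1)              # Euclidean distance to query
    scores = np.exp(-dists)                                            # softmax over negative distances
    return dict(zip(labels, scores / scores.sum()))                    # p(c | q) for each class
```

At the polarity tier, for instance, `seed_sets` would hold the positive and negative seed-word vectors, and the returned dictionary gives $p(c_+\,|\,\mathbf {q})$ and $p(c_-\,|\,\mathbf {q})$.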
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
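The rotational alignment referred to above is typically solved as an orthogonal Procrustes problem; the sketch below, with assumed variable names, rotates one decade's embedding matrix onto a base decade using rows shared by both vocabularies.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_to_base(W_t, W_base, shared_rows):
    """Rotate decade-t embeddings onto the base space; shared_rows indexes words present in both."""
    R, _ = orthogonal_procrustes(W_t[shared_rows], W_base[shared_rows])
    return W_t @ R  # an orthogonal rotation preserves distances within the decade
```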
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
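A sketch of this leave-one-out protocol is given below; `classify` stands for any of the four models (e.g. the centroid scorer sketched above, returning class probabilities), and all names are assumptions.

```python
import numpy as np

def leave_one_out_accuracy(seed_sets, classify):
    correct, total = 0, 0
    for label, vectors in seed_sets.items():
        for i in range(len(vectors)):
            reduced = dict(seed_sets)
            reduced[label] = np.delete(vectors, i, axis=0)  # hold out one seed word
            probs = classify(vectors[i], reduced)
            correct += max(probs, key=probs.get) == label   # predicted class vs. true class
            total += 1
    return correct / total
```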
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
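A minimal sketch of this comparison, assuming lookups for the valence norms, the word embeddings, and the polarity seed sets (the "+" key and all names are illustrative):

```python
from scipy.stats import pearsonr

def valence_agreement(valence, embedding, polarity_seeds, classify):
    ratings, preds = [], []
    for word, rating in valence.items():
        if word in embedding:
            ratings.append(rating)
            preds.append(classify(embedding[word], polarity_seeds)["+"])  # P(c_+ | q)
    return pearsonr(ratings, preds)  # (correlation coefficient, p-value)
```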
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable”, “unacceptable”, and “not a moral issue”.
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
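A sketch of this retrieval step, assuming a hypothetical helper `decadal_relevance(word)` that returns the 20 scores $R_1, \ldots , R_{20}$ from the model:

```python
import numpy as np

def rank_by_relevance_change(words, decadal_relevance, top_k=10):
    t = np.arange(1, 21)
    ranked = []
    for w in words:
        R = np.asarray(decadal_relevance(w))
        if R.mean() < 0.5:                   # keep only morally relevant retrievals
            continue
        slope = np.polyfit(t, R, 1)[0]       # fitted slope = measure of relevance change
        ranked.append((slope, w))
    return sorted(ranked, reverse=True)[:top_k]
```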
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model:
Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.
Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence |
542a87f856cb2c934072bacaa495f3c2645f93be | 542a87f856cb2c934072bacaa495f3c2645f93be_0 | Q: Which fine-grained moral dimension examples do they showcase?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has thus far been considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in its infancy in the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
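One way this selection could be implemented is sketched below; the neutral midpoint of 5.0 (for ratings on a 1–9 scale) and the variable names are assumptions.

```python
def neutral_seed_words(valence_ratings, mfd_words, n_needed, midpoint=5.0):
    # rank non-MFD words by how far their valence rating lies from the neutral midpoint
    candidates = sorted((abs(v - midpoint), w)
                        for w, v in valence_ratings.items() if w not in mfd_words)
    return [w for _, w in candidates[:n_needed]]  # most neutral words first
```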
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
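To make the two density-based variants concrete, the sketches below follow the descriptions above; the Gaussian kernel for KDE and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from collections import Counter

def knn_predict(query_vec, seed_sets, k=5):
    pairs = [(np.linalg.norm(v - query_vec), label)
             for label, vecs in seed_sets.items() for v in vecs]
    nearest = [label for _, label in sorted(pairs, key=lambda p: p[0])[:k]]
    return Counter(nearest).most_common(1)[0][0]  # majority vote among the k closest seeds

def kde_posterior(query_vec, seed_sets, h=0.5):
    scores = {}
    for label, vecs in seed_sets.items():
        sq_dists = np.sum((vecs - query_vec) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-sq_dists / (2 * h ** 2)))  # kernel density per class
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}
```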
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable”, “unacceptable”, and “not a moral issue”.
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model:
Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.
Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation |
4fcc668eb3a042f60c4ce2e7d008e7923b25b4fc | 4fcc668eb3a042f60c4ce2e7d008e7923b25b4fc_0 | Q: Which dataset sources do they use to demonstrate moral sentiment through history?
Text: Moral sentiment change and language
People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.
The topic of moral sentiment has thus far been considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in its infancy in the natural language processing (NLP) community (see overview in Section SECREF2).
We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.
Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.
The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology.
Emerging NLP research on morality
An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.
While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society.
A three-tier modelling framework
Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.
We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories.
A three-tier modelling framework ::: Lexical data for moral sentiment
To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.
To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.
A three-tier modelling framework ::: Models
We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.
The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\mathbf {S}_0$ and $\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\mathbf {S}_+$ and $\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\mathbf {S}_1, \ldots , \mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT. Then our general problem is to estimate $p(c\,|\,\mathbf {q})$, where $\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.
We evaluate the following four models:
A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;
A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;
A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;
A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.
Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.
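As a concrete sketch of the Naïve Bayes variant (a uniform class prior and all names are assumptions), each class is fit with a diagonal-covariance Gaussian over its seed vectors and the query is scored by log-likelihood:

```python
import numpy as np

def naive_bayes_posterior(query_vec, seed_sets, eps=1e-6):
    log_scores = {}
    for label, vecs in seed_sets.items():
        mu = vecs.mean(axis=0)
        var = vecs.var(axis=0) + eps  # diagonal covariance, smoothed for stability
        log_scores[label] = -0.5 * np.sum(np.log(2 * np.pi * var)
                                          + (query_vec - mu) ** 2 / var)
    m = max(log_scores.values())
    weights = {c: np.exp(s - m) for c, s in log_scores.items()}  # numerically stable softmax
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}
```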
Historical corpus data
To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.
Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.
We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:
Google N-grams BIBREF31: a corpus of $8.5 \times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.
COHA BIBREF32: a smaller corpus of $4.1 \times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009.
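The rotational alignment step can be sketched as follows, assuming the per-decade embedding matrices share a common vocabulary and row order; the sequential decade-to-decade alignment shown here is one possible choice and may differ from the exact procedure behind the released embeddings.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def align_decades(embeddings):
    """Rotate each decade's (V x d) embedding matrix onto the previous
    decade's space so that word vectors are comparable across time."""
    aligned = [embeddings[0]]
    for W in embeddings[1:]:
        R, _ = orthogonal_procrustes(W, aligned[-1])  # rotation with W @ R close to previous
        aligned.append(W @ R)
    return aligned
```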
Model evaluations
We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments.
Model evaluations ::: Moral sentiment inference of seed words
In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.
Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.
In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification.
Model evaluations ::: Alignment with human valence ratings
We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\,|\,\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.
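As a sketch of this comparison (the correlation statistic is not restated here, so Pearson's $r$ is assumed, and `positive_polarity` stands in for the model's $P(c_+\,|\,\mathbf {q})$ predictions):

```python
from scipy.stats import pearsonr

def valence_agreement(valence_ratings, positive_polarity):
    """Both arguments map word -> score; returns the Pearson correlation
    computed over the shared vocabulary."""
    shared = sorted(set(valence_ratings) & set(positive_polarity))
    r, p = pearsonr([valence_ratings[w] for w in shared],
                    [positive_polarity[w] for w in shared])
    return r, p, len(shared)
```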
In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations.
Applications to diachronic morality
We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts.
Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.
We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.
We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text.
Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.
We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable", “unacceptable", and “not a moral issue".
We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\,|\,\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\,|\,\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.
Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics.
Applications to diachronic morality ::: Retrieval of morally changing concepts
Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.
We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\,|\,\mathbf {q}), i=1,\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.
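The retrieval step can be sketched as below, assuming each word comes with an array of 20 decade-level relevance scores; the function and variable names are illustrative rather than taken from our implementation.

```python
import numpy as np

def relevance_change_slopes(scores_by_word, min_mean_relevance=0.5):
    """scores_by_word: dict mapping word -> length-20 array of p(relevant)
    per decade. Returns words sorted by the fitted slope over time."""
    decades = np.arange(20)
    slopes = {}
    for word, scores in scores_by_word.items():
        scores = np.asarray(scores, dtype=float)
        if scores.mean() < min_mean_relevance:  # keep only morally relevant words
            continue
        slope, _intercept = np.polyfit(decades, scores, deg=1)
        slopes[word] = slope
    return sorted(slopes.items(), key=lambda kv: kv[1], reverse=True)
```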
Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale.
Applications to diachronic morality ::: Broad-scale investigation of moral change
In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.
We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.
We performed a multiple linear regression under the following model: $\rho (w) = \beta _f f(w) + \beta _l l(w) + \beta _c c(w) + \beta _0 + \epsilon $
Here $\rho (w)$ is the slope of moral relevance change for word $w$; $f(w)$ is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\beta _f$, $\beta _l$, $\beta _c$, and $\beta _0$ are the corresponding factor weights and intercept, respectively; and $\epsilon \sim \mathcal {N}(0, \sigma )$ is the regression error term.
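A compact sketch of this regression; the column names and the use of statsmodels are assumptions for illustration rather than the actual analysis code.

```python
import statsmodels.formula.api as smf

def fit_relevance_regression(df):
    """df has one row per word with columns: slope (rate of change toward
    moral relevance), freq, length, and concreteness."""
    model = smf.ols("slope ~ freq + length + concreteness", data=df).fit()
    return model  # model.params and model.pvalues hold the beta weights and p-values
```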
Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).
We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material).
Discussion and conclusion
We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.
Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.
Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society.
Acknowledgments
We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award. | Unanswerable |
c180f44667505ec03214d44f4970c0db487a8bae | c180f44667505ec03214d44f4970c0db487a8bae_0 | Q: How well did the system do?
Text: Introduction
Interactive fictions—also called text-adventure games or text-based games—are games in which a player interacts with a virtual world purely through textual natural language—receiving descriptions of what they “see” and writing out how they want to act, an example can be seen in Figure FIGREF2. Interactive fiction games are often structured as puzzles, or quests, set within the confines of given game world. Interactive fictions have been adopted as a test-bed for real-time game playing agents BIBREF0, BIBREF1, BIBREF2. Unlike other, graphical games, interactive fictions test agents' abilities to infer the state of the world through communication and to indirectly affect change in the world through language. Interactive fictions are typically modeled after real or fantasy worlds; commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3, BIBREF4.
In this paper we explore a different challenge for artificial intelligence: automatically generating text-based virtual worlds for interactive fictions. A core component of many narrative-based tasks—everything from storytelling to game generation—is world building. The world of a story or game defines the boundaries of where the narrative is allowed and what the player is allowed to do. There are four core challenges to world generation: (1) commonsense knowledge: the world must reference priors that the player possesses so that players can make sense of the world and build expectations on how to interact with it. This is especially true in interactive fictions where the world is presented textually, because many details of the world must necessarily be left out (e.g., the pot is on a stove; kitchens are found in houses) that might otherwise be literal in a graphical virtual world. (2) Thematic knowledge: interactive fictions usually involve a theme or genre that comes with its own expectations. For example, light speed travel is plausible in sci-fi worlds but not realistic in the real world. (3) Coherence: the world must not appear to be a random assortment of locations. (4) Natural language: the descriptions of the rooms as well as the permissible actions must be rendered as natural language text, implying that the system has natural language generation capability.
Because worlds are conveyed entirely through natural language, the potential output space for possible generated worlds is combinatorially large. To constrain this space and to make it possible to evaluate generated worlds, we present an approach which makes use of existing stories, building on the worlds presented in them but leaving enough room for the worlds to be unique. Specifically, we take a story such as Sherlock Holmes or Rapunzel—a linear reading experience—and extract the description of the world the story is set in to make an interactive world the player can explore.
Our method first extracts a partial, potentially disconnected knowledge graph from the story, encoding information regarding locations, characters, and objects in the form of $\langle entity,relation,entity\rangle $ triples. Relations between these types of entities as well as their properties are captured in this knowledge graph. However, stories often do not explicitly contain all the information required to fully fill out such a graph. A story may mention that there is a sword stuck in a stone but not what you can do with the sword or where it is in relation to everything else. Our method fills in missing relation and affordance information using thematic knowledge gained from training on stories in a similar genre. This knowledge graph is then used to guide the text description generation process for the various locations, characters, and objects. The game is then assembled on the basis of the knowledge graph and the corresponding generated descriptions.
We have two major contributions. (1) A neural model and a rules-based baseline for each of the tasks described above. The phases are graph extraction and completion, followed by description generation and game formulation. Each of these phases is relatively distinct and utilizes its own models. (2) A human subject study for comparing the neural model and variations on it to the rules-based and human-made approaches. We perform two separate human subject studies—one for the first phase of knowledge graph construction and another for the overall game creation process—testing specifically for coherence, interestingness, and the ability to maintain a theme or genre.
Related Work
There has been a slew of recent work in developing agents that can play text games BIBREF0, BIBREF5, BIBREF1, BIBREF6. BIBREF7 in particular use knowledge graphs as state representations for game-playing agents. BIBREF8 propose QAit, a set of question answering tasks framed as text-based or interactive fiction games. QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment. These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them.
Scheherazade BIBREF9 is a system that learns a plot graph based on stories written by crowd sourcing the task of writing short stories. The learned plot graph contains details relevant to ensure story coherence. It includes: plot events, temporal precedence, and mutual exclusion relations. Scheherazade-IF BIBREF10 extends the system to generate choose-your-own-adventure style interactive fictions in which the player chooses from prescribed options. BIBREF11 explore a method of creating interactive narratives revolving around locations, wherein sentences are mapped to a real-world GPS location from a corpus of sentences belonging to a certain genre. Narratives are made by chaining together sentences selected based on the player's current real-world location. In contrast to these models, our method generates a parser-based interactive fiction in which the player types in a textual command, allowing for greater expressiveness.
BIBREF12 define the problem of procedural content generation in interactive fiction games in terms of the twin considerations of world and quest generation and focus on the latter. They present a system in which quest content is first generated by learning from a corpus and then grounded into a given interactive fiction world. The work in this paper focuses on the world generation problem glossed over in the prior work. Thus these two systems can be seen as complementary.
Light BIBREF13 is a crowdsourced dataset of grounded text-adventure game dialogues. It contains information regarding locations, characters, and objects set in a fantasy world. The authors demonstrate that the supervised training of transformer-based models lets us generate contextually relevant dialog, actions, and emotes. Most in line with the spirit of this paper, BIBREF14 leverage Light to generate worlds for text-based games. They train a neural network based model using Light to compositionally arrange locations, characters, and objects into an interactive world. Their model is tested using a human subject study against other machine learning based algorithms with respect to the cohesiveness and diversity of generated worlds. Our work, in contrast, focuses on extracting the information necessary for building interactive worlds from existing story plots.
World Generation
World generation happens in two phases. In the first phase, a partial knowledge graph is extracted from a story plot and then filled in using thematic commonsense knowledge. In the second phase, the graph is used as the skeleton to generate a full interactive fiction game—generating textual descriptions or “flavortext” for rooms and embedded objects. We present a novel neural approach in addition to a rule guided baseline for each of these phases in this section.
World Generation ::: Knowledge Graph Construction
The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.
World Generation ::: Knowledge Graph Construction ::: Neural Graph Construction
While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.
The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking the QA model a question and masking out the most likely answer output at the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.
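A minimal sketch of this iterative loop using the transformers question-answering pipeline; the checkpoint below is a widely available SQuAD2-style stand-in rather than the ALBERT model used here, and the confidence threshold, masking string, and toy story are illustrative assumptions.

```python
from transformers import pipeline

# Stand-in extractive-QA checkpoint; AskBERT itself uses an ALBERT QA model.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

story_text = (
    "The prince left the castle and rode through the forest to the witch's tower, "
    "where Rapunzel waited with her golden hair."
)

def extract_vertices(story, question, max_entities=20, min_score=0.1):
    """Repeatedly ask the same question, masking out each answer span,
    until the model stops returning a confident answer."""
    context, found = story, []
    for _ in range(max_entities):
        out = qa(question=question, context=context, handle_impossible_answer=True)
        answer = out["answer"].strip()
        if not answer or out["score"] < min_score:
            break
        found.append(answer)
        context = context.replace(answer, "[MASKED]")  # mask the previous answer
    return found

characters = extract_vertices(story_text, "Who is a character in the story?")
locations = extract_vertices(story_text, "Where is a location in the story?")
print(characters, locations)
```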
The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that has the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model: the probability that vertices $x$ and $u$ are related is determined by the sum of the individual token probabilities of all the tokens that overlap between the QA model's answer and $u$.
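A rough sketch of this matching step, reusing the `qa` pipeline object from the previous sketch; since the pipeline does not directly expose per-token probabilities, word-level overlap weighted by the answer confidence stands in for the token-probability sum described above, and the relation labels follow the “next to”/“has” edges mentioned earlier.

```python
def best_vertex_match(answer, answer_score, vertices):
    """Match a QA answer string to the candidate vertex with the largest
    word overlap, scoring the edge by overlap times answer confidence."""
    answer_tokens = set(answer.lower().split())
    best, best_score = None, 0.0
    for v in vertices:
        overlap = len(answer_tokens & set(v.lower().split()))
        if overlap * answer_score > best_score:
            best, best_score = v, overlap * answer_score
    return best, best_score

def build_graph(qa, story, locations, entities):
    edges = []
    for x in locations:
        queries = [
            (f"What location can I visit from {x}?", locations, "next to"),
            (f"Who or what is in {x}?", entities, "has"),
        ]
        for question, candidates, relation in queries:
            out = qa(question=question, context=story, handle_impossible_answer=True)
            u, score = best_vertex_match(out["answer"], out["score"], candidates)
            if u and u != x:
                edges.append((x, relation, u, score))
    return edges
```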
World Generation ::: Knowledge Graph Construction ::: Rule-Based Graph Construction
We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.
As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.
The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph.
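A sketch of this location carry-forward heuristic, assuming the triples have already been extracted per sentence and carry an optional location annotation; the data layout is an assumption for illustration.

```python
def group_triples_by_location(sentences, triples_per_sentence, default="unknown"):
    """Assign every triple the most recently mentioned explicit location.
    triples_per_sentence[i] lists (subject, relation, object, location_or_None)
    tuples extracted from sentences[i]."""
    graph = {}  # location -> list of (subject, relation, object)
    current = default
    for i, _sentence in enumerate(sentences):
        for subj, rel, obj, loc in triples_per_sentence[i]:
            if loc:  # explicit location annotation from OpenIE5
                current = loc
            graph.setdefault(current, []).append((subj, rel, obj))
    return graph
```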
World Generation ::: Description Generation
The second phase involves using the constructed knowledge graph to generate textual descriptions of the entities we have extracted, also known as flavortext. This involves generating descriptions of what a player “sees” when they enter a location and short blurbs for each object and character. These descriptions need to not only be faithful to the information present in the knowledge graph and the overall story plot but to also contain flavor and be interesting for the player.
World Generation ::: Description Generation ::: Neural Description Generation
Here, we approach the problem of description generation by taking inspiration from conditional transformer-based generation methods BIBREF20. Our approach is outlined in Figure FIGREF11 and an example description shown in Figure FIGREF2. For any given entity in the story, we first locate it in the story plot and then construct a prompt which consists of the entire story up to and including the sentence when the entity is first mentioned in the story followed by a question asking to describe that entity. With respect to prompts, we found that more direct methods such as question-answering were more consistent than open-ended sentence completion. For example, “Q: Who is the prince? A:” often produced descriptions that were more faithful to the information already present about the prince in the story than “You see the prince. He is/looks”. For our transformer-based generation, we use a pre-trained 355M GPT-2 model BIBREF21 finetuned on a corpus of plot summaries collected from Wikipedia. The plots used for finetuning are tailored specific to the genre of the story in order to provide more relevant generation for the target genre. Additional details regarding the datasets used are provided in Section SECREF4. This method strikes a balance between knowledge graph verbalization techniques which often lack “flavor” and open ended generation which struggles to maintain semantic coherence.
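A minimal sketch of this prompting scheme with a generic GPT-2 checkpoint from the transformers library; the finetuned 355M checkpoint, decoding settings, and exact prompt wording used here are not reproduced, so the values below are placeholders.

```python
from transformers import pipeline

# "gpt2-medium" is a public 355M-parameter checkpoint, used here without finetuning.
generator = pipeline("text-generation", model="gpt2-medium")

def describe_entity(story_sentences, entity, max_new_tokens=60):
    """Build a prompt from the story up to the entity's first mention,
    then ask the model to describe that entity."""
    cutoff = next((i for i, s in enumerate(story_sentences) if entity in s),
                  len(story_sentences) - 1)
    prompt = " ".join(story_sentences[: cutoff + 1]) + f"\nQ: Who or what is the {entity}? A:"
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=True, top_p=0.9)
    return out[0]["generated_text"][len(prompt):].strip()
```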
World Generation ::: Description Generation ::: Rules-Based Description Generation
In the rule-based approach, we utilized the templates from the built-in text game generator of TextWorld BIBREF1 to generate the description for our graphs. TextWorld is an open-source library that provides a way to generate text-game learning environments for training reinforcement learning agents using pre-built grammars.
Two major templates involved here are the Room Intro Templates and Container Description Templates from TextWorld, responsible for generating descriptions of locations and blurbs for objects/characters respectively. The location and object/character information are taken from the knowledge graph constructed previously.
Example of Room Intro Templates: “This might come as a shock to you, but you've just $\#entered\#$ a <$location$-$name$>”
Example of Container Description Templates: “The <$location$-$name$> $\#contains\#$ <$object/person$-$name$>”
Each token surrounded by $\#$ sign can be expanded using a select set of terminal tokens. For instance, $\#entered\#$ could be filled with any of the following phrases here: entered; walked into; fallen into; moved into; stumbled into; come into. Additional prefixes, suffixes and adjectives were added to increase the relative variety of descriptions. Unlike the neural methods, the rule-based approach is not able to generate detailed and flavorful descriptions of the properties of the locations/objects/characters. By virtue of the templates, however, it is much better at maintaining consistency with the information contained in the knowledge graph.
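A toy sketch of this template expansion; the grammar entries below only stand in for TextWorld's built-in grammar files.

```python
import random

GRAMMAR = {  # toy stand-in for TextWorld's grammar
    "#entered#": ["entered", "walked into", "stumbled into", "come into"],
    "#contains#": ["contains", "holds", "is home to"],
}

def expand(template, slots):
    """Fill #tokens# from the grammar and <slots> from the knowledge graph."""
    text = template
    for token, choices in GRAMMAR.items():
        while token in text:
            text = text.replace(token, random.choice(choices), 1)
    for name, value in slots.items():
        text = text.replace(f"<{name}>", value)
    return text

print(expand("This might come as a shock to you, but you've just #entered# a <location-name>",
             {"location-name": "dimly lit study"}))
```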
Evaluation
We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages both for plot sections and for links to other potential stories. From the process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies was conducted and the corresponding setup are presented below.
Evaluation ::: Knowledge Graph Construction Evaluation
We first select a subset of 10 stories randomly from each genre and then extract a knowledge graph using three different models. Each participant is presented with the three graphs extracted from a single story in each genre and then asked to rank them on the basis of how coherent they were and how well the graphs match the genre. The graphs resemble the one shown in Figure FIGREF4 and are presented to the participant sequentially. The exact order of the graphs and genres was also randomized to mitigate any potential latent correlations. Overall, this study had a total of 130 participants. This ensures that, on average, graphs from every story were seen by 13 participants.
In addition to the neural AskBERT and rules-based methods, we also test a variation of the neural model which we dub the “random” approach. The method of vertex extraction remains identical to the neural method, but we connect the vertices randomly instead of selecting the most confident relations according to the QA model. We initialize the graph with a starting location entity. Then, we randomly sample from the vertex set and connect it to a randomly sampled location in the graph until every vertex has been connected. This ablation in particular is designed to test the ability of our neural model to predict relations between entities. It lets us observe how accurately linking related vertices affects each of the metrics that we test for. For a fair comparison between the graphs produced by different approaches, we randomly removed some of the nodes and edges from the initial graphs so that the maximum number of locations per graph and the maximum number of objects/people per location in each story genre are the same.
The results are shown in Table TABREF20. We show the median rank of each of the models for both questions across the genres. Ranked data is generally closely interrelated and so we perform Friedman's test between the three models to validate that the results are statistically significant. This is presented as the $p$-value in the table (asterisks indicate significance at $p<0.05$). When we make comparisons between specific pairs of models, we additionally perform the Mann-Whitney U test to ensure that the rankings differed significantly.
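As a sketch of these two tests on toy rank data (the arrays are illustrative, and scipy's implementations are assumed):

```python
import numpy as np
from scipy.stats import friedmanchisquare, mannwhitneyu

# Rows are participants; columns are the ranks given to (neural, rules, random).
ranks = np.array([[1, 2, 3],
                  [1, 3, 2],
                  [2, 1, 3],
                  [1, 2, 3]])

stat, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print(f"Friedman chi-square={stat:.2f}, p={p:.3f}")

u, p_pair = mannwhitneyu(ranks[:, 0], ranks[:, 1], alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p_pair:.3f}")
```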
In the mystery genre, the rules-based method was often ranked first in terms of genre resemblance, followed by the neural and random models. This particular result was not statistically significant however, likely indicating that all the models performed approximately equally in this category. The neural approach was deemed to be the most coherent followed by the rules and random. For the fairy-tales, the neural model ranked higher on both of the questions asked of the participants. In this genre, the random neural model also performed better than the rules based approach.
Tables TABREF18 and TABREF19 show the statistics of the constructed knowledge graphs in terms of vertices and edges. We see that the rules-based graph construction has a lower number of locations, characters, and relations between entities but far more objects in general. The greater number of objects is likely due to the rules-based approach being unable to correctly identify locations and characters. The gap between the methods is less pronounced in the mystery genre as opposed to the fairy-tales, in fact the rules-based graphs have more relations than the neural ones. The random and neural models have the same number of entities in all categories by construction but random in general has lower variance on the number of relations found. In this case as well, the variance is lower for mystery as opposed to fairy-tales. When taken in the context of the results in Table TABREF20, it appears to indicate that leveraging thematic commonsense in the form of AskBERT for graph construction directly results in graphs that are more coherent and maintain genre more easily. This is especially true in the case of the fairy-tales where the thematic and everyday commonsense diverge more than than in the case of the mysteries.
Evaluation ::: Full Game Evaluation
This participant study was designed to test the overall game formulation process encompassing both phases described in Section SECREF3. A single story from each genre was chosen by hand from the 10 stories used for the graph evaluation process. From the knowledge graphs for this story, we generate descriptions using the neural, rules, and random approaches described previously. Additionally, we introduce a human-authored game for each story here to provide an additional benchmark. The selected author was familiar with text-adventure games in general as well as the genres of detective mystery and fairy tale. To ensure a fair comparison, we ensure that the maximum number of locations and maximum number of characters/objects per location matched the other methods. After setting general format expectations, the author read the selected stories and constructed knowledge graphs in a corresponding three-step process: identifying the $n$ most important entities in the story, mapping positional relationships between entities, and then synthesizing flavor text for the entities based on said location, the overall story plot, and background topic knowledge.
Once the knowledge graph and associated descriptions are generated for a particular story, they are then automatically turned into a fully playable text-game using the text game engine Evennia. Evennia was chosen for its flexibility and customization, as well as a convenient web client for end user testing. The data structures were translated into builder commands within Evennia that constructed the various layouts, flavor text, and rules of the game world. Users were placed in one “room” out of the different world locations within the game they were playing, and asked to explore the game world that was available to them. Users achieved this by moving between rooms and investigating objects. Each time a new room was entered or object investigated, the player's total number of explored entities would be displayed as their score.
Each participant was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criterion for each game is to collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the game created by the neural model and one game from one of the other approaches, this gave us on average 13 players per baseline game in the mystery genre and 12 for fairy-tales.
The summary of the results of the full game study is shown in Table TABREF23. As the comparisons made in this study are all made pairwise between our neural model and one of the baselines—they are presented in terms of what percentage of participants prefer the baseline game over the neural game. Once again, as this is highly interrelated ranked data, we perform the Mann-Whitney U test between each of the pairs to ensure that the rankings differed significantly. This is also indicated on the table.
In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar, with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We also see that the neural game is as coherent as the human-made game.
As in the previous study, we hypothesize that this is likely due to the rules-based approach being more suited to the mystery genre, which is often more mundane and contains fewer fantastical elements. By extension, we can say that thematic commonsense in fairy-tales has less overlap with everyday commonsense than for mundane mysteries. This has a few implications, one of which is that this theme-specific information is unlikely to have been seen by OpenIE5 before. This is indicated in the relatively improved performance of the rules-based model in this genre in terms of both interestingness and coherence. The genre difference can also be observed in terms of the performance of the random model. This model is also lacking when compared to our neural model across all the questions asked, especially in the fairy-tale setting. This appears to imply that filling in gaps in the knowledge graph using thematically relevant information, such as with AskBERT, results in more interesting and coherent descriptions and games, especially in settings where the thematic commonsense diverges from everyday commonsense.
Conclusion
Procedural world generation systems are required to be semantically consistent, comply with thematic and everyday commonsense understanding, and maintain overall interestingness. We describe an approach that transforms a linear reading experience in the form of a story plot into an interactive narrative experience. Our method, AskBERT, extracts and fills in a knowledge graph using thematic commonsense and then uses it as a skeleton to flesh out the rest of the world. A key insight from our human participant study reveals that the ability to construct a thematically consistent knowledge graph is critical to overall perceptions of coherence and interestingness, particularly when the theme diverges from everyday commonsense understanding. | the neural approach is generally preferred by a greater percentage of participants than the rules or random, human-made game outperforms them all |
76d62e414a345fe955dc2d99562ef5772130bc7e | 76d62e414a345fe955dc2d99562ef5772130bc7e_0 | Q: How is the information extracted?
Text: Introduction
Interactive fictions—also called text-adventure games or text-based games—are games in which a player interacts with a virtual world purely through textual natural language—receiving descriptions of what they “see” and writing out how they want to act, an example can be seen in Figure FIGREF2. Interactive fiction games are often structured as puzzles, or quests, set within the confines of given game world. Interactive fictions have been adopted as a test-bed for real-time game playing agents BIBREF0, BIBREF1, BIBREF2. Unlike other, graphical games, interactive fictions test agents' abilities to infer the state of the world through communication and to indirectly affect change in the world through language. Interactive fictions are typically modeled after real or fantasy worlds; commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3, BIBREF4.
In this paper we explore a different challenge for artificial intelligence: automatically generating text-based virtual worlds for interactive fictions. A core component of many narrative-based tasks—everything from storytelling to game generation—is world building. The world of a story or game defines the boundaries of where the narrative is allowed and what the player is allowed to do. There are four core challenges to world generation: (1) commonsense knowledge: the world must reference priors that the player possesses so that players can make sense of the world and build expectations on how to interact with it. This is especially true in interactive fictions where the world is presented textually, because many details of the world must necessarily be left out (e.g., the pot is on a stove; kitchens are found in houses) that might otherwise be literal in a graphical virtual world. (2) Thematic knowledge: interactive fictions usually involve a theme or genre that comes with its own expectations. For example, light speed travel is plausible in sci-fi worlds but not realistic in the real world. (3) Coherence: the world must not appear to be a random assortment of locations. (4) Natural language: the descriptions of the rooms as well as the permissible actions must be rendered as natural language text, implying that the system has natural language generation capability.
Because worlds are conveyed entirely through natural language, the potential output space for possible generated worlds is combinatorially large. To constrain this space and to make it possible to evaluate generated worlds, we present an approach which makes use of existing stories, building on the worlds presented in them but leaving enough room for the worlds to be unique. Specifically, we take a story such as Sherlock Holmes or Rapunzel—a linear reading experience—and extract the description of the world the story is set in to make an interactive world the player can explore.
Our method first extracts a partial, potentially disconnected knowledge graph from the story, encoding information regarding locations, characters, and objects in the form of $\langle entity,relation,entity\rangle $ triples. Relations between these types of entities as well as their properties are captured in this knowledge graph. However, stories often do not explicitly contain all the information required to fully fill out such a graph. A story may mention that there is a sword stuck in a stone but not what you can do with the sword or where it is in relation to everything else. Our method fills in missing relation and affordance information using thematic knowledge gained from training on stories in a similar genre. This knowledge graph is then used to guide the text description generation process for the various locations, characters, and objects. The game is then assembled on the basis of the knowledge graph and the corresponding generated descriptions.
We have two major contributions. (1) A neural model and a rules-based baseline for each of the tasks described above. The phases are graph extraction and completion, followed by description generation and game formulation. Each of these phases is relatively distinct and utilizes its own models. (2) A human subject study for comparing the neural model and variations on it to the rules-based and human-made approaches. We perform two separate human subject studies—one for the first phase of knowledge graph construction and another for the overall game creation process—testing specifically for coherence, interestingness, and the ability to maintain a theme or genre.
Related Work
There has been a slew of recent work in developing agents that can play text games BIBREF0, BIBREF5, BIBREF1, BIBREF6. BIBREF7 in particular use knowledge graphs as state representations for game-playing agents. BIBREF8 propose QAit, a set of question answering tasks framed as text-based or interactive fiction games. QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment. These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them.
Scheherazade BIBREF9 is a system that learns a plot graph based on stories written by crowd sourcing the task of writing short stories. The learned plot graph contains details relevant to ensure story coherence. It includes: plot events, temporal precedence, and mutual exclusion relations. Scheherazade-IF BIBREF10 extends the system to generate choose-your-own-adventure style interactive fictions in which the player chooses from prescribed options. BIBREF11 explore a method of creating interactive narratives revolving around locations, wherein sentences are mapped to a real-world GPS location from a corpus of sentences belonging to a certain genre. Narratives are made by chaining together sentences selected based on the player's current real-world location. In contrast to these models, our method generates a parser-based interactive fiction in which the player types in a textual command, allowing for greater expressiveness.
BIBREF12 define the problem of procedural content generation in interactive fiction games in terms of the twin considerations of world and quest generation and focus on the latter. They present a system in which quest content is first generated by learning from a corpus and then grounded into a given interactive fiction world. The work in this paper focuses on the world generation problem glossed over in the prior work. Thus these two systems can be seen as complementary.
Light BIBREF13 is a crowdsourced dataset of grounded text-adventure game dialogues. It contains information regarding locations, characters, and objects set in a fantasy world. The authors demonstrate that the supervised training of transformer-based models lets us generate contextually relevant dialog, actions, and emotes. Most in line with the spirit of this paper, BIBREF14 leverage Light to generate worlds for text-based games. They train a neural network based model using Light to compositionally arrange locations, characters, and objects into an interactive world. Their model is tested using a human subject study against other machine learning based algorithms with respect to the cohesiveness and diversity of generated worlds. Our work, in contrast, focuses on extracting the information necessary for building interactive worlds from existing story plots.
World Generation
World generation happens in two phases. In the first phase, a partial knowledge graph is extracted from a story plot and then filled in using thematic commonsense knowledge. In the second phase, the graph is used as the skeleton to generate a full interactive fiction game—generating textual descriptions or “flavortext” for rooms and embedded objects. We present a novel neural approach in addition to a rule guided baseline for each of these phases in this section.
World Generation ::: Knowledge Graph Construction
The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.
World Generation ::: Knowledge Graph Construction ::: Neural Graph Construction
While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.
The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking the QA model a question and masking out the most likely answer output at the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.
The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of vertices previously extracted and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that has the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model: the probability that vertices $x$ and $u$ are related is determined by the sum of the individual token probabilities of all the tokens that overlap between the QA model's answer and $u$.
World Generation ::: Knowledge Graph Construction ::: Rule-Based Graph Construction
We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.
As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.
The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph.
World Generation ::: Description Generation
The second phase involves using the constructed knowledge graph to generate textual descriptions of the entities we have extracted, also known as flavortext. This involves generating descriptions of what a player “sees” when they enter a location and short blurbs for each object and character. These descriptions need to not only be faithful to the information present in the knowledge graph and the overall story plot but to also contain flavor and be interesting for the player.
World Generation ::: Description Generation ::: Neural Description Generation
Here, we approach the problem of description generation by taking inspiration from conditional transformer-based generation methods BIBREF20. Our approach is outlined in Figure FIGREF11 and an example description shown in Figure FIGREF2. For any given entity in the story, we first locate it in the story plot and then construct a prompt which consists of the entire story up to and including the sentence when the entity is first mentioned in the story followed by a question asking to describe that entity. With respect to prompts, we found that more direct methods such as question-answering were more consistent than open-ended sentence completion. For example, “Q: Who is the prince? A:” often produced descriptions that were more faithful to the information already present about the prince in the story than “You see the prince. He is/looks”. For our transformer-based generation, we use a pre-trained 355M GPT-2 model BIBREF21 finetuned on a corpus of plot summaries collected from Wikipedia. The plots used for finetuning are tailored specific to the genre of the story in order to provide more relevant generation for the target genre. Additional details regarding the datasets used are provided in Section SECREF4. This method strikes a balance between knowledge graph verbalization techniques which often lack “flavor” and open ended generation which struggles to maintain semantic coherence.
World Generation ::: Description Generation ::: Rules-Based Description Generation
In the rule-based approach, we utilized the templates from the built-in text game generator of TextWorld BIBREF1 to generate the description for our graphs. TextWorld is an open-source library that provides a way to generate text-game learning environments for training reinforcement learning agents using pre-built grammars.
Two major templates involved here are the Room Intro Templates and Container Description Templates from TextWorld, responsible for generating descriptions of locations and blurbs for objects/characters respectively. The location and object/character information are taken from the knowledge graph constructed previously.
Example of Room Intro Templates: “This might come as a shock to you, but you've just $\#entered\#$ a <$location$-$name$>”
Example of Container Description Templates: “The <$location$-$name$> $\#contains\#$ <$object/person$-$name$>”
Each token surrounded by $\#$ signs can be expanded using a select set of terminal tokens. For instance, $\#entered\#$ could be filled with any of the following phrases here: entered; walked into; fallen into; moved into; stumbled into; come into. Additional prefixes, suffixes and adjectives were added to increase the relative variety of descriptions. Unlike the neural methods, the rule-based approach is not able to generate detailed and flavorful descriptions of the properties of the locations/objects/characters. By virtue of the templates, however, it is much better at maintaining consistency with the information contained in the knowledge graph.
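A toy re-implementation of this template expansion is shown below; the terminal list for the $\#contains\#$ token is invented for illustration and does not reproduce TextWorld's actual grammar files.

```python
import random

# Terminal expansions: the #entered# list mirrors the example above, the
# #contains# list is an assumed placeholder.
TERMINALS = {
    "#entered#": ["entered", "walked into", "fallen into", "moved into",
                  "stumbled into", "come into"],
    "#contains#": ["contains", "holds"],
}

def expand(template, slots):
    """Fill #token# markers with random terminals and <name> markers with graph values."""
    for token, choices in TERMINALS.items():
        while token in template:
            template = template.replace(token, random.choice(choices), 1)
    for name, value in slots.items():
        template = template.replace(f"<{name}>", value)
    return template

print(expand("This might come as a shock to you, but you've just #entered# a <location-name>",
             {"location-name": "dark cellar"}))
```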
Evaluation
We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled from story summaries scraped from Wikipedia by a recursive crawling bot. The bot searched pages both for plot sections and for links to other potential stories. From this process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies was conducted and the corresponding setup are presented below.
Evaluation ::: Knowledge Graph Construction Evaluation
We first select a subset of 10 stories randomly from each genre and then extract a knowledge graph using three different models. Each participant is presented with the three graphs extracted from a single story in each genre and then asked to rank them on the basis of how coherent they were and how well the graphs matched the genre. The graphs resemble the one shown in Figure FIGREF4 and are presented to the participant sequentially. The exact order of the graphs and genres was also randomized to mitigate any potential latent correlations. Overall, this study had a total of 130 participants. This ensures that, on average, graphs from every story were seen by 13 participants.
In addition to the neural AskBERT and rules-based methods, we also test a variation of the neural model which we dub the “random” approach. The method of vertex extraction remains identical to the neural method, but we connect the vertices randomly instead of selecting the most confident according to the QA model. We initialize the graph with a starting location entity. Then, we randomly sample a vertex from the vertex set and connect it to a randomly sampled location in the graph until every vertex has been connected. This ablation in particular is designed to test the ability of our neural model to predict relations between entities. It lets us observe how accurately linking related vertices affects each of the metrics that we test for. For a fair comparison between the graphs produced by different approaches, we randomly removed some of the nodes and edges from the initial graphs so that the maximum number of locations per graph and the maximum number of objects/people per location in each story genre are the same.
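The sketch below captures the described procedure for the random ablation; the relation names and the `is_location` predicate are placeholders, not the actual implementation.

```python
import random

# "Random" ablation: the vertex set is the same as the neural model's, but
# instead of taking the QA model's most confident link, each vertex is
# attached to a randomly chosen location already in the graph.
def random_graph(start_location, vertices, is_location):
    edges = []
    locations_in_graph = [start_location]
    remaining = [v for v in vertices if v != start_location]
    random.shuffle(remaining)
    for vertex in remaining:
        target = random.choice(locations_in_graph)
        if is_location(vertex):
            edges.append((vertex, "connected_to", target))
            locations_in_graph.append(vertex)  # new location becomes a valid target
        else:
            edges.append((vertex, "located_in", target))
    return edges
```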
The results are shown in Table TABREF20. We show the median rank of each of the models for both questions across the genres. Ranked data is generally closely interrelated and so we perform Friedman's test between the three models to validate that the results are statistically significant. This is presented as the $p$-value in the table (asterisks indicate significance at $p<0.05$). In cases where we make comparisons between specific pairs of models, when necessary, we additionally perform the Mann-Whitney U test to ensure that the rankings differed significantly.
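For reference, both tests are available in SciPy; the ranks below are toy data standing in for the collected participant rankings.

```python
from scipy.stats import friedmanchisquare, mannwhitneyu

# Toy ranks (1 = best, 3 = worst), one entry per participant; the real input
# is the coherence / genre-resemblance rankings collected in the study.
ranks_neural = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]
ranks_rules  = [2, 3, 1, 2, 1, 2, 3, 1, 2, 2]
ranks_random = [3, 2, 3, 3, 3, 3, 2, 3, 3, 3]

stat, p = friedmanchisquare(ranks_neural, ranks_rules, ranks_random)
print(f"Friedman test: p = {p:.4f}")  # asterisks in the table mark p < 0.05

# pairwise follow-up between two specific models
u, p_pair = mannwhitneyu(ranks_neural, ranks_rules, alternative="two-sided")
```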
In the mystery genre, the rules-based method was often ranked first in terms of genre resemblance, followed by the neural and random models. This particular result was not statistically significant however, likely indicating that all the models performed approximately equally in this category. The neural approach was deemed to be the most coherent followed by the rules and random. For the fairy-tales, the neural model ranked higher on both of the questions asked of the participants. In this genre, the random neural model also performed better than the rules based approach.
Tables TABREF18 and TABREF19 show the statistics of the constructed knowledge graphs in terms of vertices and edges. We see that the rules-based graph construction has a lower number of locations, characters, and relations between entities but far more objects in general. The greater number of objects is likely due to the rules-based approach being unable to correctly identify locations and characters. The gap between the methods is less pronounced in the mystery genre as opposed to the fairy-tales; in fact, the rules-based graphs have more relations than the neural ones. The random and neural models have the same number of entities in all categories by construction, but random in general has lower variance on the number of relations found. In this case as well, the variance is lower for mystery as opposed to fairy-tales. When taken in the context of the results in Table TABREF20, it appears to indicate that leveraging thematic commonsense in the form of AskBERT for graph construction directly results in graphs that are more coherent and maintain genre more easily. This is especially true in the case of the fairy-tales, where the thematic and everyday commonsense diverge more than in the case of the mysteries.
Evaluation ::: Full Game Evaluation
This participant study was designed to test the overall game formulation process encompassing both phases described in Section SECREF3. A single story from each genre was chosen by hand from the 10 stories used for the graph evaluation process. From the knowledge graphs for this story, we generate descriptions using the neural, rules, and random approaches described previously. Additionally, we introduce a human-authored game for each story here to provide an additional benchmark. The selected author was familiar with text-adventure games in general as well as the genres of detective mystery and fairy tale. To ensure a fair comparison, we ensure that the maximum number of locations and maximum number of characters/objects per location matched the other methods. After setting general format expectations, the author read the selected stories and constructed knowledge graphs in a corresponding three-step process: identifying the $n$ most important entities in the story, mapping positional relationships between entities, and then synthesizing flavor text for the entities based on the corresponding location, the overall story plot, and background topic knowledge.
Once the knowledge graph and associated descriptions are generated for a particular story, they are then automatically turned into a fully playable text-game using the text game engine Evennia. Evennia was chosen for its flexibility and customization, as well as a convenient web client for end user testing. The data structures were translated into builder commands within Evennia that constructed the various layouts, flavor text, and rules of the game world. Users were placed in one “room” out of the different world locations within the game they were playing, and asked to explore the game world that was available to them. Users achieved this by moving between rooms and investigating objects. Each time a new room was entered or object investigated, the player's total number of explored entities would be displayed as their score.
Each participant was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criterion for each game is to collect half of the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the game created by the neural model and one created by one of the other approaches, this gave us on average 13 players per alternative approach in the mystery genre and 12 for fairy-tales.
The summary of the results of the full game study is shown in Table TABREF23. As the comparisons made in this study are all made pairwise between our neural model and one of the baselines—they are presented in terms of what percentage of participants prefer the baseline game over the neural game. Once again, as this is highly interrelated ranked data, we perform the Mann-Whitney U test between each of the pairs to ensure that the rankings differed significantly. This is also indicated on the table.
In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar, with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game.
As in the previous study, we hypothesize that this is likely due to the rules-based approach being more suited to the mystery genre, which is often more mundane and contains fewer fantastical elements. By extension, we can say that thematic commonsense in fairy-tales has less overlap with everyday commonsense than for mundane mysteries. This has a few implications, one of which is that this theme-specific information is unlikely to have been seen by OpenIE5 before. This is indicated in the relatively improved performance of the rules-based model in this genre in terms of both interestingness and coherence. The genre difference can also be observed in terms of the performance of the random model. This model is also lacking compared to our neural model across all the questions asked, especially in the fairy-tale setting. This appears to imply that filling in gaps in the knowledge graph using thematically relevant information, such as with AskBERT, results in more interesting and coherent descriptions and games, especially in settings where the thematic commonsense diverges from everyday commonsense.
Conclusion
Procedural world generation systems are required to be semantically consistent, comply with thematic and everyday commonsense understanding, and maintain overall interestingness. We describe an approach that transforms a linear reading experience in the form of a story plot into an interactive narrative experience. Our method, AskBERT, extracts and fills in a knowledge graph using thematic commonsense and then uses it as a skeleton to flesh out the rest of the world. A key insight from our human participant study reveals that the ability to construct a thematically consistent knowledge graph is critical to overall perceptions of coherence and interestingness, particularly when the theme diverges from everyday commonsense understanding. | neural question-answering technique to extract relations from a story text, OpenIE5, a commonly used rule-based information extraction technique |
6b9310b577c6232e3614a1612cbbbb17067b3886 | 6b9310b577c6232e3614a1612cbbbb17067b3886_0 | Q: What are some guidelines in writing input vernacular so model can generate
Text: Introduction
During thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desire for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.
Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantics of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also keep the rich semantics of the original vernacular paragraph. Therefore, our model gives users more control over the semantics of generated poems by carefully writing the vernacular paragraph.
Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is unlikely to train such poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.
However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference of source and target languages is usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and are ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples: they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire.
Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to handle under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation.
The contributions of our work are threefold:
(1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text.
(2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.
(3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high quality poems, which are comparable to amateur poems.
Related Works
Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, in which each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There is also research that focuses on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.
Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique is proved to greatly improve the performance of low-resource languages translation systems. (e.g. English-Urdu translation). The unsupervised machine translation framework is also applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech to text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of source and target language are the same. This is not the case for our task.
Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related studies adopt the coverage mechanism BIBREF11, BIBREF12, BIBREF13. However, as far as we know, there has been no successful attempt at applying the coverage mechanism to transformer-based models BIBREF14.
Model ::: Main Architecture
We formulate our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:
Encoder $\textbf {E}_s$ and decoder $\textbf {D}_s$ for vernacular paragraph processing
Encoder $\textbf {E}_t$ and decoder $\textbf {D}_t$ for classical poem processing
where $\textbf {E}_s$ (or $\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\textbf {D}_s$ (or $\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular texts corpus $\textbf {\emph {S}}$ and a poem corpus $\textbf {\emph {T}}$. We denote $S$ and $T$ as instances in $\textbf {\emph {S}}$ and $\textbf {\emph {T}}$ respectively.
The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We will give detailed introduction to each procedure.
Parameter initialization As both vernacular and classical poem use Chinese characters, we initialize the character embedding of both languages in one common space, the same character in two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.
Language modeling It helps the model generate texts that conform to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular and classical poem by minimizing the following loss:
where $S_N$ (or $T_N$) is generated by adding noise (drop, swap or blank a few words) in $S$ (or $T$).
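A simple noise function consistent with this description might look as follows; the corruption probabilities and the "_" blank token are assumptions, since the exact settings are not stated in this excerpt.

```python
import random

# Illustrative noise function for the denoising objective: S_N (or T_N) is
# produced from S (or T) by dropping, blanking, or locally swapping a few
# characters.
def add_noise(chars, p_drop=0.1, p_blank=0.1, n_swaps=2):
    noisy = [c for c in chars if random.random() > p_drop]            # drop
    noisy = [c if random.random() > p_blank else "_" for c in noisy]  # blank
    for _ in range(n_swaps):                                          # swap neighbours
        if len(noisy) > 1:
            i = random.randrange(len(noisy) - 1)
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return noisy
```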
Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\textbf {E}_s$ and $\textbf {D}_t$, we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \textbf {D}_s(\textbf {E}_t(T_S))$. Here, $S$ could be used as gold standard for the back-translated paragraph $S_{T_s}$. In this way, we could turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using poem $T$ as gold standard for its corresponding back-translation $T_{S_T}$. We define the following loss:
Note that $\mathcal {L}^{bt}$ does not back propagate through the generation of $T_S$ and $S_T$ as we observe no improvement in doing so. When training the model, we minimize the composite loss:
where $\alpha _1$ and $\alpha _2$ are scaling factors.
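The overall training step can be sketched as below; the encoder/decoder objects, the noise and decoding helpers, and the reconstruction loss are passed in as stand-ins for the actual shared-transformer implementation, so this is a schematic outline rather than the authors' code.

```python
import torch

# One training iteration combining the denoising language-model loss and the
# back-translation loss. E_s, D_s, E_t, D_t, noise, decode and rec_loss are
# callables standing in for the components described above.
def training_step(S, T, E_s, D_s, E_t, D_t, noise, decode, rec_loss, a1, a2):
    # denoising language modelling on each language
    lm = rec_loss(D_s(E_s(noise(S))), S) + rec_loss(D_t(E_t(noise(T))), T)

    # back-translation: gradients do not flow through the intermediate generation
    with torch.no_grad():
        T_S = decode(D_t, E_s(S))  # pseudo-poem for the vernacular batch
        S_T = decode(D_s, E_t(T))  # pseudo-vernacular for the poem batch
    bt = rec_loss(D_s(E_s(T_S)), S) + rec_loss(D_t(E_t(S_T)), T)

    # composite loss with scaling factors alpha_1 and alpha_2
    return a1 * lm + a2 * bt
```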
Model ::: Addressing Under-Translation and Over-Translation
During our early experiments, we realize that the naive UMT framework is not readily applied to our task. Classical Chinese poems are featured for their terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations; the average length of the poems is $32.0$ characters, while for vernacular translations, it is $73.3$. The huge gap in sequence length between source and target language would induce over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements.
Model ::: Addressing Under-Translation and Over-Translation ::: Under-Translation
By nature, classical poems are more concise and abstract while vernaculars are more detailed and lengthy; to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In real practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is unmentioned.
To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, we then join these sub-sequences with the special padding tokens <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.
According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema; for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, and the first, second and third phrase contains 2, 2, 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we could avoid characters within the scope of a phrase being cut apart, thus best preserving the semantics of each phrase BIBREF15.
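One plausible reading of this padding scheme, for a seven-character quatrain line under the 2-2-3 schema, is sketched below; the exact placement of the <p> tokens in the paper follows Figure FIGREF10, which is not reproduced here.

```python
# Phrase-segmentation-based padding: split the line into its phrases and pad
# each phrase with <p> tokens so the padded line is twice its original length.
def pad_line(line, schema=(2, 2, 3), pad_token="<p>"):
    tokens, start = [], 0
    for size in schema:
        phrase = list(line[start:start + size])
        tokens.extend(phrase + [pad_token] * size)  # pad each phrase to 2x its length
        start += size
    return tokens

# e.g. a 7-character line becomes 14 tokens:
# [c1, c2, <p>, <p>, c3, c4, <p>, <p>, c5, c6, c7, <p>, <p>, <p>]
```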
Model ::: Addressing Under-Translation and Over-Translation ::: Over-Translation
In NMT, when decoding is complete, the decoder would generate an <EOS> token, indicating it has reached the end of the output sequence. However, when expanding a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the concise nature of poems, after finishing translating every source character in $T$, the output sequence $S_T$ may still be much shorter than the expected length of a poem's vernacular translation. As a result, the decoder would believe it has not finished decoding. Instead of generating the <EOS> token, the decoder would continue to generate new output characters from previously translated source characters. This would cause the decoder to repetitively output a piece of text many times.
To remedy this issue, in addition to minimizing the original loss function $\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.
We define repetition ratio $RR(S)$ of a paragraph $S$ as:
where $vocab(S)$ refers to the number of distinctive characters in $S$, and $len(S)$ refers to the number of all characters in $S$. Obviously, if a generated sequence contains many repeated characters, it would have a high repetition ratio. Following the self-critical policy gradient training BIBREF16, we define the following loss function:
where $\tau $ is a manually set threshold. Intuitively, minimizing $\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\tau $. Following BIBREF17, we revise the composite loss as:
where $\alpha _1, \alpha _2, \alpha _3$ are scaling factors.
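Since the equations are not reproduced in this text, the sketch below assumes one definition of the repetition ratio that matches the description (high when many characters repeat) and a simple thresholded gate standing in for the self-critical policy-gradient term; both are labeled assumptions rather than the paper's exact formulas.

```python
# Assumed definition: RR(S) = 1 - vocab(S) / len(S), which is high when many
# characters repeat, matching the description above.
def repetition_ratio(chars):
    return 1.0 - len(set(chars)) / max(len(chars), 1)

def gated_likelihood_term(neg_log_likelihood, generated_chars, tau=0.3):
    """Simplified stand-in for the RL term: the likelihood term only counts
    when the generated paragraph is not too repetitive (RR below tau)."""
    return neg_log_likelihood if repetition_ratio(generated_chars) < tau else 0.0

print(repetition_ratio(list("aabbaabbaabb")))  # 1 - 2/12 ≈ 0.83
```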
Experiment
The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials.
Experiment ::: Datasets
Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.
Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high quality vernacular translations. These poems could be used as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test set.
Experiment ::: Evaluation Metrics
Perplexity Perplexity reflects the probability a model generates a certain poem. Intuitively, a better model would yield higher probability (lower perplexity) on the gold poem.
BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves a higher BLEU score, as it shares more n-grams with the gold poem.
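As a concrete example, character-level BLEU between a generated poem and its gold poem can be computed with NLTK; treating each Chinese character as a token is an assumption here, not necessarily the paper's exact tokenization.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def poem_bleu(generated: str, gold: str) -> float:
    smooth = SmoothingFunction().method1  # quatrains are only 28 characters long
    return sentence_bleu([list(gold)], list(generated), smoothing_function=smooth)
```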
Human evaluation While perplexity and BLEU are objective metrics that could be applied to a large-volume test set, evaluating Chinese poems is after all a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups. The expert group contains 15 people who hold a bachelor's degree in Chinese literature, and the amateur group contains 15 people who hold a bachelor's degree in other fields. All 30 human evaluators are native Chinese speakers.
We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: Is the generated poem grammatically and rhythmically well formed? 2) Semantic coherence: Is the generated poem itself semantically coherent and meaningful? 3) Semantic preservability: Does the generated poem preserve the semantics of the modern Chinese translation? 4) Poeticness: Does the generated poem display the characteristics of a poem and does the poem build a good poetic image? The grading scale for each perspective is from 1 to 5.
Experiment ::: Baselines
We compare the performance of the following models: (1) LSTM BIBREF19; (2) Naive transformer BIBREF14; (3) Transformer + Anti OT (RL loss); (4) Transformer + Anti UT (phrase segmentation-based padding); (5) Transformer + Anti OT&UT.
Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations
As illustrated in Table TABREF12 (ID 1), given the vernacular translation of each gold poem in the test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (where +Anti OT refers to adding the reinforcement loss to mitigate over-translation and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), and human evaluation results in Table TABREF20.
According to experiment results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe all BLEU scores are fairly low; we believe this is reasonable as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperform the naive transformer, while Anti OT&UT shows the best performance; this demonstrates that alleviating under-translation and over-translation both helps generate better poems. Specifically, +Anti UT shows bigger improvement than +Anti OT. According to human evaluation, among the four perspectives, our Anti OT&UT brought the most score improvement in Semantic preservability; this proves our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, LSTM may only keep the information in the beginning and end of the vernacular. We anticipated some score inconsistency between the expert group and the amateur group. However, after analyzing human evaluation results, we did not observe a big divergence between the two groups.
Experiment ::: Interpoetry: Generating Poems from Various Literature Forms
Chinese literature features not only classical poems, but also various other literature forms. Song lyric (宋词), or ci, also gained tremendous popularity in its palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literature forms into quatrain poems. We expect the generated poems to not only keep the semantics of the original text, but also demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). As no ground truth is available, we resorted to human evaluation with the same grading standard in Section SECREF23.
Comparing the scores of different literature forms, we observe Song lyric achieves higher scores than the other three forms of modern literature. It is not surprising as both Song lyric and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.
Comparing the scores within the same literature form, we observe the scores of poems generated from different paragraphs tend to vary. After carefully studying the generated poems as well as their scores, we have the following observations:
1) In classical Chinese poems, poetic images (意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves a higher score. As illustrated in Table TABREF12, both paragraphs 2 and 3 are generated from pop song lyrics: paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, sentences in poem 3 seem more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.
2) We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations to the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanation. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph.
Experiment ::: Human Discrimination Test
We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human-written poem. We then present the 25 pairs to human evaluators and ask them to differentiate which poem is generated by a human poet.
As demonstrated in Table TABREF29, although the general meanings in human poems and generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved 65.8% in mean accuracy. This indicates the best generated poems are somewhat comparable to poems written by amateur poets.
We interviewed evaluators who achieved higher than 80% accuracy on their differentiation strategies. Most interviewed evaluators state they realize the sentences in a human written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked as red) echo each other well, while the sentences in machine-generated poems seem more independent. This gives us hints on the weakness of generated poems: While neural models may generate poems that resemble human poems lexically and syntactically, it's still hard for them to compete with human beings in building up good structures.
Discussion
Addressing Under-Translation In this part, we wish to explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e. the 2-3-2 and 3-2-2 schemas); we report our experiment results in Table TABREF32.
The results show our 2-2-3 segmentation schema greatly outperforms the 2-3-2 and 3-2-2 schemas in both perplexity and BLEU scores. Note that the BLEU scores of the 2-3-2 and 3-2-2 schemas remain almost the same as our naive baseline (without padding). According to these observations, we have the following conclusions: 1) Although padding better aligns the vernacular paragraph to the poem, it may not improve the quality of the generated poem. 2) The padding tokens should be placed according to the phrase segmentation schema of the poem as it preserves the semantics within the scope of each phrase.
Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set. We found naive transformer achieves $40.8\%$ in repetition ratio, while our +Anti OT achieves $34.9\%$. Given the repetition ratio of vernacular paragraphs (written by human beings) in our validation set is $30.1\%$, the experiment results demonstrated our RL loss effectively alleviate over-translation, which in turn leads to better generated poems.
Conclusion
In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective at solving the problems that arise when the UMT model is directly used in this task. In the future, we plan to explore: (1) Applying the UMT model to tasks where the abstraction levels of source and target languages are different (e.g., unsupervised automatic summarization); (2) Improving the quality of generated poems via better structure organization approaches. | if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score, poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs |
d484a71e23d128f146182dccc30001df35cdf93f | d484a71e23d128f146182dccc30001df35cdf93f_0 | Q: How much is proposed model better in perplexity and BLEU score than typical UMT models?
Text: Introduction
During thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desire for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.
Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantics of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also keep the rich semantics of the original vernacular paragraph. Therefore, our model gives users more control over the semantics of generated poems by carefully writing the vernacular paragraph.
Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is unlikely to train such poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.
However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference of source and target languages is usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and are ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples: they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire.
Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to handle under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation.
The contributions of our work are threefold:
(1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text.
(2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.
(3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high quality poems, which are comparable to amateur poems.
Related Works
Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, in which each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There is also research that focuses on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.
Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique is proved to greatly improve the performance of low-resource languages translation systems. (e.g. English-Urdu translation). The unsupervised machine translation framework is also applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech to text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of source and target language are the same. This is not the case for our task.
Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related studies adopt the coverage mechanism BIBREF11, BIBREF12, BIBREF13. However, as far as we know, there has been no successful attempt at applying the coverage mechanism to transformer-based models BIBREF14.
Model ::: Main Architecture
We formulate our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:
Encoder $\textbf {E}_s$ and decoder $\textbf {D}_s$ for vernacular paragraph processing
Encoder $\textbf {E}_t$ and decoder $\textbf {D}_t$ for classical poem processing
where $\textbf {E}_s$ (or $\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\textbf {D}_s$ (or $\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular texts corpus $\textbf {\emph {S}}$ and a poem corpus $\textbf {\emph {T}}$. We denote $S$ and $T$ as instances in $\textbf {\emph {S}}$ and $\textbf {\emph {T}}$ respectively.
The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We will give detailed introduction to each procedure.
Parameter initialization As both vernacular and classical poem use Chinese characters, we initialize the character embedding of both languages in one common space, the same character in two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.
Language modeling It helps the model generate texts that conform to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular and classical poem by minimizing the following loss:
where $S_N$ (or $T_N$) is generated by adding noise (drop, swap or blank a few words) in $S$ (or $T$).
Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\textbf {E}_s$ and $\textbf {D}_t$, we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \textbf {D}_s(\textbf {E}_t(T_S))$. Here, $S$ could be used as gold standard for the back-translated paragraph $S_{T_s}$. In this way, we could turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using poem $T$ as gold standard for its corresponding back-translation $T_{S_T}$. We define the following loss:
Note that $\mathcal {L}^{bt}$ does not back propagate through the generation of $T_S$ and $S_T$ as we observe no improvement in doing so. When training the model, we minimize the composite loss:
where $\alpha _1$ and $\alpha _2$ are scaling factors.
Model ::: Addressing Under-Translation and Over-Translation
During our early experiments, we realize that the naive UMT framework is not readily applied to our task. Classical Chinese poems are featured for their terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations; the average length of the poems is $32.0$ characters, while for vernacular translations, it is $73.3$. The huge gap in sequence length between source and target language would induce over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements.
Model ::: Addressing Under-Translation and Over-Translation ::: Under-Translation
By nature, classical poems are more concise and abstract while vernaculars are more detailed and lengthy; to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In real practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is unmentioned.
To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, we then join these sub-sequences with the special padding tokens <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.
According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema; for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, and the first, second and third phrase contains 2, 2, 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we could avoid characters within the scope of a phrase being cut apart, thus best preserving the semantics of each phrase BIBREF15.
Model ::: Addressing Under-Translation and Over-Translation ::: Over-Translation
In NMT, when decoding is complete, the decoder would generate an <EOS> token, indicating it has reached the end of the output sequence. However, when expanding a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the concise nature of poems, after finishing translating every source character in $T$, the output sequence $S_T$ may still be much shorter than the expected length of a poem's vernacular translation. As a result, the decoder would believe it has not finished decoding. Instead of generating the <EOS> token, the decoder would continue to generate new output characters from previously translated source characters. This would cause the decoder to repetitively output a piece of text many times.
To remedy this issue, in addition to minimizing the original loss function $\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.
We define repetition ratio $RR(S)$ of a paragraph $S$ as:
where $vocab(S)$ refers to the number of distinctive characters in $S$, and $len(S)$ refers to the number of all characters in $S$. Obviously, if a generated sequence contains many repeated characters, it would have a high repetition ratio. Following the self-critical policy gradient training BIBREF16, we define the following loss function:
where $\tau $ is a manually set threshold. Intuitively, minimizing $\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\tau $. Following BIBREF17, we revise the composite loss as:
where $\alpha _1, \alpha _2, \alpha _3$ are scaling factors.
Experiment
The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials.
Experiment ::: Datasets
Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.
Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high quality vernacular translations. These poems could be used as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test set.
Experiment ::: Evaluation Metrics
Perplexity Perplexity reflects the probability a model generates a certain poem. Intuitively, a better model would yield higher probability (lower perplexity) on the gold poem.
BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves a higher BLEU score, as it shares more n-grams with the gold poem.
Human evaluation While perplexity and BLEU are objective metrics that could be applied to a large-volume test set, evaluating Chinese poems is after all a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups. The expert group contains 15 people who hold a bachelor's degree in Chinese literature, and the amateur group contains 15 people who hold a bachelor's degree in other fields. All 30 human evaluators are native Chinese speakers.
We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: Is the generated poem grammatically and rhythmically well formed? 2) Semantic coherence: Is the generated poem itself semantically coherent and meaningful? 3) Semantic preservability: Does the generated poem preserve the semantics of the modern Chinese translation? 4) Poeticness: Does the generated poem display the characteristics of a poem and does the poem build a good poetic image? The grading scale for each perspective is from 1 to 5.
Experiment ::: Baselines
We compare the performance of the following models: (1) LSTM BIBREF19; (2) Naive transformer BIBREF14; (3) Transformer + Anti OT (RL loss); (4) Transformer + Anti UT (phrase segmentation-based padding); (5) Transformer + Anti OT&UT.
Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations
As illustrated in Table TABREF12 (ID 1), given the vernacular translation of each gold poem in the test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (where +Anti OT refers to adding the reinforcement loss to mitigate over-translation and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), and human evaluation results in Table TABREF20.
According to experiment results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe all BLEU scores are fairly low; we believe this is reasonable as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperform the naive transformer, while Anti OT&UT shows the best performance; this demonstrates that alleviating under-translation and over-translation both helps generate better poems. Specifically, +Anti UT shows bigger improvement than +Anti OT. According to human evaluation, among the four perspectives, our Anti OT&UT brought the most score improvement in Semantic preservability; this proves our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, LSTM may only keep the information in the beginning and end of the vernacular. We anticipated some score inconsistency between the expert group and the amateur group. However, after analyzing human evaluation results, we did not observe a big divergence between the two groups.
Experiment ::: Interpoetry: Generating Poems from Various Literature Forms
Chinese literature is not only known for classical poems, but also for various other literature forms. Song lyric (宋词), or ci, also gained tremendous popularity in its palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literature forms into quatrain poems. We expect the generated poems to not only keep the semantics of the original text, but also demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). As no ground truth is available, we resorted to human evaluation with the same grading standard as in Section SECREF23.
Comparing the scores of different literature forms, we observe Song lyric achieves higher scores than the other three forms of modern literature. It is not surprising as both Song lyric and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.
Comparing the scores within the same literature form, we observe the scores of poems generated from different paragraphs tends to vary. After carefully studying the generated poems as well as their scores, we have the following observation:
1) In classical Chinese poems, poetic images (意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves a higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics; paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, the sentences in poem 3 seem more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.
2) We also observed that poems generated from descriptive paragraphs achieve higher scores than those generated from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations for the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanations. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph.
Experiment ::: Human Discrimination Test
We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human-written poem. We then present the 25 pairs to human evaluators and ask them to differentiate which poem is generated by a human poet.
As demonstrated in Table TABREF29, although the general meanings in human poems and generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved 65.8% in mean accuracy. This indicates the best generated poems are somewhat comparable to poems written by amateur poets.
We interviewed evaluators who achieved higher than 80% accuracy on their differentiation strategies. Most interviewed evaluators state they realize the sentences in a human written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked as red) echo each other well, while the sentences in machine-generated poems seem more independent. This gives us hints on the weakness of generated poems: While neural models may generate poems that resemble human poems lexically and syntactically, it's still hard for them to compete with human beings in building up good structures.
Discussion
Addressing Under-Translation In this part, we wish to explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in the examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e. the 2-3-2 and 3-2-2 schemas). We report our experiment results in Table TABREF32.
The results show our 2-2-3 segmentation schema greatly outperforms the 2-3-2 and 3-2-2 schemas in both perplexity and BLEU scores. Note that the BLEU scores of the 2-3-2 and 3-2-2 schemas remain almost the same as our naive baseline (without padding). Based on this observation, we draw the following conclusions: 1) Although padding better aligns the vernacular paragraph to the poem, it may not improve the quality of the generated poem. 2) The padding tokens should be placed according to the phrase segmentation schema of the poem, as it preserves the semantics within the scope of each phrase.
Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set. We found the naive transformer achieves $40.8\%$ in repetition ratio, while our +Anti OT achieves $34.9\%$. Given that the repetition ratio of vernacular paragraphs (written by human beings) in our validation set is $30.1\%$, the experiment results demonstrate that our RL loss effectively alleviates over-translation, which in turn leads to better generated poems.
Conclusion
In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and meanwhile proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective in solving the problems that arise when the UMT model is directly used in this task. In the future, we plan to explore: (1) Applying the UMT model to tasks where the abstraction levels of the source and target languages are different (e.g., unsupervised automatic summarization); (2) Improving the quality of generated poems via better structure organization approaches. | Perplexity of the best model is 65.58 compared to best baseline 105.79.
Bleu of the best model is 6.57 compared to best baseline 5.50. |
5787ac3e80840fe4cf7bfae7e8983fa6644d6220 | 5787ac3e80840fe4cf7bfae7e8983fa6644d6220_0 | Q: What dataset is used for training?
Text: Introduction
Over thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions, such as their appreciation for nature, desire for freedom and concern for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.
Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantics of generated poems using a few input words, the models control the procedure of poem generation. In this paper, we propose a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also to keep the rich semantics of the original vernacular paragraph. Therefore, our model gives users more control over the semantics of generated poems through careful writing of the vernacular paragraph.
Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is impractical to train such a poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.
However, our work is not just a straightforward application of UMT. In a training example for UMT, the length difference between the source and target languages is usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation of gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such a discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and are ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples: they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire.
Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to handle under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation.
The contributions of our work are threefold:
(1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text.
(2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.
(3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high quality poems, which are comparable to amateur poems.
Related Works
Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, where each poem line is generated incrementally and iteratively by refining each line one-by-one. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There is also research that focuses on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.
Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique has been shown to greatly improve the performance of low-resource language translation systems (e.g., English-Urdu translation). The unsupervised machine translation framework is also applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech to text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human-labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of the source and target language are the same. This is not the case for our task.
Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related studies adopt the coverage mechanism BIBREF11, BIBREF12, BIBREF13. However, as far as we know, there has been no successful attempt at applying the coverage mechanism to transformer-based models BIBREF14.
Model ::: Main Architecture
We transform our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:
Encoder $\textbf {E}_s$ and decoder $\textbf {D}_s$ for vernacular paragraph processing
Encoder $\textbf {E}_t$ and decoder $\textbf {D}_t$ for classical poem processing
where $\textbf {E}_s$ (or $\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\textbf {D}_s$ (or $\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular texts corpus $\textbf {\emph {S}}$ and a poem corpus $\textbf {\emph {T}}$. We denote $S$ and $T$ as instances in $\textbf {\emph {S}}$ and $\textbf {\emph {T}}$ respectively.
The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We will give detailed introduction to each procedure.
Parameter initialization As both vernacular text and classical poems use Chinese characters, we initialize the character embeddings of both languages in one common space; the same character in the two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.
Language modeling It helps the model generate texts that conform to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular and classical poem by minimizing the following loss:
where $S_N$ (or $T_N$) is generated by adding noise (drop, swap or blank a few words) in $S$ (or $T$).
Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\textbf {E}_s$ and $\textbf {D}_t$, we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \textbf {D}_s(\textbf {E}_t(T_S))$. Here, $S$ could be used as gold standard for the back-translated paragraph $S_{T_s}$. In this way, we could turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using poem $T$ as gold standard for its corresponding back-translation $T_{S_T}$. We define the following loss:
Note that $\mathcal {L}^{bt}$ does not back propagate through the generation of $T_S$ and $S_T$ as we observe no improvement in doing so. When training the model, we minimize the composite loss:
where $\alpha _1$ and $\alpha _2$ are scaling factors.
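For concreteness, the three training terms can be written in the standard UMT form of BIBREF2 using the notation above; the exact formulation below is our assumption rather than a quotation of the paper's equations.

```latex
% Denoising language-modeling loss over noised inputs S_N, T_N (assumed form):
\mathcal{L}^{lm} = \mathbb{E}_{S \sim \mathbf{S}}\big[-\log P_s(S \mid S_N)\big]
                 + \mathbb{E}_{T \sim \mathbf{T}}\big[-\log P_t(T \mid T_N)\big]
% Back-translation loss with S_{T_S} and T_{S_T} as defined above (assumed form):
\mathcal{L}^{bt} = \mathbb{E}_{S \sim \mathbf{S}}\big[-\log P\big(S \mid T_S\big)\big]
                 + \mathbb{E}_{T \sim \mathbf{T}}\big[-\log P\big(T \mid S_T\big)\big]
% Composite objective with scaling factors:
\mathcal{L} = \alpha_1 \mathcal{L}^{lm} + \alpha_2 \mathcal{L}^{bt}
```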
Model ::: Addressing Under-Translation and Over-Translation
During our early experiments, we realized that the naive UMT framework is not readily applicable to our task. Classical Chinese poems are noted for their terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations; the average length of the poems is $32.0$ characters, while for vernacular translations it is $73.3$. The huge gap in sequence length between the source and target language would induce over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements.
Model ::: Addressing Under-Translation and Over-Translation ::: Under-Translation
By nature, classical poems are more concise and abstract while vernacular text is more detailed and lengthy; to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In real practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is left unmentioned.
To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, and then join these sub-sequences with the special padding token <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.
According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema; for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, and the first, second and third phrases contain 2, 2 and 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we avoid cutting apart characters within the scope of a phrase, thus best preserving the semantics of each phrase BIBREF15.
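A minimal sketch of the padding procedure is shown below: a seven-character line is segmented by the 2-2-3 schema and each phrase is padded with <p> tokens to twice its original length (how the pads are distributed across phrases is our assumption).

```python
def pad_line(line, schema=(2, 2, 3), pad_token="<p>"):
    # Split a poem line into phrases by the given segmentation schema,
    # then pad each phrase with <p> tokens to twice its original length.
    assert len(line) == sum(schema)
    tokens, start = [], 0
    for phrase_len in schema:
        phrase = list(line[start:start + phrase_len])
        tokens.extend(phrase + [pad_token] * phrase_len)
        start += phrase_len
    return tokens

# A seven-character quatrain line under the 2-2-3 schema:
print(pad_line("两个黄鹂鸣翠柳"))
# ['两', '个', '<p>', '<p>', '黄', '鹂', '<p>', '<p>', '鸣', '翠', '柳', '<p>', '<p>', '<p>']
```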
Model ::: Addressing Under-Translation and Over-Translation ::: Over-Translation
In NMT, when decoding is complete, the decoder would generate an <EOS> token, indicating it has reached the end of the output sequence. However, when expanding a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the concise nature of poems, after finishing translating every source character in $T$, the output sequence $S_T$ may still be much shorter than the expected length of a poem's vernacular translation. As a result, the decoder would believe it has not finished decoding. Instead of generating the <EOS> token, the decoder would continue to generate new output characters from previously translated source characters. This would cause the decoder to repetitively output a piece of text many times.
To remedy this issue, in addition to minimizing the original loss function $\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.
We define repetition ratio $RR(S)$ of a paragraph $S$ as:
where $vocab(S)$ refers to the number of distinct characters in $S$ and $len(S)$ refers to the number of all characters in $S$. Obviously, if a generated sequence contains many repeated characters, it will have a high repetition ratio. Following the self-critical policy gradient training BIBREF16, we define the following loss function:
where $\tau $ is a manually set threshold. Intuitively, minimizing $\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\tau $. Following BIBREF17, we revise the composite loss as:
where $\alpha _1, \alpha _2, \alpha _3$ are scaling factors.
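The repetition ratio and the gating it induces can be sketched as follows. We take $RR(S) = 1 - vocab(S)/len(S)$ as a working definition consistent with the description (this exact form is an assumption), and the threshold gate below is a simplification of the self-critical loss rather than the authors' exact $\mathcal{L}^{rl}$.

```python
def repetition_ratio(paragraph):
    # Working definition (an assumption): RR(S) = 1 - vocab(S) / len(S),
    # which rises as the paragraph repeats more characters.
    chars = list(paragraph)
    return 1.0 - len(set(chars)) / max(len(chars), 1)

def rl_gate(back_translated, tau=0.35):
    # Simplified gate: the likelihood term is only rewarded when the
    # back-translated paragraph's repetition ratio stays below the
    # threshold tau (tau here is a placeholder value).
    return 1.0 if repetition_ratio(back_translated) < tau else 0.0

print(repetition_ratio("碧绿如碧绿如碧绿如"))  # ~0.67: heavy repetition
print(repetition_ratio("春风又绿江南岸"))      # 0.0: all characters distinct
print(rl_gate("碧绿如碧绿如碧绿如"))            # 0.0: gated out, no reward
```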
Experiment
The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials.
Experiment ::: Datasets
Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, and the vernacular literature corpus contains 337K short paragraphs from 281 famous books; the corpus covers various literary forms, including prose, fiction and essays. Note that our poem corpus and vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.
Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high quality vernacular translations. These poems could be used as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test set.
Experiment ::: Evaluation Metrics
Perplexity Perplexity reflects the probability that a model generates a certain poem. Intuitively, a better model would yield a higher probability (lower perplexity) on the gold poem.
BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves a higher BLEU score, as it shares more n-grams with the gold poem.
Human evaluation While perplexity and BLEU are objective metrics that can be applied to a large-volume test set, evaluating Chinese poems is after all a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups. The expert group contains 15 people who hold a bachelor's degree in Chinese literature, and the amateur group contains 15 people who hold a bachelor's degree in other fields. All 30 human evaluators are native Chinese speakers.
We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: Is the generated poem grammatically and rhythmically well formed? 2) Semantic coherence: Is the generated poem itself semantically coherent and meaningful? 3) Semantic preservability: Does the generated poem preserve the semantics of the modern Chinese translation? 4) Poeticness: Does the generated poem display the characteristics of a poem and build a good poetic image? The grading scale for each perspective is from 1 to 5.
Experiment ::: Baselines
We compare the performance of the following models: (1) LSTM BIBREF19; (2) Naive transformer BIBREF14; (3) Transformer + Anti OT (RL loss); (4) Transformer + Anti UT (phrase segmentation-based padding); (5) Transformer + Anti OT&UT.
Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations
As illustrated in Table TABREF12 (ID 1), given the vernacular translation of each gold poem in the test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (where +Anti OT refers to adding the reinforcement loss to mitigate over-translation and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), and human evaluation results in Table TABREF20.
According to the experiment results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe all BLEU scores are fairly low; we believe this is reasonable, as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperform the naive transformer, while Anti OT&UT shows the best performance. This demonstrates that alleviating under-translation and over-translation both help generate better poems. Specifically, +Anti UT shows a bigger improvement than +Anti OT. According to human evaluation, among the four perspectives, our Anti OT&UT brought the most score improvement in semantic preservability, which shows that our improvement on semantic preservability was most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, LSTM may only keep the information at the beginning and end of the vernacular. We anticipated some score inconsistency between the expert group and the amateur group. However, after analyzing the human evaluation results, we did not observe a big divergence between the two groups.
Experiment ::: Interpoetry: Generating Poems from Various Literature Forms
Chinese literature is not only known for classical poems, but also for various other literature forms. Song lyric (宋词), or ci, also gained tremendous popularity in its palmy days, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern days. The goal of this experiment is to transfer texts of other literature forms into quatrain poems. We expect the generated poems to not only keep the semantics of the original text, but also demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). As no ground truth is available, we resorted to human evaluation with the same grading standard as in Section SECREF23.
Comparing the scores of different literature forms, we observe Song lyric achieves higher scores than the other three forms of modern literature. It is not surprising as both Song lyric and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.
Comparing the scores within the same literature form, we observe the scores of poems generated from different paragraphs tends to vary. After carefully studying the generated poems as well as their scores, we have the following observation:
1) In classical Chinese poems, poetic images (意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves a higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics; paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, the sentences in poem 3 seem more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.
2) We also observed that poems generated from descriptive paragraphs achieve higher scores than those generated from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations for the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanations. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph.
Experiment ::: Human Discrimination Test
We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human-written poem. We then present the 25 pairs to human evaluators and ask them to differentiate which poem is generated by a human poet.
As demonstrated in Table TABREF29, although the general meanings in human poems and generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved 65.8% in mean accuracy. This indicates the best generated poems are somewhat comparable to poems written by amateur poets.
We interviewed evaluators who achieved higher than 80% accuracy on their differentiation strategies. Most interviewed evaluators state they realize the sentences in a human written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked as red) echo each other well, while the sentences in machine-generated poems seem more independent. This gives us hints on the weakness of generated poems: While neural models may generate poems that resemble human poems lexically and syntactically, it's still hard for them to compete with human beings in building up good structures.
Discussion
Addressing Under-Translation In this part, we wish to explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in the examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e. the 2-3-2 and 3-2-2 schemas). We report our experiment results in Table TABREF32.
The results show our 2-2-3 segmentation schema greatly outperforms the 2-3-2 and 3-2-2 schemas in both perplexity and BLEU scores. Note that the BLEU scores of the 2-3-2 and 3-2-2 schemas remain almost the same as our naive baseline (without padding). Based on this observation, we draw the following conclusions: 1) Although padding better aligns the vernacular paragraph to the poem, it may not improve the quality of the generated poem. 2) The padding tokens should be placed according to the phrase segmentation schema of the poem, as it preserves the semantics within the scope of each phrase.
Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set. We found the naive transformer achieves $40.8\%$ in repetition ratio, while our +Anti OT achieves $34.9\%$. Given that the repetition ratio of vernacular paragraphs (written by human beings) in our validation set is $30.1\%$, the experiment results demonstrate that our RL loss effectively alleviates over-translation, which in turn leads to better generated poems.
Conclusion
In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and meanwhile proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective in solving the problems that arise when the UMT model is directly used in this task. In the future, we plan to explore: (1) Applying the UMT model to tasks where the abstraction levels of the source and target languages are different (e.g., unsupervised automatic summarization); (2) Improving the quality of generated poems via better structure organization approaches. | We collected a corpus of poems and a corpus of vernacular literature from online resources |
ee31c8a94e07b3207ca28caef3fbaf9a38d94964 | ee31c8a94e07b3207ca28caef3fbaf9a38d94964_0 | Q: What were the evaluation metrics?
Text: Introduction
Task-oriented dialogue systems, which help users to achieve specific goals with natural language, are attracting more and more research attention. With the success of sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works tried to model task-oriented dialogue as the Seq2Seq generation of a response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and the heavy annotation labor for these modules.
Different from typical text generation, successful conversations for a task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query on the gas station, the dialogue system is required to retrieve entities like “200 Alester Ave” and “Valero”. For task-oriented systems based on Seq2Seq generation, there is a trend in recent studies towards modeling the KB query as an attention network over the entire KB entity representations, hoping to learn a model that pays more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with the over-the-entire-KB attention mechanism, these methods do not guarantee generation consistency regarding KB entities and sometimes yield responses with conflicting entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. It's promising to incorporate such an observation into the Seq2Seq dialogue generation model, since it encourages KB-relevant generation and prevents the model from producing responses with conflicting entities.
To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module — KB-retriever to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform an attention mechanism to select the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.
Since dialogue datasets are not typically annotated with the retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selection process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Both the retriever trained with distant supervision and the one trained with the Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of responses in the dataset can be supported by a single KB row and that better retrieval results lead to better task-oriented dialogue generation performance.
Definition
In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.
Definition ::: Dialogue History
Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\rbrace $. At the $i^{\text{th}}$ turn of the dialogue, we aggregate dialogue context which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$ and use $\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.
Definition ::: Knowledge Base
In this paper, we assume to have the access to a relational-database-like KB $B$, which consists of $|\mathcal {R}|$ rows and $|\mathcal {C}|$ columns. The value of entity in the $j^{\text{th}}$ row and the $i^{\text{th}}$ column is noted as $v_{j, i}$.
Definition ::: Seq2Seq Dialogue Generation
We define the Seq2Seq task-oriented dialogue generation as finding the most likely response $\mathbf {y}$ according to the input dialogue history $\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as
where $y_t$ represents an output token.
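Written out in the standard autoregressive form implied by this definition (stated here for concreteness rather than quoted from the paper), the response probability factorizes as:

```latex
P(\mathbf{y} \mid \mathbf{x}, B) \;=\; \prod_{t=1}^{|\mathbf{y}|} P\big(y_t \mid y_1, \ldots, y_{t-1}, \mathbf{x}, B\big)
```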
Our Framework
In this section, we describe our framework for end-to-end task-oriented dialogues. The architecture of our framework is demonstrated in Figure FIGREF3, which consists of two major components: a memory network-based retriever and the Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row and further filters the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further perform the attention mechanism to choose the most probable KB column. We will present the details of our framework in the following sections.
Our Framework ::: Encoder
In our encoder, we adopt the bidirectional LSTM BIBREF15 to encode the dialogue history $\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\mathbf {x}$ to vectors with the embedding function $\phi ^{\text{emb}}$, and then the BiLSTM reads the vectors forwardly and backwardly to produce context-sensitive hidden states $(\mathbf {h}_{1}, \mathbf {h}_2, ..., \mathbf {h}_{m})$ by repeatedly applying the recurrence $\mathbf {h}_{i}=\text{BiLSTM}\left( \phi ^{\text{emb}}\left( x_{i}\right) , \mathbf {h}_{i-1}\right)$.
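A minimal PyTorch sketch of such an encoder is given below; the module name and dimensions are illustrative and not taken from the paper's implementation.

```python
import torch
import torch.nn as nn

class DialogueEncoder(nn.Module):
    # Bidirectional LSTM over the embedded dialogue history, producing one
    # context-sensitive hidden state per token (illustrative sizes).
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)       # (batch, m, emb_dim)
        hidden_states, _ = self.bilstm(embedded)   # (batch, m, 2 * hidden_dim)
        return hidden_states

encoder = DialogueEncoder(vocab_size=10000)
history = torch.randint(0, 10000, (1, 12))         # a 12-token dialogue history
print(encoder(history).shape)                      # torch.Size([1, 12, 400])
```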
Our Framework ::: Vanilla Attention-based Decoder
Here, we follow eric:2017:SIGDial and adopt the attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$. For the generation of the next token $y_t$, the model first calculates an attentive representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ of the dialogue history as
Then, the concatenation of the hidden representation of the partially outputted sequence $\tilde{\mathbf {h}}_t$ and the attentive dialogue history representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ is projected to the vocabulary space $\mathcal {V}$ by $U$ as
to calculate the score (logit) for the next token generation. The probability of the next token $y_t$ is finally calculated as
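The decoding step can be sketched as below with a generic Luong-style attention over the encoder states, a concatenation with the decoder state, and a projection to the vocabulary; the exact scoring function and weight shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def decode_step(decoder_state, encoder_states, W_att, U_proj):
    # decoder_state: (batch, d); encoder_states: (batch, m, d)
    # Attention scores over the dialogue history (a dot product after a
    # learned projection -- the exact scoring function is assumed).
    scores = torch.bmm(encoder_states @ W_att,
                       decoder_state.unsqueeze(2)).squeeze(2)            # (batch, m)
    alpha = F.softmax(scores, dim=-1)
    context = torch.bmm(alpha.unsqueeze(1), encoder_states).squeeze(1)   # (batch, d)
    # Concatenate and project to vocabulary logits, then normalize.
    logits = torch.cat([decoder_state, context], dim=-1) @ U_proj        # (batch, |V|)
    return F.softmax(logits, dim=-1)

batch, m, d, vocab = 1, 12, 400, 10000
probs = decode_step(torch.randn(batch, d), torch.randn(batch, m, d),
                    torch.randn(d, d), torch.randn(2 * d, vocab))
print(probs.shape)  # torch.Size([1, 10000])
```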
Our Framework ::: Entity-Consistency Augmented Decoder
As shown in Section SECREF7, the generation of tokens is based only on the dialogue history attention, which makes the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency: we first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-augmented decoder is constrained to the entities within the most probable row, which improves entity generation consistency. Next, we perform the column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection
In our framework, the KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we will first describe how to represent the inputs, and then we will describe our memory network-based retriever.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Dialogue History Representation:
We encode the dialogue history by adopting the neural bag-of-words (BoW), following the original paper BIBREF13. Each token in the dialogue history is mapped into a vector by another embedding function $\phi ^{\text{emb}^{\prime }}(x)$ and the dialogue history representation $\mathbf {q}$ is computed as the sum of these vectors: $\mathbf {q} = \sum ^{m}_{i=1} \phi ^{\text{emb}^{\prime }} (x_{i}) $.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: KB Row Representation:
In this section, we describe how to encode the KB row. Each KB cell is represented by the embedding of its cell value $v$, i.e. $\mathbf {c}_{j, k} = \phi ^{\text{value}}(v_{j, k})$, and the neural BoW is also used to represent a KB row $\mathbf {r}_{j}$ as $\mathbf {r}_{j} = \sum _{k=1}^{|\mathcal {C}|} \mathbf {c}_{j,k}$.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Memory Network-Based Retriever:
We model the KB retrieval process as selecting the row that most likely supports the response generation. The memory network BIBREF13 has been shown to be effective at modeling this kind of selection. For an $n$-hop memory network, the model keeps a set of input matrices $\lbrace R^{1}, R^{2}, ..., R^{n+1}\rbrace $, where each $R^{i}$ is a stack of $|\mathcal {R}|$ inputs $(\mathbf {r}^{i}_1, \mathbf {r}^{i}_2, ..., \mathbf {r}^{i}_{|\mathcal {R}|})$. The model also keeps the query $\mathbf {q}^{1}$ as the input. A single-hop memory network computes the probability $\mathbf {a}_j$ of selecting the $j^{\text{th}}$ input as
For the multi-hop cases, layers of single hop memory network are stacked and the query of the $(i+1)^{\text{th}}$ layer network is computed as
and the output of the last layer is used as the output of the whole network. For more details about memory network, please refer to the original paper BIBREF13.
After getting $\mathbf {a}$, we represent the retrieval results as a 0-1 matrix $T \in \lbrace 0, 1\rbrace ^{|\mathcal {R}|\times \mathcal {|C|}}$, where each element in $T$ is calculated as
In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\text{th}}$ row and the $k^{\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten T to a 0-1 vector $\mathbf {t} \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$ (where $|\mathcal {E}|$ equals $|\mathcal {R}|\times \mathcal {|C|}$) as our retrieval row results.
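A compact sketch of the retriever is shown below; it collapses the per-hop input matrices into a single row memory and uses random BoW encodings, so it illustrates the hop/attention structure rather than the exact model.

```python
import torch
import torch.nn.functional as F

def retrieve_row(query, row_memory, hops=3):
    # query: (d,) neural-BoW encoding of the dialogue history.
    # row_memory: (num_rows, d) neural-BoW encodings of the KB rows.
    # Each hop attends over the rows and refines the query; the final
    # attention picks the single most relevant row (simplified: the real
    # model keeps separate input matrices for each hop).
    for _ in range(hops):
        attention = F.softmax(row_memory @ query, dim=0)   # (num_rows,)
        query = query + attention @ row_memory             # query update
    selected = torch.zeros(row_memory.size(0))
    selected[attention.argmax()] = 1.0                     # 0-1 row indicator
    return attention, selected

d, num_rows = 128, 7
attention, indicator = retrieve_row(torch.randn(d), torch.randn(num_rows, d))
print(indicator)  # one-hot vector marking the retrieved KB row
```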
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Column Selection
After getting the retrieved row result that indicates which KB row is the most relevant to the generation, we further perform column attention at decoding time to select the most probable KB column. For our KB column selection, following eric:2017:SIGDial, we use the decoder hidden state $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$ to compute an attention score with the embedding of the column attribute name. The attention score $\mathbf {c}\in R^{|\mathcal {E}|}$ then becomes the logits of the column to be selected, which can be calculated as
where $\mathbf {c}_j$ is the attention score of the $j^{\text{th}}$ KB column, and $\mathbf {k}_j$ is represented with the word embedding of the KB column name. $W^{^{\prime }}_{1}$, $W^{^{\prime }}_{2}$ and $\mathbf {t}^{T}$ are trainable parameters of the model.
Our Framework ::: Entity-Consistency Augmented Decoder ::: Decoder with Retrieved Entity
After the row selection and column selection, we can define the final retrieved KB entity score as the element-wise product of the row retrieval result and the column selection score, which can be calculated as
where the $v^{t}$ indicates the final KB retrieved entity score. Finally, we follow eric:2017:SIGDial to use copy mechanism to incorporate the retrieved entity, which can be defined as
where $\mathbf {o}_t$'s dimensionality is $|\mathcal {V}| + |\mathcal {E}|$. In $\mathbf {v}^t$, the lower $|\mathcal {V}|$ dimensions are zero and the remaining $|\mathcal {E}|$ dimensions are the retrieved entity scores.
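The copy-augmented output distribution can be sketched as follows: vocabulary logits are concatenated with the flattened entity scores (row indicator combined with column attention), so entities outside the retrieved row receive no support. Tensor names and the direct softmax over the concatenation are simplifications.

```python
import torch
import torch.nn.functional as F

def output_distribution(vocab_logits, row_indicator, column_scores):
    # vocab_logits: (|V|,) scores over ordinary words.
    # row_indicator: (num_rows, num_cols) 0-1 retrieval result T.
    # column_scores: (num_cols,) column attention logits c.
    # Entity scores: combination of row selection and column attention,
    # flattened to |E| = num_rows * num_cols (a sketch of v^t).
    entity_scores = (row_indicator * column_scores.unsqueeze(0)).flatten()
    return F.softmax(torch.cat([vocab_logits, entity_scores]), dim=0)

dist = output_distribution(torch.randn(10000),
                           torch.zeros(7, 6).index_fill_(0, torch.tensor([3]), 1.0),
                           torch.randn(6))
print(dist.shape)  # torch.Size([10042]) -- |V| + |E| outputs
```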
Training the KB-Retriever
As mentioned in Section SECREF9, we adopt the memory network to train our KB-retriever. However, in the Seq2Seq dialogue generation, the training data does not include the annotated KB row retrieval results, which makes supervised training of the KB-retriever impossible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the training of the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\operatornamewithlimits{argmax}$ during training.
Training the KB-Retriever ::: Training with Distant Supervision
Although it's difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response, and then obtain weakly labeled data for the retriever. Intuitively, the current utterance in a dialogue usually belongs to one topic, and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the utterance. In our training with distant supervision, we further simplify our assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the most relevant KB row, which means that for a $k$-turned dialogue, we construct $k$ pairs of training instances for the retriever and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \mid i \le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.
In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface forms of the entities in the row. Taking the dialogue in Figure FIGREF1 as an example, the similarity of the 4$^\text{th}$ row equals 4, with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context; and the similarity of the 7$^\text{th}$ row equals 1, with only “road block nearby” matching.
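The heuristic itself is easy to sketch: score each KB row by how many of its entity surface forms appear in the dialogue and take the argmax as the weak label $T^*$. The toy KB below reuses the entities from the gas-station example; row contents are otherwise invented.

```python
def label_relevant_row(dialogue_text, kb_rows):
    # Each row is a dict of column -> entity surface form. The similarity
    # of a row is the number of its entity values appearing as spans in
    # the dialogue; the argmax row becomes the weak retrieval label T*.
    def similarity(row):
        return sum(1 for value in row.values() if value in dialogue_text)
    return max(range(len(kb_rows)), key=lambda j: similarity(kb_rows[j]))

kb = [
    {"poi": "Valero", "address": "200 Alester Ave", "type": "gas station"},
    {"poi": "Ames Pizza", "address": "899 Ames Ct", "type": "pizza restaurant"},
]
dialogue = "Where is the nearest gas station? Valero is at 200 Alester Ave."
print(label_relevant_row(dialogue, kb))  # 0 -- the Valero row supports the dialogue
```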
In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input.
Training the KB-Retriever ::: Training with Gumbel-Softmax
In addition to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can train it along with the training of the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable and the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation to the discrete variable and has been proved to work in sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB-retriever. We use
as the approximation of $T$, where $\mathbf {g}_{j}$ are i.i.d. samples drawn from $\text{Gumbel}(0,1)$ and $\tau $ is a constant that controls the smoothness of the distribution. $T^{\text{approx}}_{j}$ replaces $T_{j}$ in Equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\mathbf {V}$ to get $\mathbf {v}^{\mathbf {t}^{\text{approx}^{\prime }}}$, and the training signal from the Seq2Seq generation is passed via the logit
To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and further fine-tuning our framework.
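A standalone sketch of the Gumbel-Softmax relaxation used here is given below; the row scores are random placeholders and the temperature plays the role of $\tau$.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_rows(row_logits, tau=0.5):
    # Differentiable approximation of the argmax row selection: add
    # i.i.d. Gumbel(0, 1) noise to the logits, then apply a
    # temperature-scaled softmax (the relaxation of BIBREF14).
    uniform = torch.rand_like(row_logits)
    gumbel_noise = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)
    return F.softmax((row_logits + gumbel_noise) / tau, dim=-1)

row_logits = torch.randn(7)            # memory-network scores over 7 KB rows
soft_selection = gumbel_softmax_rows(row_logits)
print(soft_selection)                  # sums to 1, peaked at the sampled row
```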
Training the KB-Retriever ::: Experimental Settings
We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: the navigation, weather and calendar domains. For the weather domain, we follow wen2018sequence to separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB. In this case, we pad a special token “-” in these incomplete KBs. Our framework is trained separately on these three domains, using the same train/validation/test split sets as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use another public dataset, CamRest BIBREF11, and partition the dataset into training, validation and testing sets in the ratio 3:1:1. In particular, we hired some human experts to format the CamRest dataset by equipping every dialogue with the corresponding KB.
All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The dimensionality of the embeddings is selected from $\lbrace 100, 200\rbrace $ and the number of LSTM hidden units is selected from $\lbrace 50, 100, 150, 200, 350\rbrace $. The dropout we use in our framework is selected from $\lbrace 0.25, 0.5, 0.75\rbrace $ and the batch size we adopt is selected from $\lbrace 1,2\rbrace $. L2 regularization is used on our model with a weight of $5\times 10^{-6}$ to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.
We adopt both the automatic and human evaluations in our experiments.
Training the KB-Retriever ::: Baseline Models
We compare our model with several baselines including:
Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.
Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.
KV Net BIBREF6: The model adopted an augmented decoder which decodes over the concatenation of the vocabulary and KB entities, allowing the model to generate entities.
Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.
DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.
On the InCar dataset, for Attn seq2seq, Ptr-UNK and Mem2Seq, we adopt the reported results from madotto2018mem2seq. On the CamRest dataset, for Mem2Seq, we adopt their open-sourced code to get the results, while for DSR, we run their code on the same dataset to obtain the results.
Results
Following the prior works BIBREF6, BIBREF7, BIBREF9, we adopt the BLEU and the Micro Entity F1 metrics to evaluate our model's performance. The experimental results are illustrated in Table TABREF30.
In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with *) results, which are reported from eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses the entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noticing that our framework with the two methods still outperforms KV Net on the InCar dataset on the whole BLEU and Entity F metrics, which demonstrates the effectiveness of our framework.
In the second block of Table TABREF30, we can see that our framework trained with both distant supervision and Gumbel-Softmax beats all existing models on the two datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework has achieved a 2.5% improvement on the navigate domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module, and that our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement has been witnessed, which further shows the effectiveness of our framework.
Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.
Results ::: The proportion of responses that can be supported by a single KB row
In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.
We define a response as being supported by the most relevant KB row if all the responded entities are included in that row. We study the proportion of these responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row. Correctly retrieving the supporting row should be beneficial.
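The check can be sketched as below: a response counts as supported if some single KB row contains every entity it mentions, and the proportion is averaged over the test set (entity extraction from the response is assumed to be given).

```python
def supported_by_single_row(response_entities, kb_rows):
    # True if some single KB row contains every entity in the response.
    return any(set(response_entities) <= set(row.values()) for row in kb_rows)

def single_row_support_rate(examples):
    # examples: list of (response_entities, kb_rows) pairs from the test set.
    supported = sum(supported_by_single_row(ents, kb) for ents, kb in examples)
    return supported / max(len(examples), 1)

kb = [{"poi": "Valero", "address": "200 Alester Ave", "type": "gas station"}]
print(single_row_support_rate([(["Valero", "200 Alester Ave"], kb)]))  # 1.0
```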
We further study the weather domain to see the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It's not rainy today”, and the related KB entity is sunny. These cases provide challenges beyond the scope of this paper. If we consider this kind of case as being supported by a single row, the proportion in the weather domain is 99%.
Results ::: Generation Consistency
In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that have multiple entities. An utterance is considered consistent if it has multiple entities and these entities belong to the same row, which we annotated with distant supervision.
The consistency result is shown in Table TABREF37. From this table, we can see that incorporating retriever in the dialogue generation improves the consistency.
Results ::: Correlation between the number of KB rows and generation consistency
To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distant supervision setting.
We choose KBs with different numbers of rows, on a scale from 1 to 5, for generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information harms dialogue generation consistency.
Results ::: Visualization
To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probability at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\text{th}$ row and the $1^\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.
Results ::: Human Evaluation
We provide a human evaluation of our framework and the compared models. The evaluated responses are based on distinct dialogue histories. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.
The evaluation results are illustrated in Table TABREF37. According to the table, our framework outperforms the baseline models on all metrics. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate more of the informative content that users want to know.
Related Work
Sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogue BIBREF24, BIBREF25 in an end-to-end training fashion. Recently, Seq2Seq models have also been used for learning task-oriented dialogue, where how to query the structured KB remains a challenge.
Properly querying the KB has long been a challenge in task-oriented dialogue systems. In pipeline systems, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with a distributed representation BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, the retrieval result can be treated as a numeric representation of the API call return.
Instead of interacting with the KB via API calls, more and more recent works try to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is to treat it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, where the return is a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities during dialogue generation to improve the consistency among the output entities.
Conclusion
In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform an attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of the KB query and the success of task-oriented dialogue generation.
Acknowledgments
We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153. | BLEU, Micro Entity F1, quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5 |
66d743b735ba75589486e6af073e955b6bb9d2a4 | 66d743b735ba75589486e6af073e955b6bb9d2a4_0 | Q: What were the baseline systems?
Text: Introduction
Task-oriented dialogue systems, which help users achieve specific goals with natural language, are attracting more and more research attention. With the success of sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works have tried to model task-oriented dialogue as Seq2Seq generation of a response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from manually designed pipeline modules and the heavy annotation labor for these modules.
Different from typical text generation, successful conversations in a task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query about the gas station, the dialogue system is required to retrieve entities like “200 Alester Ave” and “Valero”. For task-oriented systems based on Seq2Seq generation, there is a trend in recent studies towards modeling the KB query as an attention network over the entire set of KB entity representations, hoping to learn a model that pays more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with the over-the-entire-KB attention mechanism, these methods do not guarantee generation consistency regarding KB entities and sometimes yield responses with conflicting entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. It is promising to incorporate such an observation into the Seq2Seq dialogue generation model, since it encourages KB-relevant generation and prevents the model from producing responses with conflicting entities.
To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module, the KB-retriever, to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform an attention mechanism to select the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.
Since dialogue datasets are not typically annotated with retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selection process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Retrievers trained with both the distant supervision and the Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of responses in the dataset can be supported by a single KB row, and that better retrieval results lead to better task-oriented dialogue generation performance.
Definition
In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.
Definition ::: Dialogue History
Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\rbrace $. At the $i^{\text{th}}$ turn of the dialogue, we aggregate dialogue context which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$ and use $\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.
Definition ::: Knowledge Base
In this paper, we assume access to a relational-database-like KB $B$, which consists of $|\mathcal {R}|$ rows and $|\mathcal {C}|$ columns. The value of the entity in the $j^{\text{th}}$ row and the $i^{\text{th}}$ column is denoted as $v_{j, i}$.
Definition ::: Seq2Seq Dialogue Generation
We define the Seq2Seq task-oriented dialogue generation as finding the most likely response $\mathbf {y}$ according to the input dialogue history $\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as
where $y_t$ represents an output token.
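Concretely, this presumably corresponds to the standard autoregressive factorization over output tokens (written here as an assumed form):

```latex
P(\mathbf{y} \mid \mathbf{x}, B) = \prod_{t=1}^{|\mathbf{y}|} P\big(y_t \mid y_1, \ldots, y_{t-1}, \mathbf{x}, B\big)
```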
Our Framework
In this section, we describe our framework for end-to-end task-oriented dialogue. The architecture of our framework is shown in Figure FIGREF3, which consists of two major components: a memory network-based retriever and Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row and then filters the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further perform an attention mechanism to choose the most probable KB column. We present the details of our framework in the following sections.
Our Framework ::: Encoder
In our encoder, we adopt a bidirectional LSTM BIBREF15 to encode the dialogue history $\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\mathbf {x}$ to vectors with an embedding function $\phi ^{\text{emb}}$, and then the BiLSTM reads the vectors forward and backward to produce context-sensitive hidden states $(\mathbf {h}_{1}, \mathbf {h}_2, ..., \mathbf {h}_{m})$ by repeatedly applying the recurrence $\mathbf {h}_{i}=\text{BiLSTM}\left( \phi ^{\text{emb}}\left( x_{i}\right) , \mathbf {h}_{i-1}\right)$.
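A minimal PyTorch sketch of such an encoder follows; the module names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DialogueEncoder(nn.Module):
    """Embeds dialogue-history tokens and encodes them with a BiLSTM."""

    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)        # phi^emb
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, m) word indices of the dialogue history
        emb = self.embedding(token_ids)         # (batch, m, emb_dim)
        hidden_states, _ = self.bilstm(emb)     # (batch, m, 2 * hidden_dim)
        return hidden_states                    # h_1 ... h_m
```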
Our Framework ::: Vanilla Attention-based Decoder
Here, we follow eric:2017:SIGDial and adopt an attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$. For the generation of the next token $y_t$, their model first calculates an attentive representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ of the dialogue history as
Then, the concatenation of the hidden representation of the partially generated sequence $\tilde{\mathbf {h}}_t$ and the attentive dialogue history representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ is projected to the vocabulary space $\mathcal {V}$ by $U$ as
to calculate the score (logit) for the next token generation. The probability of the next token $y_t$ is finally calculated as
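As a concrete illustration of this decoding step, the sketch below implements a standard Luong-style attention over the encoder states followed by the vocabulary projection; the exact scoring function of eric:2017:SIGDial is an assumption here rather than a reproduction.

```python
import torch
import torch.nn.functional as F

def decode_step(dec_hidden, enc_states, U):
    """One vanilla attention-decoder step.

    dec_hidden: (batch, d)      current decoder LSTM state ~h_t
    enc_states: (batch, m, d)   encoder states h_1 .. h_m
    U:          nn.Linear(2 * d, vocab_size), projection to the vocabulary
    """
    # attention weights over the dialogue history (dot-product scoring assumed)
    scores = torch.bmm(enc_states, dec_hidden.unsqueeze(2)).squeeze(2)   # (batch, m)
    alpha = F.softmax(scores, dim=-1)
    context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)       # ~h'_t

    logits = U(torch.cat([dec_hidden, context], dim=-1))                 # (batch, |V|)
    return F.softmax(logits, dim=-1)                                     # P(y_t | ...)
```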
Our Framework ::: Entity-Consistency Augmented Decoder
As shown in section SECREF7, the generation of tokens is based only on dialogue history attention, which makes the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency: we first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-augmented decoder is constrained to the entities within the most probable row, thus improving entity generation consistency. Next, we perform column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection
In our framework, the KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we first describe how to represent the inputs, and then we describe our memory network-based retriever.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Dialogue History Representation:
We encode the dialogue history by adopting the neural bag-of-words (BoW) representation, following the original paper BIBREF13. Each token in the dialogue history is mapped to a vector by another embedding function $\phi ^{\text{emb}^{\prime }}(x)$, and the dialogue history representation $\mathbf {q}$ is computed as the sum of these vectors: $\mathbf {q} = \sum ^{m}_{i=1} \phi ^{\text{emb}^{\prime }} (x_{i}) $.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: KB Row Representation:
In this section, we describe how to encode a KB row. Each KB cell is represented by the embedding of its cell value $v$ as $\mathbf {c}_{j, k} = \phi ^{\text{value}}(v_{j, k})$, and the neural BoW is also used to represent a KB row $\mathbf {r}_{j}$ as $\mathbf {r}_{j} = \sum _{k=1}^{|\mathcal {C}|} \mathbf {c}_{j,k}$.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Memory Network-Based Retriever:
We model the KB retrieval process as selecting the row that most likely supports the response generation. The memory network BIBREF13 has been shown to be effective for modeling this kind of selection. For an $n$-hop memory network, the model keeps a set of input matrices $\lbrace R^{1}, R^{2}, ..., R^{n+1}\rbrace $, where each $R^{i}$ is a stack of $|\mathcal {R}|$ inputs $(\mathbf {r}^{i}_1, \mathbf {r}^{i}_2, ..., \mathbf {r}^{i}_{|\mathcal {R}|})$. The model also keeps the query $\mathbf {q}^{1}$ as the input. A single-hop memory network computes the probability $\mathbf {a}_j$ of selecting the $j^{\text{th}}$ input as
For the multi-hop case, layers of the single-hop memory network are stacked, and the query of the $(i+1)^{\text{th}}$ layer network is computed as
and the output of the last layer is used as the output of the whole network. For more details about the memory network, please refer to the original paper BIBREF13.
After getting $\mathbf {a}$, we represent the retrieval results as a 0-1 matrix $T \in \lbrace 0, 1\rbrace ^{|\mathcal {R}|\times \mathcal {|C|}}$, where each element in $T$ is calculated as
In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\text{th}}$ row and the $k^{\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten $T$ to a 0-1 vector $\mathbf {t} \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$ (where $|\mathcal {E}|$ equals $|\mathcal {R}|\times \mathcal {|C|}$) as our row retrieval result.
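A compact sketch of the multi-hop memory-network row retriever described above is given below; the bag-of-words encoders and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KBRowRetriever(nn.Module):
    """Selects the KB row most likely to support the response (n-hop memory net)."""

    def __init__(self, vocab_size, dim=200, hops=3):
        super().__init__()
        self.hops = hops
        # one embedding matrix per hop plus one for the output side (R^1 .. R^{n+1})
        self.embeddings = nn.ModuleList(
            [nn.Embedding(vocab_size, dim, padding_idx=0) for _ in range(hops + 1)]
        )

    def forward(self, history_ids, row_ids):
        # history_ids: (batch, m) tokens of the dialogue history
        # row_ids:     (batch, |R|, |C|) cell-value tokens of each KB row
        q = self.embeddings[0](history_ids).sum(dim=1)                     # BoW query q^1
        for hop in range(self.hops):
            keys = self.embeddings[hop](row_ids).sum(dim=2)                # (batch, |R|, dim)
            vals = self.embeddings[hop + 1](row_ids).sum(dim=2)
            a = F.softmax(torch.bmm(keys, q.unsqueeze(2)).squeeze(2), -1)  # P(select row j)
            q = q + torch.bmm(a.unsqueeze(1), vals).squeeze(1)             # next-hop query
        # hard 0-1 row indicator (argmax over rows); in the paper this is
        # expanded over the |R| x |C| cells and flattened to the vector t
        row_indicator = F.one_hot(a.argmax(dim=-1),
                                  num_classes=row_ids.size(1)).float()
        return a, row_indicator
```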
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Column Selection
After obtaining the retrieved row result that indicates which KB row is most relevant to the generation, we further perform column attention at decoding time to select the most probable KB column. For our KB column selection, following eric:2017:SIGDial, we use the decoder hidden states $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$ to compute an attention score with the embedding of each column attribute name. The attention score $\mathbf {c}\in R^{|\mathcal {E}|}$ then becomes the logits for column selection, which can be calculated as
where $\mathbf {c}_j$ is the attention score of the $j^{\text{th}}$ KB column and $\mathbf {k}_j$ is the word embedding of the KB column name. $W^{^{\prime }}_{1}$, $W^{^{\prime }}_{2}$ and $\mathbf {t}^{T}$ are trainable parameters of the model.
Our Framework ::: Entity-Consistency Augmented Decoder ::: Decoder with Retrieved Entity
After the row selection and column selection, we define the final retrieved KB entity score as the element-wise product of the row retriever result and the column selection score, which can be calculated as
where $v^{t}$ denotes the final KB retrieved entity score. Finally, we follow eric:2017:SIGDial and use the copy mechanism to incorporate the retrieved entity, which can be defined as
where the dimensionality of $\mathbf {o}_t$ is $|\mathcal {V}| + |\mathcal {E}|$. In $\mathbf {v}^t$, the first $|\mathcal {V}|$ entries are zero and the remaining $|\mathcal {E}|$ entries are the retrieved entity scores.
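The sketch below shows how the row indicator and the column attention could be combined into entity scores and appended to the vocabulary logits; tensor shapes and the bilinear-style scoring form are assumptions, not the exact parameterization of the paper.

```python
import torch
import torch.nn.functional as F

def entity_augmented_logits(vocab_logits, dec_hidden, col_name_emb,
                            row_indicator, W1, W2):
    """Combine row retrieval and column attention into copy-style logits.

    vocab_logits:  (batch, |V|)      scores over the vocabulary
    dec_hidden:    (batch, d)        decoder state ~h_t
    col_name_emb:  (batch, |C|, d)   embeddings k_j of the column names
    row_indicator: (batch, |R|)      0-1 row retrieval result
    W1, W2:        nn.Linear layers scoring (column name, decoder state) pairs
    """
    # column attention scores c_j
    col_scores = torch.bmm(W1(col_name_emb),
                           W2(dec_hidden).unsqueeze(2)).squeeze(2)     # (batch, |C|)

    # final entity score: element-wise product of row mask and column score,
    # laid out over the |R| x |C| KB cells and flattened to |E|
    v_t = row_indicator.unsqueeze(2) * col_scores.unsqueeze(1)         # (batch, |R|, |C|)
    v_t = v_t.view(v_t.size(0), -1)                                    # (batch, |E|)

    # copy mechanism: decode over the concatenation of vocabulary and KB entities
    o_t = torch.cat([vocab_logits, v_t], dim=-1)                       # (batch, |V| + |E|)
    return F.softmax(o_t, dim=-1)
```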
Training the KB-Retriever
As mentioned in section SECREF9, we adopt the memory network as our KB-retriever. However, in Seq2Seq dialogue generation, the training data does not include annotated KB row retrieval results, which makes supervised training of the KB-retriever impossible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\operatornamewithlimits{argmax}$ during training.
Training the KB-Retriever ::: Training with Distant Supervision
Although it is difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response and then obtain weakly labeled data for the retriever. Intuitively, the current utterance in a dialogue usually belongs to one topic, and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the utterance. In our training with distant supervision, we further simplify this assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the most relevant KB row. This means that for a $k$-turn dialogue, we construct $k$ pairs of training instances for the retriever, and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \mid i \le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.
In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface forms of the entities in the row. Taking the dialogue in Figure FIGREF1 as an example, the similarity of the 4$^\text{th}$ row equals 4, with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context, and the similarity of the 7$^\text{th}$ row equals 1, with only “road block nearby” matching.
In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training of the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input.
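The row-selection heuristic can be illustrated as follows; the surface-form matching is a simple substring check here, which is an assumption about the exact matching rule rather than the authors' implementation.

```python
def weakly_label_row(dialogue_text, kb_rows):
    """Pick the KB row with the most entity surface forms appearing in the dialogue.

    dialogue_text: the whole dialogue as one lowercased string.
    kb_rows: list of dicts, e.g. {"poi": "Valero", "address": "200 Alester Ave", ...}
    Returns the index of the most similar row (the weak label T*).
    """
    def similarity(row):
        return sum(1 for value in row.values()
                   if value and value.lower() in dialogue_text)

    return max(range(len(kb_rows)), key=lambda j: similarity(kb_rows[j]))
```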
Training the KB-Retriever ::: Training with Gumbel-Softmax
In addition to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can train it along with the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable, so the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation to discrete variables and has been proved to work for sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB-retriever. We use
as the approximation of $T$, where $\mathbf {g}_{j}$ are i.i.d. samples drawn from $\text{Gumbel}(0,1)$ and $\tau $ is a constant that controls the smoothness of the distribution. $T^{\text{approx}}_{j}$ replaces $T_{j}$ in Equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\mathbf {V}$ to get $\mathbf {v}^{\mathbf {t}^{\text{approx}^{\prime }}}$, and the training signal from the Seq2Seq generation is passed via the logit
To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and then fine-tune our framework.
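A sketch of the Gumbel-Softmax relaxation used in place of the hard argmax is given below; the temperature value and the sampling details are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_rows(row_logits, tau=1.0):
    """Differentiable approximation of the 0-1 row selection.

    row_logits: (batch, |R|) unnormalized scores (or log-probabilities)
    from the KB-retriever. Returns soft selection weights that approach
    a one-hot vector as tau -> 0.
    """
    # sample i.i.d. Gumbel(0, 1) noise: g = -log(-log(u)), u ~ Uniform(0, 1)
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(row_logits) + 1e-20) + 1e-20)
    return F.softmax((row_logits + gumbel_noise) / tau, dim=-1)
```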
Training the KB-Retriever ::: Experimental Settings
We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: navigation, weather and calendar. For the weather domain, we follow wen2018sequence and separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB; in this case, we pad these incomplete KBs with a special token “-”. Our framework is trained separately on these three domains, using the same train/validation/test splits as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use the public CamRest dataset BIBREF11 and partition it into training, validation and test sets in a 3:1:1 ratio. In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with its corresponding KB.
All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The embedding dimensionality is selected from $\lbrace 100, 200\rbrace $ and the number of LSTM hidden units is selected from $\lbrace 50, 100, 150, 200, 350\rbrace $. The dropout rate used in our framework is selected from $\lbrace 0.25, 0.5, 0.75\rbrace $ and the batch size is selected from $\lbrace 1,2\rbrace $. L2 regularization with a coefficient of $5\times 10^{-6}$ is applied to our model to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters of our model and adopt the suggested hyper-parameters for optimization.
We adopt both automatic and human evaluations in our experiments.
Training the KB-Retriever ::: Baseline Models
We compare our model with several baselines including:
Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.
Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.
KV Net BIBREF6: This model adopts an augmented decoder which decodes over the concatenation of the vocabulary and KB entities, allowing the model to generate entities.
Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.
DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.
On the InCar dataset, for Attn seq2seq, Ptr-UNK and Mem2Seq, we adopt the results reported by madotto2018mem2seq. On the CamRest dataset, for Mem2Seq, we run their open-sourced code to obtain the results, while for DSR, we run their code on the same dataset to obtain the results.
Results
Following prior works BIBREF6, BIBREF7, BIBREF9, we adopt BLEU and Micro Entity F1 to evaluate model performance. The experimental results are illustrated in Table TABREF30.
In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with *) results reported by eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noting that our framework with both training methods still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework.
In the second block of Table TABREF30, we can see that our framework trained with either the distant supervision or the Gumbel-Softmax method beats all existing models on both datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement on the navigation domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module: our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.
Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.
Results ::: The proportion of responses that can be supported by a single KB row
In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.
We define a response as being supported by the most relevant KB row if all of its responded entities are included in that row. We study the proportion of such responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by a single relevant KB row, so correctly retrieving the supporting row should be beneficial.
We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It's not rainy today”, and the related KB entity is sunny. These cases pose challenges beyond the scope of this paper. If we consider such cases as being supported by a single row, the proportion in the weather domain is 99%.
Results ::: Generation Consistency
In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that contain multiple entities. An utterance is considered consistent if it has multiple entities and these entities all belong to the same row that we annotated with distant supervision.
The consistency results are shown in Table TABREF37. From this table, we can see that incorporating the retriever into dialogue generation improves consistency.
Results ::: Correlation between the number of KB rows and generation consistency
To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distant supervision setting.
We choose KBs with different numbers of rows, on a scale from 1 to 5, for generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information would harm the dialogue generation consistency.
Results ::: Visualization
To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probability at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\text{th}$ row and the $1^\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.
Results ::: Human Evaluation
We provide a human evaluation of our framework and the compared models. The evaluated responses are based on distinct dialogue histories. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.
The evaluation results are illustrated in Table TABREF37. According to the table, our framework outperforms the baseline models on all metrics. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate more of the informative content that users want to know.
Related Work
Sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogue BIBREF24, BIBREF25 in an end-to-end training fashion. Recently, Seq2Seq models have also been used for learning task-oriented dialogue, where how to query the structured KB remains a challenge.
Properly querying the KB has long been a challenge in task-oriented dialogue systems. In pipeline systems, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with a distributed representation BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, the retrieval result can be treated as a numeric representation of the API call return.
Instead of interacting with the KB via API calls, more and more recent works try to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is to treat it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, where the return is a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities during dialogue generation to improve the consistency among the output entities.
Conclusion
In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform an attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of the KB query and the success of task-oriented dialogue generation.
Acknowledgments
We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153. | Attn seq2seq, Ptr-UNK, KV Net, Mem2Seq, DSR |
b9f852256113ef468d60e95912800fab604966f6 | b9f852256113ef468d60e95912800fab604966f6_0 | Q: Which dialog datasets did they experiment with?
Text: Introduction
Task-oriented dialogue systems, which help users achieve specific goals with natural language, are attracting more and more research attention. With the success of sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works have tried to model task-oriented dialogue as Seq2Seq generation of a response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from manually designed pipeline modules and the heavy annotation labor for these modules.
Different from typical text generation, successful conversations in a task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query about the gas station, the dialogue system is required to retrieve entities like “200 Alester Ave” and “Valero”. For task-oriented systems based on Seq2Seq generation, there is a trend in recent studies towards modeling the KB query as an attention network over the entire set of KB entity representations, hoping to learn a model that pays more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with the over-the-entire-KB attention mechanism, these methods do not guarantee generation consistency regarding KB entities and sometimes yield responses with conflicting entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. It is promising to incorporate such an observation into the Seq2Seq dialogue generation model, since it encourages KB-relevant generation and prevents the model from producing responses with conflicting entities.
To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module, the KB-retriever, to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform an attention mechanism to select the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.
Since dialogue datasets are not typically annotated with retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selection process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Retrievers trained with both the distant supervision and the Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of responses in the dataset can be supported by a single KB row, and that better retrieval results lead to better task-oriented dialogue generation performance.
Definition
In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.
Definition ::: Dialogue History
Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\rbrace $. At the $i^{\text{th}}$ turn of the dialogue, we aggregate dialogue context which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$ and use $\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.
Definition ::: Knowledge Base
In this paper, we assume access to a relational-database-like KB $B$, which consists of $|\mathcal {R}|$ rows and $|\mathcal {C}|$ columns. The value of the entity in the $j^{\text{th}}$ row and the $i^{\text{th}}$ column is denoted as $v_{j, i}$.
Definition ::: Seq2Seq Dialogue Generation
We define the Seq2Seq task-oriented dialogue generation as finding the most likely response $\mathbf {y}$ according to the input dialogue history $\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as
where $y_t$ represents an output token.
Our Framework
In this section, we describe our framework for end-to-end task-oriented dialogue. The architecture of our framework is shown in Figure FIGREF3, which consists of two major components: a memory network-based retriever and Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row and then filters the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further perform an attention mechanism to choose the most probable KB column. We present the details of our framework in the following sections.
Our Framework ::: Encoder
In our encoder, we adopt a bidirectional LSTM BIBREF15 to encode the dialogue history $\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\mathbf {x}$ to vectors with an embedding function $\phi ^{\text{emb}}$, and then the BiLSTM reads the vectors forward and backward to produce context-sensitive hidden states $(\mathbf {h}_{1}, \mathbf {h}_2, ..., \mathbf {h}_{m})$ by repeatedly applying the recurrence $\mathbf {h}_{i}=\text{BiLSTM}\left( \phi ^{\text{emb}}\left( x_{i}\right) , \mathbf {h}_{i-1}\right)$.
Our Framework ::: Vanilla Attention-based Decoder
Here, we follow eric:2017:SIGDial and adopt an attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$. For the generation of the next token $y_t$, their model first calculates an attentive representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ of the dialogue history as
Then, the concatenation of the hidden representation of the partially generated sequence $\tilde{\mathbf {h}}_t$ and the attentive dialogue history representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ is projected to the vocabulary space $\mathcal {V}$ by $U$ as
to calculate the score (logit) for the next token generation. The probability of the next token $y_t$ is finally calculated as
Our Framework ::: Entity-Consistency Augmented Decoder
As shown in section SECREF7, the generation of tokens is based only on dialogue history attention, which makes the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency: we first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-augmented decoder is constrained to the entities within the most probable row, thus improving entity generation consistency. Next, we perform column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection
In our framework, the KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we first describe how to represent the inputs, and then we describe our memory network-based retriever.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Dialogue History Representation:
We encode the dialogue history by adopting the neural bag-of-words (BoW) representation, following the original paper BIBREF13. Each token in the dialogue history is mapped to a vector by another embedding function $\phi ^{\text{emb}^{\prime }}(x)$, and the dialogue history representation $\mathbf {q}$ is computed as the sum of these vectors: $\mathbf {q} = \sum ^{m}_{i=1} \phi ^{\text{emb}^{\prime }} (x_{i}) $.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: KB Row Representation:
In this section, we describe how to encode a KB row. Each KB cell is represented by the embedding of its cell value $v$ as $\mathbf {c}_{j, k} = \phi ^{\text{value}}(v_{j, k})$, and the neural BoW is also used to represent a KB row $\mathbf {r}_{j}$ as $\mathbf {r}_{j} = \sum _{k=1}^{|\mathcal {C}|} \mathbf {c}_{j,k}$.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Memory Network-Based Retriever:
We model the KB retrieval process as selecting the row that most likely supports the response generation. The memory network BIBREF13 has been shown to be effective for modeling this kind of selection. For an $n$-hop memory network, the model keeps a set of input matrices $\lbrace R^{1}, R^{2}, ..., R^{n+1}\rbrace $, where each $R^{i}$ is a stack of $|\mathcal {R}|$ inputs $(\mathbf {r}^{i}_1, \mathbf {r}^{i}_2, ..., \mathbf {r}^{i}_{|\mathcal {R}|})$. The model also keeps the query $\mathbf {q}^{1}$ as the input. A single-hop memory network computes the probability $\mathbf {a}_j$ of selecting the $j^{\text{th}}$ input as
For the multi-hop case, layers of the single-hop memory network are stacked, and the query of the $(i+1)^{\text{th}}$ layer network is computed as
and the output of the last layer is used as the output of the whole network. For more details about the memory network, please refer to the original paper BIBREF13.
After getting $\mathbf {a}$, we represent the retrieval results as a 0-1 matrix $T \in \lbrace 0, 1\rbrace ^{|\mathcal {R}|\times \mathcal {|C|}}$, where each element in $T$ is calculated as
In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\text{th}}$ row and the $k^{\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten $T$ to a 0-1 vector $\mathbf {t} \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$ (where $|\mathcal {E}|$ equals $|\mathcal {R}|\times \mathcal {|C|}$) as our row retrieval result.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Column Selection
After obtaining the retrieved row result that indicates which KB row is most relevant to the generation, we further perform column attention at decoding time to select the most probable KB column. For our KB column selection, following eric:2017:SIGDial, we use the decoder hidden states $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$ to compute an attention score with the embedding of each column attribute name. The attention score $\mathbf {c}\in R^{|\mathcal {E}|}$ then becomes the logits for column selection, which can be calculated as
where $\mathbf {c}_j$ is the attention score of the $j^{\text{th}}$ KB column and $\mathbf {k}_j$ is the word embedding of the KB column name. $W^{^{\prime }}_{1}$, $W^{^{\prime }}_{2}$ and $\mathbf {t}^{T}$ are trainable parameters of the model.
Our Framework ::: Entity-Consistency Augmented Decoder ::: Decoder with Retrieved Entity
After the row selection and column selection, we define the final retrieved KB entity score as the element-wise product of the row retriever result and the column selection score, which can be calculated as
where $v^{t}$ denotes the final KB retrieved entity score. Finally, we follow eric:2017:SIGDial and use the copy mechanism to incorporate the retrieved entity, which can be defined as
where the dimensionality of $\mathbf {o}_t$ is $|\mathcal {V}| + |\mathcal {E}|$. In $\mathbf {v}^t$, the first $|\mathcal {V}|$ entries are zero and the remaining $|\mathcal {E}|$ entries are the retrieved entity scores.
Training the KB-Retriever
As mentioned in section SECREF9, we adopt the memory network as our KB-retriever. However, in Seq2Seq dialogue generation, the training data does not include annotated KB row retrieval results, which makes supervised training of the KB-retriever impossible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\operatornamewithlimits{argmax}$ during training.
Training the KB-Retriever ::: Training with Distant Supervision
Although it is difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response and then obtain weakly labeled data for the retriever. Intuitively, the current utterance in a dialogue usually belongs to one topic, and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the utterance. In our training with distant supervision, we further simplify this assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the most relevant KB row. This means that for a $k$-turn dialogue, we construct $k$ pairs of training instances for the retriever, and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \mid i \le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.
In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface forms of the entities in the row. Taking the dialogue in Figure FIGREF1 as an example, the similarity of the 4$^\text{th}$ row equals 4, with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context, and the similarity of the 7$^\text{th}$ row equals 1, with only “road block nearby” matching.
In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training of the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input.
Training the KB-Retriever ::: Training with Gumbel-Softmax
In addition to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can train it along with the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable, so the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation to discrete variables and has been proved to work for sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB-retriever. We use
as the approximation of $T$, where $\mathbf {g}_{j}$ are i.i.d. samples drawn from $\text{Gumbel}(0,1)$ and $\tau $ is a constant that controls the smoothness of the distribution. $T^{\text{approx}}_{j}$ replaces $T_{j}$ in Equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\mathbf {V}$ to get $\mathbf {v}^{\mathbf {t}^{\text{approx}^{\prime }}}$, and the training signal from the Seq2Seq generation is passed via the logit
To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and then fine-tune our framework.
Training the KB-Retriever ::: Experimental Settings
We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: navigation, weather and calendar. For the weather domain, we follow wen2018sequence and separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB; in this case, we pad these incomplete KBs with a special token “-”. Our framework is trained separately on these three domains, using the same train/validation/test splits as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use the public CamRest dataset BIBREF11 and partition it into training, validation and test sets in a 3:1:1 ratio. In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with its corresponding KB.
All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The embedding dimensionality is selected from $\lbrace 100, 200\rbrace $ and the number of LSTM hidden units is selected from $\lbrace 50, 100, 150, 200, 350\rbrace $. The dropout rate used in our framework is selected from $\lbrace 0.25, 0.5, 0.75\rbrace $ and the batch size is selected from $\lbrace 1,2\rbrace $. L2 regularization with a coefficient of $5\times 10^{-6}$ is applied to our model to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters of our model and adopt the suggested hyper-parameters for optimization.
We adopt both automatic and human evaluations in our experiments.
Training the KB-Retriever ::: Baseline Models
We compare our model with several baselines including:
Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.
Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.
KV Net BIBREF6: This model adopts an augmented decoder which decodes over the concatenation of the vocabulary and KB entities, allowing the model to generate entities.
Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.
DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.
On the InCar dataset, for Attn seq2seq, Ptr-UNK and Mem2Seq, we adopt the results reported by madotto2018mem2seq. On the CamRest dataset, for Mem2Seq, we run their open-sourced code to obtain the results, while for DSR, we run their code on the same dataset to obtain the results.
Results
Following prior works BIBREF6, BIBREF7, BIBREF9, we adopt BLEU and Micro Entity F1 to evaluate model performance. The experimental results are illustrated in Table TABREF30.
In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with *) results reported by eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noting that our framework with both training methods still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework.
In the second block of Table TABREF30, we can see that our framework trained with either the distant supervision or the Gumbel-Softmax method beats all existing models on both datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement on the navigation domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module: our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.
Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.
Results ::: The proportion of responses that can be supported by a single KB row
In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.
We define a response as being supported by the most relevant KB row if all of its responded entities are included in that row. We study the proportion of such responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by a single relevant KB row, so correctly retrieving the supporting row should be beneficial.
We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It's not rainy today”, and the related KB entity is sunny. These cases pose challenges beyond the scope of this paper. If we consider such cases as being supported by a single row, the proportion in the weather domain is 99%.
Results ::: Generation Consistency
In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that contain multiple entities. An utterance is considered consistent if it has multiple entities and these entities all belong to the same row that we annotated with distant supervision.
The consistency results are shown in Table TABREF37. From this table, we can see that incorporating the retriever into dialogue generation improves consistency.
Results ::: Correlation between the number of KB rows and generation consistency
To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments with distant manner to study the correlation between the number of KB rows and generation consistency.
We choose KBs with different numbers of rows, on a scale from 1 to 5, for the generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information harms the dialogue generation consistency.
Results ::: Visualization
To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probability at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\text{th}$ row and the $1^\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.
Results ::: Human Evaluation
We provide human evaluation on our framework and the compared models. These responses are based on distinct dialogue history. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.
The evaluation results are illustrated in Table TABREF37. Our framework outperforms the other baseline models on all metrics according to Table TABREF37. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate more of the informative content that users want to know.
Related Work
Sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogue BIBREF24, BIBREF25 in an end-to-end training fashion. Recently, Seq2Seq models have also been used for learning task-oriented dialogues, where how to query the structured KB remains a challenge.
Properly querying the KB has long been a challenge in task-oriented dialogue systems. In pipeline systems, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with distributed representations BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, our retrieval result can be treated as a numeric representation of the API call return.
Instead of interacting with the KB via API calls, more and more recent works have tried to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is treating it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, where the return can be a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities.
Conclusion
In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform an attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of KB querying and the success of task-oriented dialogue generation.
Acknowledgments
We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153. | Camrest, InCar Assistant |
88f8ab2a417eae497338514142ac12c3cec20876 | 88f8ab2a417eae497338514142ac12c3cec20876_0 | Q: What KB is used?
Text: Introduction
Task-oriented dialogue system, which helps users to achieve specific goals with natural language, is attracting more and more research attention. With the success of the sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works tried to model the task-oriented dialogue as the Seq2Seq generation of response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules.
Different from typical text generation, successful conversations in task-oriented dialogue systems heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query on the gas station, the dialogue system is required to retrieve entities like “200 Alester Ave” and “Valero”. For task-oriented systems based on Seq2Seq generation, there is a trend in recent studies towards modeling the KB query as an attention network over the entire KB entity representations, hoping to learn a model that pays more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with the over-the-entire-KB attention mechanism, these methods do not guarantee generation consistency regarding KB entities and sometimes yield responses with conflicting entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for the conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. It is promising to incorporate such an observation into the Seq2Seq dialogue generation model, since it encourages KB-relevant generation and prevents the model from producing responses with conflicting entities.
To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module — the KB-retriever — to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency within the generated entities. In the second step, we further perform an attention mechanism to address the most correlated KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.
Since dialogue datasets are not typically annotated with retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Both the retrievers trained with distant supervision and with the Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of responses in the dataset can be supported by a single KB row and that better retrieval results lead to better task-oriented dialogue generation performance.
Definition
In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation.
Definition ::: Dialogue History
Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\rbrace $. At the $i^{\text{th}}$ turn of the dialogue, we aggregate dialogue context which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$ and use $\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history.
Definition ::: Knowledge Base
In this paper, we assume access to a relational-database-like KB $B$, which consists of $|\mathcal {R}|$ rows and $|\mathcal {C}|$ columns. The value of the entity in the $j^{\text{th}}$ row and the $i^{\text{th}}$ column is noted as $v_{j, i}$.
Definition ::: Seq2Seq Dialogue Generation
We define the Seq2Seq task-oriented dialogue generation as finding the most likely response $\mathbf {y}$ according to the input dialogue history $\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as
where $y_t$ represents an output token.
Our Framework
In this section, we describe our framework for end-to-end task-oriented dialogues. The architecture of our framework is demonstrated in Figure FIGREF3, which consists of two major components: a memory network-based retriever and the Seq2Seq dialogue generation with the KB-retriever. Our framework first uses the KB-retriever to select the most relevant KB row and further filters the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further perform the attention mechanism to choose the most probable KB column. We will present the details of our framework in the following sections.
Our Framework ::: Encoder
In our encoder, we adopt a bidirectional LSTM BIBREF15 to encode the dialogue history $\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\mathbf {x}$ to vectors with an embedding function $\phi ^{\text{emb}}$, and then the BiLSTM reads the vectors forward and backward to produce context-sensitive hidden states $(\mathbf {h}_{1}, \mathbf {h}_2, ..., \mathbf {h}_{m})$ by repeatedly applying the recurrence $\mathbf {h}_{i}=\text{BiLSTM}\left( \phi ^{\text{emb}}\left( x_{i}\right) , \mathbf {h}_{i-1}\right)$.
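A minimal PyTorch sketch of such an encoder is given below; the layer sizes and module names are illustrative and not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class DialogueHistoryEncoder(nn.Module):
    """Embeds dialogue-history tokens and encodes them with a single-layer BiLSTM."""

    def __init__(self, vocab_size: int, emb_dim: int = 200, hidden: int = 200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)            # phi^emb
        self.bilstm = nn.LSTM(emb_dim, hidden // 2, num_layers=1,
                              bidirectional=True, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, m) word indices of the dialogue history
        embedded = self.embedding(token_ids)         # (batch, m, emb_dim)
        hidden_states, _ = self.bilstm(embedded)     # (batch, m, hidden) = h_1 .. h_m
        return hidden_states
```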
Our Framework ::: Vanilla Attention-based Decoder
Here, we follow eric:2017:SIGDial and adopt an attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$. For the generation of the next token $y_t$, their model first calculates an attentive representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ of the dialogue history as
Then, the concatenation of the hidden representation of the partially outputted sequence $\tilde{\mathbf {h}}_t$ and the attentive dialogue history representation $\tilde{\mathbf {h}}^{^{\prime }}_t$ are projected to the vocabulary space $\mathcal {V}$ by $U$ as
to calculate the score (logit) for the next token generation. The probability of next token $y_t$ is finally calculated as
Our Framework ::: Entity-Consistency Augmented Decoder
As shown in section SECREF7, the generation of tokens is based only on the dialogue history attention, which makes the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency. We first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-consistency augmented decoder is constrained to the entities within the most probable row, thus improving entity generation consistency. Next, we perform the column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection
In our framework, the KB-retriever takes the dialogue history and the KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we will first describe how to represent the inputs, and then we will describe our memory network-based retriever.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Dialogue History Representation:
We encode the dialogue history by adopting the neural bag-of-words (BoW) approach following the original paper BIBREF13. Each token in the dialogue history is mapped into a vector by another embedding function $\phi ^{\text{emb}^{\prime }}(x)$ and the dialogue history representation $\mathbf {q}$ is computed as the sum of these vectors: $\mathbf {q} = \sum ^{m}_{i=1} \phi ^{\text{emb}^{\prime }} (x_{i}) $.
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: KB Row Representation:
In this section, we describe how to encode the KB rows. Each KB cell is represented by the embedding of its cell value $v$, $\mathbf {c}_{j, k} = \phi ^{\text{value}}(v_{j, k})$, and the neural BoW is also used to represent a KB row $\mathbf {r}_{j}$ as $\mathbf {r}_{j} = \sum _{k=1}^{|\mathcal {C}|} \mathbf {c}_{j,k}$.
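Both bag-of-words representations amount to summing embeddings. A small sketch (our own, with illustrative module names) could look as follows:

```python
import torch
import torch.nn as nn


class BowEncoders(nn.Module):
    """Neural bag-of-words encoders for the dialogue history (query) and the KB rows."""

    def __init__(self, word_vocab: int, value_vocab: int, dim: int = 200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, dim)    # phi^emb'
        self.value_emb = nn.Embedding(value_vocab, dim)  # phi^value

    def encode_history(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (m,) -> q = sum_i phi^emb'(x_i), shape (dim,)
        return self.word_emb(token_ids).sum(dim=0)

    def encode_rows(self, cell_value_ids: torch.Tensor) -> torch.Tensor:
        # cell_value_ids: (|R|, |C|) -> r_j = sum_k c_{j,k}, shape (|R|, dim)
        return self.value_emb(cell_value_ids).sum(dim=1)
```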
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Memory Network-Based Retriever:
We model the KB retrieval process as selecting the row that most-likely supports the response generation. Memory network BIBREF13 has shown to be effective to model this kind of selection. For a $n$-hop memory network, the model keeps a set of input matrices $\lbrace R^{1}, R^{2}, ..., R^{n+1}\rbrace $, where each $R^{i}$ is a stack of $|\mathcal {R}|$ inputs $(\mathbf {r}^{i}_1, \mathbf {r}^{i}_2, ..., \mathbf {r}^{i}_{|\mathcal {R}|})$. The model also keeps query $\mathbf {q}^{1}$ as the input. A single hop memory network computes the probability $\mathbf {a}_j$ of selecting the $j^{\text{th}}$ input as
For the multi-hop cases, layers of single hop memory network are stacked and the query of the $(i+1)^{\text{th}}$ layer network is computed as
and the output of the last layer is used as the output of the whole network. For more details about memory network, please refer to the original paper BIBREF13.
After getting $\mathbf {a}$, we represent the retrieval results as a 0-1 matrix $T \in \lbrace 0, 1\rbrace ^{|\mathcal {R}|\times \mathcal {|C|}}$, where each element in $T$ is calculated as
In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\text{th}}$ row and the $k^{\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten T to a 0-1 vector $\mathbf {t} \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$ (where $|\mathcal {E}|$ equals $|\mathcal {R}|\times \mathcal {|C|}$) as our retrieval row results.
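A compact sketch of the multi-hop memory scoring and the subsequent hard row selection is shown below. This is our own simplified rendering: the query update follows the standard residual form of memory networks and may differ in detail from the paper's elided equation.

```python
import torch
import torch.nn.functional as F


def memory_network_retrieve(query: torch.Tensor,
                            row_memories: list,
                            n_hops: int = 3) -> torch.Tensor:
    """
    query: (dim,) neural-BoW encoding of the dialogue history (q^1).
    row_memories: list of n_hops + 1 tensors, each (|R|, dim), the stacked row inputs R^i.
    Returns the attention distribution a over rows from the last hop.
    """
    q = query
    a = None
    for hop in range(n_hops):
        scores = row_memories[hop] @ q                # (|R|,)
        a = F.softmax(scores, dim=0)                  # probability of selecting each row
        output = (a.unsqueeze(1) * row_memories[hop + 1]).sum(dim=0)
        q = q + output                                # query update for the next hop
    return a


def hard_row_selection(a: torch.Tensor, n_cols: int) -> torch.Tensor:
    """Turn the row distribution into the 0-1 retrieval matrix T and flatten it to t."""
    best_row = a.argmax()
    T = torch.zeros(a.size(0), n_cols)
    T[best_row] = 1.0
    return T.reshape(-1)                              # t in {0,1}^{|R| x |C|}
```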
Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Column Selection
After getting the retrieved row result that indicates which KB row is the most relevant to the generation, we further perform column attention at decoding time to select the probable KB column. For our KB column selection, following eric:2017:SIGDial we use the decoder hidden states $(\tilde{\mathbf {h}}_{1}, \tilde{\mathbf {h}}_2, ...,\tilde{\mathbf {h}}_t)$ to compute an attention score with the embedding of the column attribute name. The attention score $\mathbf {c}\in R^{|\mathcal {E}|}$ then becomes the logits for the column to be selected, which can be calculated as
where $\mathbf {c}_j$ is the attention score of the $j^{\text{th}}$ KB column, $\mathbf {k}_j$ is the word embedding of the KB column name, and $W^{^{\prime }}_{1}$, $W^{^{\prime }}_{2}$ and $\mathbf {t}^{T}$ are trainable parameters of the model.
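The column attention can be sketched as a small feed-forward scorer between the current decoder state and each column-name embedding. The exact parameterization below is illustrative only.

```python
import torch
import torch.nn as nn


class ColumnAttention(nn.Module):
    """Scores each KB column name against the current decoder hidden state."""

    def __init__(self, dec_dim: int, emb_dim: int, hidden: int = 128):
        super().__init__()
        self.w1 = nn.Linear(dec_dim, hidden, bias=False)   # W'_1
        self.w2 = nn.Linear(emb_dim, hidden, bias=False)   # W'_2
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, dec_state: torch.Tensor, col_name_embs: torch.Tensor) -> torch.Tensor:
        # dec_state: (dec_dim,) current decoder hidden state
        # col_name_embs: (|C|, emb_dim) word embeddings k_j of the column names
        scores = self.v(torch.tanh(self.w1(dec_state) + self.w2(col_name_embs)))
        return scores.squeeze(-1)                          # (|C|,) column logits c
```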
Our Framework ::: Entity-Consistency Augmented Decoder ::: Decoder with Retrieved Entity
After the row selection and column selection, we can define the final retrieved KB entity score as the element-wise dot between the row retriever result and the column selection score, which can be calculated as
where the $v^{t}$ indicates the final KB retrieved entity score. Finally, we follow eric:2017:SIGDial to use copy mechanism to incorporate the retrieved entity, which can be defined as
where the dimensionality of $\mathbf {o}_t$ is $|\mathcal {V}| + |\mathcal {E}|$. In $\mathbf {v}^t$, the lower $|\mathcal {V}|$ entries are zero and the remaining $|\mathcal {E}|$ entries are the retrieved entity scores.
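Putting the pieces together, the output distribution covers the vocabulary plus the flattened KB entities, with the entity part gated by the retrieved row. The sketch below is a simplified rendering of that combination under the assumptions stated in the comments, not the authors' exact copy mechanism.

```python
import torch
import torch.nn.functional as F


def output_distribution(vocab_logits: torch.Tensor,
                        row_mask: torch.Tensor,
                        col_logits: torch.Tensor,
                        n_rows: int, n_cols: int) -> torch.Tensor:
    """
    vocab_logits: (|V|,) scores over ordinary vocabulary words.
    row_mask: (|R|*|C|,) flattened 0-1 retrieval result t (row-major).
    col_logits: (|C|,) column attention scores, broadcast over rows.
    Returns a distribution over |V| + |R|*|C| output symbols.
    """
    entity_scores = row_mask * col_logits.repeat(n_rows)   # element-wise combination v^t
    o = torch.cat([vocab_logits, entity_scores], dim=0)    # copy-augmented logits o_t
    return F.softmax(o, dim=0)
```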
Training the KB-Retriever
As mentioned in section SECREF9, we adopt the memory network to train our KB-retriever. However, in Seq2Seq dialogue generation, the training data does not include annotated KB row retrieval results, which makes supervised training of the KB-retriever infeasible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\operatornamewithlimits{argmax}$ during training.
Training the KB-Retriever ::: Training with Distant Supervision
Although it is difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response, and then obtain weakly labeled data for the retriever. Intuitively, a dialogue usually belongs to one topic, and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the current utterance. In our training with distant supervision, we further simplify our assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the most relevant KB row. This means that for a $k$-turned dialogue, we construct $k$ pairs of training instances for the retriever and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \mid i \le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.
In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface form of the entities in the row. Taking the dialogue in Figure FIGREF1 for an example, the similarity of the 4$^\text{th}$ row equals to 4 with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context; and the similarity of the 7$^\text{th}$ row equals to 1 with only “road block nearby” matching.
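The row-scoring heuristic is essentially span matching between KB entity strings and the dialogue text. A sketch of this weak-labeling step, using simple substring matching, is given below; the authors' exact matching rules may differ.

```python
from typing import Dict, List


def row_similarity(dialogue_text: str, row: Dict[str, str]) -> int:
    """Count how many of the row's entity values appear as spans in the dialogue."""
    text = dialogue_text.lower()
    return sum(1 for value in row.values()
               if value and value.lower().replace("_", " ") in text)


def weakly_label_row(dialogue_text: str, kb_rows: List[Dict[str, str]]) -> int:
    """Return the index of the most similar row, used as the weak label T*."""
    scores = [row_similarity(dialogue_text, row) for row in kb_rows]
    return max(range(len(kb_rows)), key=scores.__getitem__)
```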
In our model with the distantly supervised retriever, the retrieval results serve as the input for the Seq2Seq generation. During training the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input.
Training the KB-Retriever ::: Training with Gumbel-Softmax
In addition to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can train it along with the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable and the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation to discrete variables and has proved to work for sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB retriever. We use
as the approximation of $T$, where $\mathbf {g}_{j}$ are i.i.d samples drawn from $\text{Gumbel}(0,1)$ and $\tau $ is a constant that controls the smoothness of the distribution. $T^{\text{approx}}_{j}$ replaces $T^{\text{}}_{j}$ in equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\mathbf {V}$ to get $\mathbf {v}^{\mathbf {t}^{\text{approx}^{\prime }}}$ and the training signal from Seq2Seq generation is passed via the logit
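A minimal sketch of the Gumbel-Softmax relaxation is shown below. It uses one common formulation that adds Gumbel noise to the retriever's row scores; the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F


def gumbel_softmax_rows(row_logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """
    row_logits: (|R|,) unnormalized scores from the KB-retriever.
    Returns a differentiable approximation of the one-hot row selection.
    """
    # g_j ~ Gumbel(0, 1), sampled via the inverse-CDF trick
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(row_logits) + 1e-20) + 1e-20)
    return F.softmax((row_logits + gumbel_noise) / tau, dim=0)


# The soft selection replaces the hard argmax over T during training, so gradients
# from the Seq2Seq loss can flow back into the retriever parameters.
```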
To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and further fine-tuning our framework.
Training the KB-Retriever ::: Experimental Settings
We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: the navigation, weather and calendar domains. For the weather domain, we follow wen2018sequence to separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB. In this case, we pad these incomplete KBs with a special token “-”. Our framework is trained separately on these three domains, using the same train/validation/test split sets as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use another public dataset, CamRest BIBREF11, and partition the dataset into training, validation and testing sets in the ratio 3:1:1. In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with the corresponding KB.
All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The dimensionality of the embeddings is selected from $\lbrace 100, 200\rbrace $ and the number of LSTM hidden units is selected from $\lbrace 50, 100, 150, 200, 350\rbrace $. The dropout rate we use in our framework is selected from $\lbrace 0.25, 0.5, 0.75\rbrace $ and the batch size we adopt is selected from $\lbrace 1,2\rbrace $. L2 regularization is used on our model with a weight of $5\times 10^{-6}$ to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.
We adopt both the automatic and human evaluations in our experiments.
Training the KB-Retriever ::: Baseline Models
We compare our model with several baselines including:
Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.
Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.
KV Net BIBREF6: This model adopts an augmented decoder which decodes over the concatenation of the vocabulary and the KB entities, allowing the model to generate entities.
Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.
DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding.
In InCar dataset, for the Attn seq2seq, Ptr-UNK and Mem2seq, we adopt the reported results from madotto2018mem2seq. In CamRest dataset, for the Mem2Seq, we adopt their open-sourced code to get the results while for the DSR, we run their code on the same dataset to obtain the results.
Results
Following prior work BIBREF6, BIBREF7, BIBREF9, we adopt BLEU and Micro Entity F1 to evaluate model performance. The experimental results are illustrated in Table TABREF30.
In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with*) results reported by eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses the entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noting that our framework with either training method still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework.
In the second block of Table TABREF30, we can see that our framework trained with either the distant supervision or the Gumbel-Softmax method beats all existing models on the two datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement on the navigate domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module: our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.
Besides, we observe that the model trained with Gumbel-Softmax outperforms the model trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation.
Results ::: The proportion of responses that can be supported by a single KB row
In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.
We define a response as being supported by the most relevant KB row if all the entities in the response are included in that row. We study the proportion of such responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row. Correctly retrieving the supporting row should therefore be beneficial.
We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It 's not rainy today”, while the related KB entity is sunny. These cases provide challenges beyond the scope of this paper. If we consider this kind of case as being supported by a single row, the proportion in the weather domain is 99%.
Results ::: Generation Consistency
In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that have multiple entities. An utterance is considered consistent if it has multiple entities and these entities belong to the same row, which we annotated with distant supervision.
The consistency results are shown in Table TABREF37. From this table, we can see that incorporating the retriever in the dialogue generation improves the consistency.
Results ::: Correlation between the number of KB rows and generation consistency
To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distant supervision setting.
We choose KBs with different numbers of rows, on a scale from 1 to 5, for the generation. From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information harms the dialogue generation consistency.
Results ::: Visualization
To gain more insight into how our retriever module influences the whole KB score distribution, we visualize the KB entity probability at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\text{th}$ row and the $1^\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column.
Results ::: Human Evaluation
We provide human evaluation on our framework and the compared models. These responses are based on distinct dialogue history. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.
The evaluation results are illustrated in Table TABREF37. Our framework outperforms the other baseline models on all metrics according to Table TABREF37. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate more of the informative content that users want to know.
Related Work
Sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogue BIBREF24, BIBREF25 in an end-to-end training fashion. Recently, Seq2Seq models have also been used for learning task-oriented dialogues, where how to query the structured KB remains a challenge.
Properly querying the KB has long been a challenge in task-oriented dialogue systems. In pipeline systems, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with distributed representations BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, our retrieval result can be treated as a numeric representation of the API call return.
Instead of interacting with the KB via API calls, more and more recent works have tried to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is treating it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, where the return can be a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities.
Conclusion
In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further perform an attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of KB querying and the success of task-oriented dialogue generation.
Acknowledgments
We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153. | Unanswerable |
05e3b831e4c02bbd64a6e35f6c52f0922a41539a | 05e3b831e4c02bbd64a6e35f6c52f0922a41539a_0 | Q: At which interval do they extract video and audio frames?
Text: Introduction
Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language together. Some popular multi-modal tasks combining these three modalities, and their differences are highlighted in Table TABREF1 .
Given an image and a question related to the image, the vqa challenge BIBREF5 tasked users with selecting an answer to the question. BIBREF6 identified several sources of bias in the vqa dataset, which led to deep neural models answering several questions superficially. They found that in several instances, deep architectures exploited the statistics of the dataset to select answers ignoring the provided image. This prompted the release of vqa 2.0 BIBREF7 which attempts to balance the original dataset. In it, each question is paired to two similar images which have different answers. Due to the complexity of vqa, understanding the failures of deep neural architectures for this task has been a challenge. It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it. The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning. Extending question answering over images to videos, BIBREF9 have proposed MovieQA, where the task is to select the correct answer to a provided question given the movie clip on which it is based.
Intelligent systems that can interact with human users for a useful purpose are highly valuable. To this end, there has been a recent push towards moving from single-turn qa to multi-turn dialogue, which is a natural and intuitive setting for humans. Among multi-modal dialogue tasks, visdial BIBREF10 provides an image and dialogue where each turn is a qa pair. The task is to train a model to answer these questions within the dialogue. The avsd challenge extends the visdial task from images to the audio-visual domain.
We present our modelname model for the avsd task. modelname combines a hred for encoding and generating qa-dialogue with a novel FiLM-based audio-visual feature extractor for videos and an auxiliary multi-task learning-based decoder for decoding a summary of the video. It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants.
In Section SECREF2 , we discuss existing literature on end-to-end dialogue systems with a special focus on multi-modal dialogue systems. Section SECREF3 describes the avsd dataset. In Section SECREF4 , we present the architecture of our modelname model. We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 .
Related Work
With the availability of large conversational corpora from sources like Reddit and Twitter, there has been a lot of recent work on end-to-end modelling of dialogue for open domains. BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response. They observed this to be more challenging than machine translation tasks due to the larger diversity of possible responses. Among approaches that just use the previous utterance to generate the current response, BIBREF13 proposed a response generation model based on the encoder-decoder framework. BIBREF14 also proposed an encoder-decoder based neural network architecture that uses the previous two utterances to generate the current response. Among discriminative methods (i.e. methods that produce a score for utterances from a set and then rank them), BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context. BIBREF16 extended prior work on encoder-decoder-based models to multi-turn conversations. They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance. A higher-level recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings. This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time. In follow-up work, BIBREF17 used a latent stochastic variable to condition the generation process which aided their model in producing longer coherent outputs that better retain the context.
Datasets and tasks BIBREF10 , BIBREF18 , BIBREF19 have also been released recently to study visual-input based conversations. BIBREF10 train several generative and discriminative deep neural models for the visdial task. They observe that on this task, discriminative models outperform generative models and that models making better use of the dialogue history do better than models that do not use dialogue history at all. Unexpectedly, the performance between models that use the image features and models that do not use these features is not significantly different. As we discussed in Section SECREF1 , this is similar to the issues vqa models faced initially due to the imbalanced nature of the dataset, which leads us to believe that language is a strong prior on the visdial dataset too. BIBREF20 train two separate agents to play a cooperative game where one agent has to answer the other agent's questions, which in turn has to predict the fc7 features of the image obtained from VGGNet. Both agents are based on hred models and they show that agents fine-tuned with rl outperform agents trained solely with supervised learning. BIBREF18 train both generative and discriminative deep neural models on the igc dataset, where the task is to generate questions and answers to carry on a meaningful conversation. BIBREF19 train hred-based models on the GuessWhat?! dataset in which agents have to play a guessing game where one player has to find an object in the picture which the other player knows about and can answer questions about them.
Moving from image-based dialogue to video-based dialogue adds further complexity and challenges. Limited availability of such data is one of the challenges. Apart from the avsd dataset, there does not exist a video dialogue dataset to the best of our knowledge and the avsd data itself is fairly limited in size. Extracting relevant features from videos also contains the inherent complexity of extracting features from individual frames and additionally requires understanding their temporal interaction. The temporal nature of videos also makes it important to be able to focus on a varying-length subset of video frames as the action which is being asked about might be happening within them. There is also the need to encode the additional modality of audio which would be required for answering questions that rely on the audio track. With limited size of publicly available datasets based on the visual modality, learning useful features from high dimensional visual data has been a challenge even for the visdial dataset, and we anticipate this to be an even more significant challenge on the avsd dataset as it involves videos.
On the avsd task, BIBREF11 train an attention-based audio-visual scene-aware dialogue model which we use as the baseline model for this paper. They divide each video into multiple equal-duration segments and, from each of them, extract video features using an I3D BIBREF21 model, and audio features using a VGGish BIBREF22 model. The I3D model was pre-trained on Kinetics BIBREF23 dataset and the VGGish model was pre-trained on Audio Set BIBREF24 . The baseline encodes the current utterance's question with a lstm BIBREF25 and uses the encoding to attend to the audio and video features from all the video segments and to fuse them together. The dialogue history is modelled with a hierarchical recurrent lstm encoder where the input to the lower level encoder is a concatenation of question-answer pairs. The fused feature representation is concatenated with the question encoding and the dialogue history encoding and the resulting vector is used to decode the current answer using an lstm decoder. Similar to the visdial models, the performance difference between the best model that uses text and the best model that uses both text and video features is small. This indicates that the language is a stronger prior here and the baseline model is unable to make good use of the highly relevant video.
Automated evaluation of both task-oriented and non-task-oriented dialogue systems has been a challenge BIBREF26 , BIBREF27 too. Most such dialogue systems are evaluated using per-turn evaluation metrics since there is no suitable per-dialogue metric as conversations do not need to happen in a deterministic ordering of turns. These per-turn evaluation metrics are mostly word-overlap-based metrics such as BLEU, METEOR, ROUGE, and CIDEr, borrowed from the machine translation literature. Due to the diverse nature of possible responses, word-overlap metrics are not highly suitable for evaluating these tasks. Human evaluation of generated responses is considered the most reliable metric for such tasks but it is cost-prohibitive and hence the dialogue system literature continues to rely widely on word-overlap-based metrics.
The avsd dataset and challenge
The avsd dataset BIBREF28 consists of dialogues collected via amt. Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video. The Charades dataset has multi-action short videos and it provides text descriptions for these videos, which the avsd challenge also distributes as the caption. The avsd dataset has been collected using similar methodology as the visdial dataset. In avsd, each dialogue turn consists of a question and answer pair. One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer. The questioner sees three static frames from the video and has to ask questions. The answerer sees the video and answers the questions asked by the questioner. After 10 such qa turns, the questioner wraps up by writing a summary of the video based on the conversation.
Dataset statistics such as the number of dialogues, turns, and words for the avsd dataset are presented in Table TABREF5 . For the initially released prototype dataset, the training set of the avsd dataset corresponds to videos taken from the training set of the Charades dataset while the validation and test sets of the avsd dataset correspond to videos taken from the validation set of the Charades dataset. For the official dataset, training, validation and test sets are drawn from the corresponding Charades sets.
The Charades dataset also provides additional annotations for the videos such as action, scene, and object annotations, which are considered to be external data sources by the avsd challenge, for which there is a special sub-task in the challenge. The action annotations also include the start and end time of the action in the video.
Models
Our modelname model is based on the hred framework for modelling dialogue systems. In our model, an utterance-level recurrent lstm encoder encodes utterances and a dialogue-level recurrent lstm encoder encodes the final hidden states of the utterance-level encoders, thus maintaining the dialogue state and dialogue coherence. We use the final hidden states of the utterance-level encoders in the attention mechanism that is applied to the outputs of the description, video, and audio encoders. The attended features from these encoders are fused with the dialogue-level encoder's hidden states. An utterance-level decoder decodes the response for each such dialogue state following a question. We also add an auxiliary decoding module which is similar to the response decoder except that it tries to generate the caption and/or the summary of the video. We present our model in Figure FIGREF2 and describe the individual components in detail below.
Utterance-level Encoder
The utterance-level encoder is a recurrent neural network consisting of a single layer of lstm cells. The input to the lstm are word embeddings for each word in the utterance. The utterance is concatenated with a special symbol <eos> marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training. For words not present in the GloVe vocabulary, we initialize their word embeddings from a random uniform distribution.
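Initializing the embedding matrix in this way can be sketched as follows; the file path, dimensions, and scale of the uniform distribution are placeholders rather than the authors' exact settings.

```python
import numpy as np


def build_embedding_matrix(vocab, glove_path="glove.6B.300d.txt", dim=300, scale=0.1):
    """Rows for in-vocabulary GloVe words are copied; others are sampled uniformly."""
    glove = {}
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            glove[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    matrix = np.random.uniform(-scale, scale, (len(vocab), dim)).astype(np.float32)
    for idx, word in enumerate(vocab):
        if word in glove:
            matrix[idx] = glove[word]
    return matrix  # later fine-tuned as part of the utterance-level encoder
```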
Description Encoder
Similar to the utterance-level encoder, the description encoder is also a single-layer lstm recurrent neural network. Its word embeddings are also initialized with GloVe and then fine-tuned during training. For the description, we use the caption and/or the summary for the video provided with the dataset. The description encoder also has access to the last hidden state of the utterance-level encoder, which it uses to generate an attention map over the hidden states of its lstm. The final output of this module is the attention-weighted sum of the lstm hidden states.
Video Encoder with Time-Extended FiLM
For the video encoder, we use an I3D model pre-trained on the Kinetics dataset BIBREF23 and extract the output of its Mixed_7c layer for INLINEFORM0 (30 for our models) equi-distant segments of the video. Over these features, we add INLINEFORM1 (2 for our models) FiLM BIBREF31 blocks which have been highly successful in visual reasoning problems. Each FiLM block applies a conditional (on the utterance encoding) feature-wise affine transformation on the features input to it, ultimately leading to the extraction of more relevant features. The FiLM blocks are followed by fully connected layers which are further encoded by a single layer recurrent lstm network. The last hidden state of the utterance-level encoder then generates an attention map over the hidden states of its lstm, which is multiplied by the hidden states to provide the output of this module. We also experimented with using convolutional Mixed_5c features to capture spatial information but on the limited avsd dataset they did not yield any improvement. When not using the FiLM blocks, we use the final layer I3D features (provided by the avsd organizers) and encode them with the lstm directly, followed by the attention step. We present the video encoder in Figure FIGREF3 .
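FiLM applies a feature-wise affine transformation whose scale and shift are predicted from the conditioning input, here the utterance encoding. A sketch of one such block over I3D segment features is shown below; the shapes, layer sizes, and the simplified residual structure are illustrative, not the exact block used in the model.

```python
import torch
import torch.nn as nn


class FiLMBlock(nn.Module):
    """Conditions video segment features on an utterance encoding via feature-wise affine transforms."""

    def __init__(self, feat_dim: int, cond_dim: int):
        super().__init__()
        self.gamma = nn.Linear(cond_dim, feat_dim)   # predicts per-feature scale
        self.beta = nn.Linear(cond_dim, feat_dim)    # predicts per-feature shift
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, segment_feats: torch.Tensor, utterance_enc: torch.Tensor) -> torch.Tensor:
        # segment_feats: (n_segments, feat_dim) I3D features for the video segments
        # utterance_enc: (cond_dim,) final hidden state of the utterance-level encoder
        gamma = self.gamma(utterance_enc)            # (feat_dim,)
        beta = self.beta(utterance_enc)              # (feat_dim,)
        x = self.proj(segment_feats)
        x = gamma * x + beta                         # FiLM: feature-wise affine modulation
        return torch.relu(x) + segment_feats         # residual connection, as in FiLM blocks
```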
Audio Encoder
The audio encoder is structurally similar to the video encoder. We use the VGGish features provided by the avsd challenge organizers. Also similar to the video encoder, when not using the FiLM blocks, we use the VGGish features and encode them with the lstm directly, followed by the attention step. The audio encoder is depicted in Figure FIGREF4 .
Fusing Modalities for Dialogue Context
The outputs of the encoders for past utterances, descriptions, video, and audio together form the dialogue context INLINEFORM0 which is the input of the decoder. We first combine past utterances using a dialogue-level encoder which is a single-layer lstm recurrent neural network. The input to this encoder are the final hidden states of the utterance-level lstm. To combine the hidden states of these diverse modalities, we found concatenation to perform better on the validation set than averaging or the Hadamard product.
Decoders
The answer decoder consists of a single-layer recurrent lstm network and generates the answer to the last question utterance. At each time-step, it is provided with the dialogue-level state and produces a softmax over a vector corresponding to vocabulary words, stopping when 30 words have been produced or an end-of-sentence token is encountered.
The auxiliary decoder is functionally similar to the answer decoder. The decoded sentence is the caption and/or description of the video. We use the Video Encoder state instead of the Dialogue-level Encoder state as input since with this module we want to learn a better video representation capable of decoding the description.
Loss Function
For a given context embedding $C_t$ at dialogue turn $t$, we minimize the negative log-likelihood of the answer words over the vocabulary $V$, normalized by the number of words $M$ in the ground truth response $r_t$:
$$L(C_t, r_t) = -\frac{1}{M}\sum_{m=1}^{M}\sum_{i \in V} \left[ r_{t,m}=i \right] \log p_i ,$$
where the probabilities $p_i$ are given by the decoder LSTM output, and the decoder input
$$r^*_{t,m-1} = {\left\lbrace \begin{array}{ll} r_{t,m-1} & s>0.2,\; s \sim U(0, 1)\\ v_{m-1} & \text{else} \end{array}\right.}$$
is given by scheduled sampling BIBREF32 , i.e., either the previous ground-truth word $r_{t,m-1}$ or the previously decoded word $v_{m-1}$, and $r^*_{t,0}$ is a symbol denoting the start of a sequence. We optimize the model using the AMSGrad algorithm BIBREF33 and use a per-condition random search to determine hyperparameters. We train the model using the BLEU-4 score on the validation set as our stopping criterion.
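The scheduled-sampling rule above can be sketched as a per-step choice between the ground-truth token and the model's own previous prediction. The 0.2 threshold mirrors the equation; everything else in this sketch is illustrative.

```python
import random
import torch


def next_decoder_input(gold_prev: torch.Tensor, predicted_prev: torch.Tensor) -> torch.Tensor:
    """Scheduled sampling: feed the gold previous word with probability 0.8, else the model's own."""
    s = random.uniform(0.0, 1.0)
    return gold_prev if s > 0.2 else predicted_prev


def decode_step_loss(logits: torch.Tensor, gold_word: torch.Tensor) -> torch.Tensor:
    """Per-word negative log-likelihood; averaging over the M words gives L(C_t, r_t)."""
    log_probs = torch.log_softmax(logits, dim=-1)
    return -log_probs[gold_word]
```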
Experiments
The avsd challenge tasks we address here are:
We train our modelname model for Task 1.a and Task 2.a of the challenge and we present the results in Table TABREF9 . Our model outperforms the baseline model released by BIBREF11 on all of these tasks. The scores for the winning team have been released to challenge participants and are also included. Their approach, however, is not public as of yet. We observe the following for our models:
Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:
Our primary architectural differences over the baseline model are: not concatenating the question, answer pairs before encoding them, the auxiliary decoder module, and using the Time-Extended FiLM module for feature extraction. These, combined with using scheduled sampling and running hyperparameter optimization over the validation set to select hyperparameters, give us the observed performance boost.
We observe that our models generate fairly relevant responses to questions in the dialogues, and models with audio-visual inputs respond to audio-visual questions (e.g. “is there any voices or music ?”) correctly more often.
We conduct an ablation study on the effectiveness of different components (eg., text, video and audio) and present it in Table TABREF24 . Our experiments show that:
Conclusions
We presented modelname, a state-of-the-art dialogue model for conversations about videos. We evaluated the model on the official AVSD test set, where it achieves a relative improvement of more than 16% over the baseline model on BLEU-4 and more than 33% on CIDEr. The challenging aspect of multi-modal dialogue is fusing modalities with varying information density. On AVSD, it is easiest to learn from the input text, while video features remain largely opaque to the decoder. modelname uses a generalization of FiLM to video that conditions video feature extraction on a question. However, similar to related work, absolute improvements of incorporating video features into dialogue are consistent but small. Thus, while our results indicate the suitability of our FiLM generalization, they also highlight that applications at the intersection between language and video are currently constrained by the quality of video features, and emphasizes the need for larger datasets. | Unanswerable |
bd74452f8ea0d1d82bbd6911fbacea1bf6e08cab | bd74452f8ea0d1d82bbd6911fbacea1bf6e08cab_0 | Q: Do they use pretrained word vectors for dialogue context embedding?
Text: Introduction
Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language together. Some popular multi-modal tasks combining these three modalities, and their differences are highlighted in Table TABREF1 .
Given an image and a question related to the image, the vqa challenge BIBREF5 tasked users with selecting an answer to the question. BIBREF6 identified several sources of bias in the vqa dataset, which led to deep neural models answering several questions superficially. They found that in several instances, deep architectures exploited the statistics of the dataset to select answers ignoring the provided image. This prompted the release of vqa 2.0 BIBREF7 which attempts to balance the original dataset. In it, each question is paired to two similar images which have different answers. Due to the complexity of vqa, understanding the failures of deep neural architectures for this task has been a challenge. It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it. The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning. Extending question answering over images to videos, BIBREF9 have proposed MovieQA, where the task is to select the correct answer to a provided question given the movie clip on which it is based.
Intelligent systems that can interact with human users for a useful purpose are highly valuable. To this end, there has been a recent push towards moving from single-turn qa to multi-turn dialogue, which is a natural and intuitive setting for humans. Among multi-modal dialogue tasks, visdial BIBREF10 provides an image and dialogue where each turn is a qa pair. The task is to train a model to answer these questions within the dialogue. The avsd challenge extends the visdial task from images to the audio-visual domain.
We present our modelname model for the avsd task. modelname combines a hred for encoding and generating qa-dialogue with a novel FiLM-based audio-visual feature extractor for videos and an auxiliary multi-task learning-based decoder for decoding a summary of the video. It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants.
In Section SECREF2 , we discuss existing literature on end-to-end dialogue systems with a special focus on multi-modal dialogue systems. Section SECREF3 describes the avsd dataset. In Section SECREF4 , we present the architecture of our modelname model. We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 .
Related Work
With the availability of large conversational corpora from sources like Reddit and Twitter, there has been a lot of recent work on end-to-end modelling of dialogue for open domains. BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response. They observed this to be more challenging than machine translation tasks due to the larger diversity of possible responses. Among approaches that just use the previous utterance to generate the current response, BIBREF13 proposed a response generation model based on the encoder-decoder framework. BIBREF14 also proposed an encoder-decoder based neural network architecture that uses the previous two utterances to generate the current response. Among discriminative methods (i.e. methods that produce a score for utterances from a set and then rank them), BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context. BIBREF16 extended prior work on encoder-decoder-based models to multi-turn conversations. They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance. A higher-level recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings. This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time. In follow-up work, BIBREF17 used a latent stochastic variable to condition the generation process which aided their model in producing longer coherent outputs that better retain the context.
Datasets and tasks BIBREF10 , BIBREF18 , BIBREF19 have also been released recently to study visual-input based conversations. BIBREF10 train several generative and discriminative deep neural models for the visdial task. They observe that on this task, discriminative models outperform generative models and that models making better use of the dialogue history do better than models that do not use dialogue history at all. Unexpectedly, the performance between models that use the image features and models that do not use these features is not significantly different. As we discussed in Section SECREF1 , this is similar to the issues vqa models faced initially due to the imbalanced nature of the dataset, which leads us to believe that language is a strong prior on the visdial dataset too. BIBREF20 train two separate agents to play a cooperative game where one agent has to answer the other agent's questions, which in turn has to predict the fc7 features of the image obtained from VGGNet. Both agents are based on hred models and they show that agents fine-tuned with rl outperform agents trained solely with supervised learning. BIBREF18 train both generative and discriminative deep neural models on the igc dataset, where the task is to generate questions and answers to carry on a meaningful conversation. BIBREF19 train hred-based models on the GuessWhat?! dataset, in which agents have to play a guessing game where one player has to find an object in the picture which the other player knows about and can answer questions about it.
Moving from image-based dialogue to video-based dialogue adds further complexity and challenges. Limited availability of such data is one of the challenges. Apart from the avsd dataset, there does not exist a video dialogue dataset to the best of our knowledge, and the avsd data itself is fairly limited in size. Extracting relevant features from videos also involves the inherent complexity of extracting features from individual frames and additionally requires understanding their temporal interaction. The temporal nature of videos also makes it important to be able to focus on a varying-length subset of video frames, as the action which is being asked about might be happening within them. There is also the need to encode the additional modality of audio, which would be required for answering questions that rely on the audio track. With the limited size of publicly available datasets based on the visual modality, learning useful features from high-dimensional visual data has been a challenge even for the visdial dataset, and we anticipate this to be an even more significant challenge on the avsd dataset as it involves videos.
On the avsd task, BIBREF11 train an attention-based audio-visual scene-aware dialogue model which we use as the baseline model for this paper. They divide each video into multiple equal-duration segments and, from each of them, extract video features using an I3D BIBREF21 model, and audio features using a VGGish BIBREF22 model. The I3D model was pre-trained on Kinetics BIBREF23 dataset and the VGGish model was pre-trained on Audio Set BIBREF24 . The baseline encodes the current utterance's question with a lstm BIBREF25 and uses the encoding to attend to the audio and video features from all the video segments and to fuse them together. The dialogue history is modelled with a hierarchical recurrent lstm encoder where the input to the lower level encoder is a concatenation of question-answer pairs. The fused feature representation is concatenated with the question encoding and the dialogue history encoding and the resulting vector is used to decode the current answer using an lstm decoder. Similar to the visdial models, the performance difference between the best model that uses text and the best model that uses both text and video features is small. This indicates that the language is a stronger prior here and the baseline model is unable to make good use of the highly relevant video.
Automated evaluation of both task-oriented and non-task-oriented dialogue systems has been a challenge BIBREF26 , BIBREF27 too. Most such dialogue systems are evaluated using per-turn evaluation metrics since there is no suitable per-dialogue metric, as conversations do not need to happen in a deterministic ordering of turns. These per-turn evaluation metrics are mostly word-overlap-based metrics such as BLEU, METEOR, ROUGE, and CIDEr, borrowed from the machine translation literature. Due to the diverse nature of possible responses, word-overlap metrics are not highly suitable for evaluating these tasks. Human evaluation of generated responses is considered the most reliable metric for such tasks but it is cost prohibitive and hence the dialogue system literature continues to rely widely on word-overlap-based metrics.
The avsd dataset and challenge
The avsd dataset BIBREF28 consists of dialogues collected via amt. Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video. The Charades dataset has multi-action short videos and it provides text descriptions for these videos, which the avsd challenge also distributes as the caption. The avsd dataset has been collected using similar methodology as the visdial dataset. In avsd, each dialogue turn consists of a question and answer pair. One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer. The questioner sees three static frames from the video and has to ask questions. The answerer sees the video and answers the questions asked by the questioner. After 10 such qa turns, the questioner wraps up by writing a summary of the video based on the conversation.
Dataset statistics such as the number of dialogues, turns, and words for the avsd dataset are presented in Table TABREF5 . For the initially released prototype dataset, the training set of the avsd dataset corresponds to videos taken from the training set of the Charades dataset while the validation and test sets of the avsd dataset correspond to videos taken from the validation set of the Charades dataset. For the official dataset, training, validation and test sets are drawn from the corresponding Charades sets.
The Charades dataset also provides additional annotations for the videos such as action, scene, and object annotations, which are considered to be external data sources by the avsd challenge, for which there is a special sub-task in the challenge. The action annotations also include the start and end time of the action in the video.
Models
Our modelname model is based on the hred framework for modelling dialogue systems. In our model, an utterance-level recurrent lstm encoder encodes utterances and a dialogue-level recurrent lstm encoder encodes the final hidden states of the utterance-level encoders, thus maintaining the dialogue state and dialogue coherence. We use the final hidden states of the utterance-level encoders in the attention mechanism that is applied to the outputs of the description, video, and audio encoders. The attended features from these encoders are fused with the dialogue-level encoder's hidden states. An utterance-level decoder decodes the response for each such dialogue state following a question. We also add an auxiliary decoding module which is similar to the response decoder except that it tries to generate the caption and/or the summary of the video. We present our model in Figure FIGREF2 and describe the individual components in detail below.
Utterance-level Encoder
The utterance-level encoder is a recurrent neural network consisting of a single layer of lstm cells. The input to the lstm are word embeddings for each word in the utterance. The utterance is concatenated with a special symbol <eos> marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training. For words not present in the GloVe vocabulary, we initialize their word embeddings from a random uniform distribution.
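As an illustration, a minimal PyTorch sketch of such an encoder is given below; it assumes a pre-built vocabulary dictionary and a GloVe word-to-vector lookup, and the hidden size and class name are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class UtteranceEncoder(nn.Module):
    """Single-layer LSTM over word embeddings; returns all hidden states and the final state."""

    def __init__(self, vocab, glove, embed_dim=300, hidden_dim=512):
        super().__init__()
        # Random uniform initialization for words missing from GloVe, GloVe vectors otherwise.
        weights = torch.empty(len(vocab), embed_dim).uniform_(-0.1, 0.1)
        for word, idx in vocab.items():
            if word in glove:
                weights[idx] = torch.tensor(glove[word])
        self.embedding = nn.Embedding.from_pretrained(weights, freeze=False)  # fine-tuned during training
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=1, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) with the <eos> symbol appended to each utterance.
        outputs, (h_n, _) = self.lstm(self.embedding(token_ids))
        return outputs, h_n[-1]  # per-step hidden states and the final utterance encoding
```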
Description Encoder
Similar to the utterance-level encoder, the description encoder is also a single-layer lstm recurrent neural network. Its word embeddings are also initialized with GloVe and then fine-tuned during training. For the description, we use the caption and/or the summary for the video provided with the dataset. The description encoder also has access to the last hidden state of the utterance-level encoder, which it uses to generate an attention map over the hidden states of its lstm. The final output of this module is the attention-weighted sum of the lstm hidden states.
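One common way to realize this attention step is dot-product attention, in which the final utterance encoding scores each description hidden state; the sketch below is an illustrative reading of the description above, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def attend(query, states):
    """query: (batch, dim) utterance encoding; states: (batch, steps, dim) encoder hidden states.
    Returns the attention-weighted sum of the hidden states."""
    scores = torch.bmm(states, query.unsqueeze(2)).squeeze(2)    # (batch, steps)
    weights = F.softmax(scores, dim=1)                           # attention map over hidden states
    return torch.bmm(weights.unsqueeze(1), states).squeeze(1)    # (batch, dim)
```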
Video Encoder with Time-Extended FiLM
For the video encoder, we use an I3D model pre-trained on the Kinetics dataset BIBREF23 and extract the output of its Mixed_7c layer for INLINEFORM0 (30 for our models) equi-distant segments of the video. Over these features, we add INLINEFORM1 (2 for our models) FiLM BIBREF31 blocks which have been highly successful in visual reasoning problems. Each FiLM block applies a conditional (on the utterance encoding) feature-wise affine transformation on the features input to it, ultimately leading to the extraction of more relevant features. The FiLM blocks are followed by fully connected layers which are further encoded by a single layer recurrent lstm network. The last hidden state of the utterance-level encoder then generates an attention map over the hidden states of its lstm, which is multiplied by the hidden states to provide the output of this module. We also experimented with using convolutional Mixed_5c features to capture spatial information but on the limited avsd dataset they did not yield any improvement. When not using the FiLM blocks, we use the final layer I3D features (provided by the avsd organizers) and encode them with the lstm directly, followed by the attention step. We present the video encoder in Figure FIGREF3 .
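A sketch of a single FiLM block conditioned on the utterance encoding is shown below; the projection layer and layer sizes are assumptions, since the exact block internals are not spelled out here.

```python
import torch
import torch.nn as nn


class FiLMBlock(nn.Module):
    """Feature-wise affine transformation of per-segment video features, conditioned on the question."""

    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, feat_dim)  # scale, predicted from the utterance encoding
        self.to_beta = nn.Linear(cond_dim, feat_dim)   # shift
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, segment_feats, utt_encoding):
        # segment_feats: (batch, num_segments, feat_dim), e.g. 30 equidistant I3D segments.
        gamma = self.to_gamma(utt_encoding).unsqueeze(1)
        beta = self.to_beta(utt_encoding).unsqueeze(1)
        x = torch.relu(self.proj(segment_feats))
        return gamma * x + beta  # conditional feature-wise affine transformation
```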
Audio Encoder
The audio encoder is structurally similar to the video encoder. We use the VGGish features provided by the avsd challenge organizers. Also similar to the video encoder, when not using the FiLM blocks, we use the VGGish features and encode them with the lstm directly, followed by the attention step. The audio encoder is depicted in Figure FIGREF4 .
Fusing Modalities for Dialogue Context
The outputs of the encoders for past utterances, descriptions, video, and audio together form the dialogue context INLINEFORM0 which is the input of the decoder. We first combine past utterances using a dialogue-level encoder which is a single-layer lstm recurrent neural network. The input to this encoder are the final hidden states of the utterance-level lstm. To combine the hidden states of these diverse modalities, we found concatenation to perform better on the validation set than averaging or the Hadamard product.
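Concretely, the fusion can be pictured as below; the hidden size of 512 and the use of the last dialogue-level state are placeholder assumptions.

```python
import torch
import torch.nn as nn

dialogue_lstm = nn.LSTM(input_size=512, hidden_size=512, num_layers=1, batch_first=True)


def dialogue_context(utterance_finals, desc_ctx, video_ctx, audio_ctx):
    """utterance_finals: (batch, turns, 512) final states of the utterance-level encoder."""
    _, (h_n, _) = dialogue_lstm(utterance_finals)  # dialogue-level state over past utterances
    # Concatenation performed better on validation than averaging or the Hadamard product.
    return torch.cat([h_n[-1], desc_ctx, video_ctx, audio_ctx], dim=-1)
```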
Decoders
The answer decoder consists of a single-layer recurrent lstm network and generates the answer to the last question utterance. At each time-step, it is provided with the dialogue-level state and produces a softmax distribution over the vocabulary; decoding stops when 30 words have been produced or an end-of-sentence token is encountered.
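The decoding loop can be sketched as follows, assuming the dialogue context is concatenated to the input word embedding at every step (one plausible way of providing the dialogue-level state; the exact conditioning is not specified here).

```python
import torch


def greedy_decode(decoder_lstm, out_proj, embedding, context, sos_id, eos_id, max_len=30):
    """Decode up to 30 words, stopping early on the end-of-sentence token."""
    words, token, state = [], torch.tensor([sos_id]), None
    for _ in range(max_len):
        inp = torch.cat([embedding(token), context], dim=-1).unsqueeze(1)
        out, state = decoder_lstm(inp, state)
        logits = out_proj(out.squeeze(1))   # scores over the vocabulary (softmax up to normalization)
        token = logits.argmax(dim=-1)
        if token.item() == eos_id:
            break
        words.append(token.item())
    return words
```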
The auxiliary decoder is functionally similar to the answer decoder. The decoded sentence is the caption and/or description of the video. We use the Video Encoder state instead of the Dialogue-level Encoder state as input since with this module we want to learn a better video representation capable of decoding the description.
Loss Function
For a given context embedding $C_t$ at dialogue turn $t$, we minimize the negative log-likelihood of the answer words over the vocabulary $V$, normalized by the number of words $M$ in the ground-truth response $r_t$:
$$L(C_t, r_t) = -\frac{1}{M}\sum_{m=1}^{M}\sum_{i \in V}\big([r_{t,m}=i]\,\log p_i\big),$$
where the probabilities $p_i$ are given by the decoder LSTM output, and the word fed back to the decoder at each step,
$$r^{*}_{t,m-1} = \begin{cases} r_{t,m-1} & \text{if } s > 0.2,\; s \sim U(0,1)\\ v_{t,m-1} & \text{otherwise,} \end{cases}$$
is given by scheduled sampling BIBREF32 , where $v_{t,m-1}$ denotes the word predicted by the decoder at the previous step, and the first decoder input is a symbol denoting the start of a sequence. We optimize the model using the AMSGrad algorithm BIBREF33 and use a per-condition random search to determine hyperparameters. We train the model using the BLEU-4 score on the validation set as our stopping criterion.
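The resulting training step with scheduled sampling can be sketched as follows; the 0.2 threshold follows the equation above, while the decoder step signature and shapes are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F


def response_loss(decoder_step, context, target_words, sos_id):
    """decoder_step(prev_word, context, state) -> (logits, state) is an assumed interface."""
    loss, prev, state = 0.0, torch.tensor([sos_id]), None
    for gold in target_words:                                        # ground-truth word ids r_{t,1..M}
        logits, state = decoder_step(prev, context, state)
        loss = loss + F.cross_entropy(logits, torch.tensor([gold]))  # negative log-likelihood term
        if random.random() > 0.2:
            prev = torch.tensor([gold])                              # teacher forcing: feed the ground truth
        else:
            prev = logits.argmax(dim=-1)                             # scheduled sampling: feed the prediction
    return loss / len(target_words)                                  # normalized by the response length M
```

In PyTorch, the AMSGrad variant used here is available as torch.optim.Adam(parameters, amsgrad=True).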
Experiments
The avsd challenge tasks we address here are:
We train our modelname model for Task 1.a and Task 2.a of the challenge and we present the results in Table TABREF9 . Our model outperforms the baseline model released by BIBREF11 on all of these tasks. The scores for the winning team have been released to challenge participants and are also included. Their approach, however, is not public as of yet. We observe the following for our models:
Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:
Our primary architectural differences over the baseline model are: not concatenating the question, answer pairs before encoding them, the auxiliary decoder module, and using the Time-Extended FiLM module for feature extraction. These, combined with using scheduled sampling and running hyperparameter optimization over the validation set to select hyperparameters, give us the observed performance boost.
We observe that our models generate fairly relevant responses to questions in the dialogues, and models with audio-visual inputs respond to audio-visual questions (e.g. “is there any voices or music ?”) correctly more often.
We conduct an ablation study on the effectiveness of different components (e.g., text, video, and audio) and present it in Table TABREF24 . Our experiments show that:
Conclusions
We presented modelname, a state-of-the-art dialogue model for conversations about videos. We evaluated the model on the official AVSD test set, where it achieves a relative improvement of more than 16% over the baseline model on BLEU-4 and more than 33% on CIDEr. The challenging aspect of multi-modal dialogue is fusing modalities with varying information density. On AVSD, it is easiest to learn from the input text, while video features remain largely opaque to the decoder. modelname uses a generalization of FiLM to video that conditions video feature extraction on a question. However, similar to related work, absolute improvements from incorporating video features into dialogue are consistent but small. Thus, while our results indicate the suitability of our FiLM generalization, they also highlight that applications at the intersection between language and video are currently constrained by the quality of video features, and emphasize the need for larger datasets. | Yes
6472f9d0a385be81e0970be91795b1b97aa5a9cf | 6472f9d0a385be81e0970be91795b1b97aa5a9cf_0 | Q: Do they train a different training method except from scheduled sampling?
Answer with content missing: (list missing)
Scheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.
Yes. |
2173809eb117570d289cefada6971e946b902bd6 | 2173809eb117570d289cefada6971e946b902bd6_0 | Q: Is the web interface publicly accessible?
Text: Introduction
With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. The growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on the micro-blogging website Twitter presents a unique opportunity to scan these posts. These short texts (e.g. "tweets") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional channels such as emergency phone numbers (e.g. 100, 911) are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform, i.e. social media, and provides an alternative to these conventional methods. In case of their failure, or when such means are busy or occupied, an alternative could prove to be life-saving.
These real life events are reported on Twitter with different perspectives, opinions, and sentiment. Every day, people discuss events thousands of times across social media sites. We would like to detect such events in case of an emergency. Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet, number of words, and context to devise a classifier for event detection. BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter. BIBREF2 describe a system to automatically detect events about known entities from Twitter. This work is highly specific to detection of events only related to known entities. BIBREF3 discuss a system that returns a ranked list of relevant events given a user query.
Several research efforts have focused on identifying events in real time( BIBREF4 BIBREF5 BIBREF6 BIBREF0 ). These include systems to detect emergent topics from Twitter in real time ( BIBREF4 BIBREF7 ), an online clustering technique for identifying tweets in real time BIBREF5 , a system to detect localized events and also track evolution of such events over a period of time BIBREF6 . Our focus is on detecting urban emergencies as events from Twitter messages. We classify events ranging from natural disasters to fire break outs, and accidents. Our system detects whether a tweet, which contains a keyword from a pre-decided list, is related to an actual emergency or not. It also classifies the event into its appropriate category, and visualizes the possible location of the emergency event on the map. We also support notifications to our users, containing the contacts of specifically concerned authorities, as per the category of their tweet.
The rest of the paper is as follows: Section SECREF2 provides the motivation for our work, and the challenges in building such a system. Section SECREF3 describes the step by step details of our work, and its results. We evaluate our system and present the results in Section SECREF4 . Section SECREF5 showcases our demonstrations in detail, and Section SECREF6 concludes the paper by briefly describing the overall contribution, implementation and demonstration.
Motivation and Challenges
In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 . We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. Our work is motivated by the following factors, (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.
There are several challenges in building such an application: (a) Such a system expects a tweet to be location tagged. Otherwise, event detection techniques to extract the spatio-temporal data from the tweet can be vague, and lead to false alarms. (b) Such a system should also be able to verify the user's credibility as pranksters may raise false alarms. (c) Tweets are usually written in a very informal language, which requires a sophisticated language processing component to sanitize the tweet input before event detection. (d) A channel with the concerned authorities should be established for them to take serious action, on alarms raised by such a system. (e) An urban emergency such as a natural disaster could affect communications severely, in case of an earthquake or a cyclone, communications channels like Internet connectivity may get disrupted easily. In such cases, our system may not be of help, as it requires the user to be connected to the internet. We address the above challenges and present our approach in the next section.
Our Approach
We propose a software architecture for emergency detection and visualization as shown in figure FIGREF9 . We collect data using the Twitter API, and perform language pre-processing before applying a classification model. Tweets are labelled manually with <emergency> and <non-emergency> labels, and later classified manually to provide labels according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.
We use traditional classification techniques such as Support Vector Machines (SVM) and Naive Bayes (NB) for training, and perform 10-fold cross validation to obtain f-scores. Later, in real time, our system uses the Twitter streaming APIs to get data, pre-processes it using the same modules, and detects emergencies using the classifiers built above. The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities. The pre-processing of Twitter data obtained is needed as it usually contains ad-hoc abbreviations, phonetic substitutions, URLs, hashtags, and a lot of misspelled words. We use the following language processing modules for such corrections.
Pre-Processing Modules
We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, special symbols like @ along with the user mentions, hashtags and any associated text. We also replace special symbols by blank spaces, and incorporate the module into the pipeline as shown in figure FIGREF9 .
An example of such a sample tweet cleaning is shown in table TABREF10 .
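A minimal sketch of such a cleaning step with regular expressions is shown below; the example tweet is invented for illustration.

```python
import re


def clean_tweet(text):
    """Strip URLs, @user mentions, hashtags with their text, and remaining special symbols."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # URLs
    text = re.sub(r"@\w+", " ", text)               # user mentions
    text = re.sub(r"#\w+", " ", text)               # hashtags and associated text
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)     # special symbols replaced by blank spaces
    return re.sub(r"\s+", " ", text).strip()


print(clean_tweet("Fire near #Andheri station, need help!! http://t.co/xyz @MumbaiPolice"))
# -> "Fire near station need help"
```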
While tweeting, users often express their emotions by stressing over a few characters in the word. For example, usage of words like hellpppp, fiiiiiireeee, ruuuuunnnnn, druuuuuunnnkkk, soooooooo actually corresponds to help, fire, run, drunk, so etc. We use the compression module implemented by BIBREF8 for converting terms like “pleeeeeeeaaaaaassseeee” to “please”.
It is unlikely for an English word to contain the same character three or more times consecutively. We hence compress all repeated windows of character length greater than two to two characters. For example, “pleeeeeaaaassee” is converted to “pleeaassee”. Each window now contains two characters of the same letter in cases of repetition. Let n be the number of windows obtained from the previous step. We then apply a brute-force search over INLINEFORM0 possibilities to select a valid dictionary word.
Table TABREF13 contains sanitized sample output from our compression module for further processing.
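The compression and dictionary search described above can be sketched as follows; the tiny DICTIONARY is a stand-in for the full word list used by the module.

```python
import re
from itertools import product

DICTIONARY = {"please", "fire", "help", "run", "drunk", "so"}   # stand-in for a full English word list


def compress(word):
    """Cap repeated character runs at two, then try the 2^n single/double choices per window."""
    squeezed = re.sub(r"(.)\1{2,}", r"\1\1", word.lower())       # e.g. 'pleeeeeaaaassee' -> 'pleeaassee'
    windows = [w for w, _ in re.findall(r"((.)\2?)", squeezed)]  # windows of one or two equal characters
    options = [({w[0], w} if len(w) == 2 else {w}) for w in windows]
    for choice in product(*options):                             # brute-force search over 2^n possibilities
        candidate = "".join(choice)
        if candidate in DICTIONARY:
            return candidate
    return squeezed                                              # fall back if no dictionary word is found


print(compress("pleeeeeeeaaaaaassseeee"))   # -> 'please'
```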
Text normalization is the process of translating ad-hoc abbreviations, typographical errors, phonetic substitutions and ungrammatical structures used in text messaging (tweets and SMS) to plain English. Use of such language (often referred to as chat language) induces noise which poses additional processing challenges.
We use the normalization module implemented by BIBREF8 for text normalization. Training process requires a Language Model of the target language and a parallel corpora containing aligned un-normalized and normalized word pairs. Our language model consists of 15000 English words taken from various sources on the web.
Parallel corpora was collected from the following sources:
Stanford Normalization Corpora which consists of 9122 pairs of un-normalized and normalized words / phrases.
The above corpora, however, lacked acronyms and shorthand forms like 2mrw, l8r, b4, hlp, flor which are frequently used in chatting. We collected 215 pairs of un-normalized to normalized word/phrase mappings via crowd-sourcing.
Table TABREF16 contains input and normalized output from our module.
Users often make spelling mistakes while tweeting. A spell checker makes sure that a valid English word is sent to the classification system. We take this problem into account by introducing a spell checker as a pre-processing module by using the JAVA API of Jazzy spell checker for handling spelling mistakes.
An example of correction provided by the Spell Checker module is given below:
Input: building INLINEFORM0 flor, help
Output: building INLINEFORM0 floor, help
Please note that, our current system performs compression, normalization and spell-checking if the language used is English. The classifier training and detection process are described below.
Emergency Classification
The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results, and choose SVM for the stage-one classification model owing to a better F-score. The training is performed on tweets labeled with the classes <emergency> and <non-emergency>, with unigrams as features. We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques.
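For illustration, an equivalent stage-one pipeline can be sketched with scikit-learn (the authors use the WEKA API; the tweets below are toy stand-ins for the manually labelled data).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled tweets: 1 = <emergency>, 0 = <non-emergency>.
tweets = ["fire in the building please help", "had a great dinner tonight",
          "earthquake tremors felt near the station", "watching a movie about fire fighters"]
labels = [1, 0, 1, 0]

stage1 = make_pipeline(CountVectorizer(ngram_range=(1, 1)),   # unigram word vectors
                       LinearSVC())
print(cross_val_score(stage1, tweets, labels, cv=2).mean())   # the paper uses 10-fold cross validation
```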
Type Classification
We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then, apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which, later, classifies tweets into appropriate categories, in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 .
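A corresponding sketch of the stage-two classifier, again with scikit-learn in place of WEKA and with toy data; word n-grams up to trigrams are used as features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy emergency tweets labelled with their category.
tweets = ["huge fire broke out in the chemical factory",
          "earthquake tremors felt across the city",
          "drunk driving accident on the highway",
          "fire spreading near the market please respond"]
categories = ["fire", "earthquake", "accident", "fire"]

stage2 = make_pipeline(CountVectorizer(ngram_range=(1, 3)),   # n-gram features up to trigrams
                       MultinomialNB())
stage2.fit(tweets, categories)
print(stage2.predict(["massive fire near the railway station"]))   # -> ['fire']
```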
Location Visualizer
We use Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude. Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquake, cyclone, fire accident etc. Since we currently collect tweets with a location filter for the city of "Mumbai", we display its map location on the interface. The possible occurrences of such incidents are displayed on the map as soon as our system is able to detect it.
We also display the same on an Android device using the WebView functionality available to developers, thus solving the issue of portability. Our system displays visualization of the various emergencies detected on both web browsers and mobile devices.
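One way such a reverse-geocoding call could look, using the Geocoding API's JSON endpoint, is sketched below; the coordinates and the API key are placeholders.

```python
import requests


def reverse_geocode(lat, lng, api_key):
    """Resolve a tweet's latitude/longitude to a human-readable address for the map pin."""
    resp = requests.get("https://maps.googleapis.com/maps/api/geocode/json",
                        params={"latlng": f"{lat},{lng}", "key": api_key})
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else None


# Example: reverse_geocode(19.0760, 72.8777, "YOUR_API_KEY")  # coordinates within Mumbai
```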
Evaluation
We evaluate our system using both automated and manual evaluation techniques. We perform 10-fold cross validation to obtain the F-scores for our classification systems. We use the following technique for dataset creation. We test the system in real-time environments, and tweet about fires at random locations in our city, using test accounts. Our system was able to detect such tweets and display them, with their locations shown on the map.
Dataset Creation
We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with <emergency> and <non-emergency> labels for classification as stage one. Our dataset contains 1313 tweets with the positive label <emergency> and 1887 tweets with the negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc.
Classifier Evaluation
The results of 10-fold cross-validation performed for stage one are shown in table TABREF20 , under the label “Stage 1”. In table TABREF20 , for “Stage 1” of classification, the F-score obtained using the SVM classifier is INLINEFORM0 , as shown in row 2, column 2. We also provide the system with sample tweets in real time and assess its ability to detect the emergency, and classify it accordingly. The classification training for Stage 1 was performed using two traditional classification techniques, SVM and NB. SVM outperformed NB by around INLINEFORM1 and became the choice of classification technique for stage one.
Some false positives obtained during manual evaluation are “I am sooooo so drunk right nowwwwwwww” and “fire in my office , the boss is angry”. These occurrences show the need for more labeled gold data for our classifiers, and for additional features, like part-of-speech tags, named entity recognition, bigrams, and trigrams, to perform better.
The results of 10-fold cross-validation performed for the stage-two classification model are also shown in table TABREF20 , under the label “Stage 2”. The training for stage two was also performed using both SVM and NB, but NB outperformed SVM by around INLINEFORM0 to become the choice for the stage-two classification model.
We also perform attribute evaluation for the classification model, and create a word cloud based on the output values, shown in figure FIGREF24 . It shows that our classifier model is trained on appropriate words, which are very close to the emergency situations viz. “fire”, “earthquake”, “accident”, “break” (Unigram representation here, but possibly occurs in a bigram phrase with “fire”) etc. In figure FIGREF24 , the word cloud represents the word “respond” as the most frequently occurring word as people need urgent help, and quick response from the assistance teams.
Demostration Description
Users interact with Civique through its Web-based user interface and Android based application interface. The features underlying Civique are demonstrated through the following two show cases:
Show case 1: Tweet Detection and Classification
This showcase aims at detecting related tweets, and classifying them into appropriate categories. For this, we have created a list of filter words, which are used to filter tweets from the Twitter streaming API. This set of words helps us filter the tweets related to any incident. We will tweet, and users are able to see how our system captures such tweets and classifies them. Users should be able to see the tweet emerge as an incident on the web interface, as shown in figure FIGREF26 , and on the Android application, as shown in figure FIGREF27 . Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet. When a user clicks the emerged spot, the system should be able to display the sanitized version / extracted spatio-temporal data from the tweet. We test the system in a real-time environment, and validate our experiments. We also report the false positives generated during the process in section SECREF25 above.
Show case 2: User Notification and Contact Info.
Civique includes a set of local contacts for civic authorities who are to be / who can be contacted in case of various emergencies. Users can see how Civique detects an emergency and classifies it. They can also watch how the system generates a notification on the web interface and the Android interface, requesting them to contact the authorities for emergencies. Users can change their preferences on the mobile device anytime and can also opt not to receive notifications. Users should be able to contact the authorities online using the application, but in case the online contact is not responsive, or in case of a sudden loss of connectivity, we provide the user with the offline contact information of the concerned civic authorities along with the notifications.
Conclusions
Civique is a system which detects urban emergencies like earthquakes, cyclones, fire break-outs, accidents etc. and visualizes them both on a browsable web interface and on an Android application. We collect data from the popular micro-blogging site Twitter and use language processing modules to sanitize the input. We use this data as input to train a two-step classification system, which indicates whether a tweet is related to an emergency or not, and if it is, then what category of emergency it belongs to. We display such positively classified tweets along with their type and location on a Google map, and notify our users to inform the concerned authorities, and possibly evacuate the area, if their location matches the affected area. We believe such a system can help the disaster management machinery, and government bodies like the fire department, police department, etc., to act swiftly, thus minimizing the loss of life.
Twitter users use slang, profanity, misspellings and neologisms. We use standard cleaning methods and combine NLP with Machine Learning (ML) to further our cause of tweet classification. At the current stage, we also have an Android application ready for our system, which shows the improved, mobile-viewable web interface.
In the future, we aim to develop detection of emergency categories on the fly; obscure emergencies like “airplane hijacking” should also be detected by our system. We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems at the same location are the result of a single event, or relate to multiple events. | Unanswerable
293e9a0b30670f4f0a380e574a416665a8c55bc2 | 293e9a0b30670f4f0a380e574a416665a8c55bc2_0 | Q: Is the Android application publicly available?
Text: Introduction
With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. Growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on micro-blogging website Twitter presents a unique opportunity to scan these posts. These Short texts (e.g. "tweets") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional web outlets provide emergency phone numbers (i.e. 100, 911), etc., and are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform i.e. social media, and provides an alternative to these conventional methods. In case of their failure or when such means are busy/occupied, an alternative could prove to be life saving.
These real life events are reported on Twitter with different perspectives, opinions, and sentiment. Every day, people discuss events thousands of times across social media sites. We would like to detect such events in case of an emergency. Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet, number of words, and context to devise a classifier for event detection. BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter. BIBREF2 describe a system to automatically detect events about known entities from Twitter. This work is highly specific to detection of events only related to known entities. BIBREF3 discuss a system that returns a ranked list of relevant events given a user query.
Several research efforts have focused on identifying events in real time( BIBREF4 BIBREF5 BIBREF6 BIBREF0 ). These include systems to detect emergent topics from Twitter in real time ( BIBREF4 BIBREF7 ), an online clustering technique for identifying tweets in real time BIBREF5 , a system to detect localized events and also track evolution of such events over a period of time BIBREF6 . Our focus is on detecting urban emergencies as events from Twitter messages. We classify events ranging from natural disasters to fire break outs, and accidents. Our system detects whether a tweet, which contains a keyword from a pre-decided list, is related to an actual emergency or not. It also classifies the event into its appropriate category, and visualizes the possible location of the emergency event on the map. We also support notifications to our users, containing the contacts of specifically concerned authorities, as per the category of their tweet.
The rest of the paper is as follows: Section SECREF2 provides the motivation for our work, and the challenges in building such a system. Section SECREF3 describes the step by step details of our work, and its results. We evaluate our system and present the results in Section SECREF4 . Section SECREF5 showcases our demonstrations in detail, and Section SECREF6 concludes the paper by briefly describing the overall contribution, implementation and demonstration.
Motivation and Challenges
In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2. We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. Our work is motivated by the following factors: (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.
There are several challenges in building such an application: (a) Such a system expects a tweet to be location-tagged; otherwise, event detection techniques to extract the spatio-temporal data from the tweet can be vague and lead to false alarms. (b) Such a system should also be able to verify the user's credibility, as pranksters may raise false alarms. (c) Tweets are usually written in a very informal language, which requires a sophisticated language processing component to sanitize the tweet input before event detection. (d) A channel with the concerned authorities should be established for them to take serious action on alarms raised by such a system. (e) An urban emergency such as a natural disaster could affect communications severely; in the case of an earthquake or a cyclone, communication channels like Internet connectivity may get disrupted easily. In such cases, our system may not be of help, as it requires the user to be connected to the internet. We address the above challenges and present our approach in the next section.
Our Approach
We propose a software architecture for emergency detection and visualization as shown in figure FIGREF9. We collect data using the Twitter API and perform language pre-processing before applying a classification model. Tweets are labelled manually with <emergency> and <non-emergency> labels, and the emergency tweets are then manually given a second label according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.
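For illustration, a minimal Java sketch of the tweet ingestion step is given below. It assumes the Twitter4J library with OAuth credentials supplied through a twitter4j.properties file; the keyword list and the handler body are illustrative placeholders rather than the exact configuration used in our system.

```java
import twitter4j.FilterQuery;
import twitter4j.Status;
import twitter4j.StatusAdapter;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class EmergencyStream {
    public static void main(String[] args) {
        // Credentials are assumed to be supplied via twitter4j.properties.
        TwitterStream stream = new TwitterStreamFactory().getInstance();
        stream.addListener(new StatusAdapter() {
            @Override
            public void onStatus(Status status) {
                // Only location-tagged tweets are useful for visualization.
                if (status.getGeoLocation() != null) {
                    System.out.println(status.getGeoLocation().getLatitude() + ","
                            + status.getGeoLocation().getLongitude() + " :: "
                            + status.getText());
                    // Hand the raw text to the pre-processing and classification pipeline here.
                }
            }
        });
        // Track a pre-decided list of emergency keywords (illustrative subset).
        FilterQuery query = new FilterQuery().track("fire", "earthquake", "accident", "robbery");
        stream.filter(query);
    }
}
```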
We use traditional classification techniques such as Support Vector Machines (SVM) and Naive Bayes (NB) for training, and perform 10-fold cross-validation to obtain F-scores. Later, in real time, our system uses the Twitter streaming APIs to get data, pre-processes it using the same modules, and detects emergencies using the classifiers built above. The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities. Pre-processing of the Twitter data is needed because it usually contains ad-hoc abbreviations, phonetic substitutions, URLs, hashtags, and many misspelled words. We use the following language processing modules for such corrections.
Pre-Processing Modules
We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, special symbols like @ along with the user mentions, and hashtags along with any associated text. We also replace the remaining special symbols with blank spaces, and incorporate the module into the pipeline as shown in figure FIGREF9.
An example of such tweet cleaning is shown in table TABREF10.
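A possible implementation of this cleaning step, using plain Java regular expressions, is sketched below; the exact patterns used in the deployed module may differ.

```java
import java.util.regex.Pattern;

public class TweetCleaner {
    private static final Pattern URL = Pattern.compile("https?://\\S+|www\\.\\S+");
    private static final Pattern MENTION = Pattern.compile("@\\w+");
    private static final Pattern HASHTAG = Pattern.compile("#\\w+");
    private static final Pattern SPECIAL = Pattern.compile("[^a-zA-Z0-9\\s]");
    private static final Pattern SPACES = Pattern.compile("\\s+");

    public static String clean(String tweet) {
        String t = URL.matcher(tweet).replaceAll(" ");
        t = MENTION.matcher(t).replaceAll(" ");   // drop @user mentions
        t = HASHTAG.matcher(t).replaceAll(" ");   // drop hashtags and their associated text
        t = SPECIAL.matcher(t).replaceAll(" ");   // replace remaining special symbols with spaces
        return SPACES.matcher(t).replaceAll(" ").trim();
    }

    public static void main(String[] args) {
        System.out.println(clean("Fire near #Andheri station!! @MumbaiPolice help http://t.co/xyz"));
        // -> "Fire near station help"
    }
}
```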
While tweeting, users often express their emotions by stressing a few characters in a word. For example, words like hellpppp, fiiiiiireeee, ruuuuunnnnn, druuuuuunnnkkk, and soooooooo actually correspond to help, fire, run, drunk, and so. We use the compression module implemented by BIBREF8 for converting terms like “pleeeeeeeaaaaaassseeee” to “please”.
It is unlikely for an English word to contain the same character consecutively three or more times. We hence compress all repeated windows of character length greater than two down to two characters. For example, “pleeeeeaaaassee” is converted to “pleeaassee”. Each window now contains two characters of the same alphabet in cases of repetition. Let n be the number of such windows obtained from the previous step. We then apply a brute-force search over the INLINEFORM0 possibilities (each doubled window kept as either one or two characters) to select a valid dictionary word.
Table TABREF13 contains sanitized sample output from our compression module for further processing.
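The sketch below illustrates the two compression steps on the running example; the dictionary is a small placeholder, whereas the actual module of BIBREF8 uses a full English word list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Compressor {
    // Placeholder dictionary; the real module uses a full English word list.
    private static final Set<String> DICT =
            new HashSet<>(Arrays.asList("please", "help", "fire", "run", "drunk", "so"));

    // Step 1: cap every run of a repeated character at two characters.
    static String capRuns(String word) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < word.length(); i++) {
            char c = word.charAt(i);
            int len = sb.length();
            if (len >= 2 && sb.charAt(len - 1) == c && sb.charAt(len - 2) == c) continue;
            sb.append(c);
        }
        return sb.toString();
    }

    // Step 2: brute-force over the doubled windows, keeping each as one or two characters.
    static String toDictionaryWord(String capped) {
        List<String> candidates = new ArrayList<>();
        expand(capped, 0, new StringBuilder(), candidates);
        for (String c : candidates) if (DICT.contains(c)) return c;
        return capped; // fall back to the capped form if no dictionary word is found
    }

    private static void expand(String s, int i, StringBuilder cur, List<String> out) {
        if (i == s.length()) { out.add(cur.toString()); return; }
        char c = s.charAt(i);
        int len = cur.length();
        if (i + 1 < s.length() && s.charAt(i + 1) == c) {
            cur.append(c);           expand(s, i + 2, cur, out); cur.setLength(len); // keep one
            cur.append(c).append(c); expand(s, i + 2, cur, out); cur.setLength(len); // keep two
        } else {
            cur.append(c); expand(s, i + 1, cur, out); cur.setLength(len);
        }
    }

    public static void main(String[] args) {
        System.out.println(toDictionaryWord(capRuns("pleeeeeeeaaaaaassseeee"))); // please
    }
}
```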
Text normalization is the process of translating ad-hoc abbreviations, typographical errors, phonetic substitutions, and ungrammatical structures used in text messaging (tweets and SMS) into plain English. The use of such language (often referred to as chat language) induces noise, which poses additional processing challenges.
We use the normalization module implemented by BIBREF8 for text normalization. The training process requires a language model of the target language and a parallel corpus containing aligned un-normalized and normalized word pairs. Our language model consists of 15000 English words taken from various sources on the web.
The parallel corpus was collected from the following sources:
The Stanford Normalization Corpus, which consists of 9122 pairs of un-normalized and normalized words/phrases.
The above corpus, however, lacked acronyms and shorthand texts like 2mrw, l8r, b4, hlp, and flor, which are frequently used in chatting. We therefore collected 215 un-normalized to normalized word/phrase mappings via crowd-sourcing.
Table TABREF16 contains input and normalized output from our module.
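Since the module of BIBREF8 is not reproduced here, the sketch below is only a greedy dictionary-lookup stand-in built from a few of the crowd-sourced mappings, meant to illustrate the input/output behaviour of this step.

```java
import java.util.HashMap;
import java.util.Map;

public class Normalizer {
    // Tiny illustrative subset of the un-normalized -> normalized mappings.
    private static final Map<String, String> MAP = new HashMap<>();
    static {
        MAP.put("2mrw", "tomorrow");
        MAP.put("l8r", "later");
        MAP.put("b4", "before");
        MAP.put("hlp", "help");
        MAP.put("flor", "floor");
    }

    public static String normalize(String tweet) {
        StringBuilder out = new StringBuilder();
        for (String token : tweet.split("\\s+")) {
            out.append(MAP.getOrDefault(token.toLowerCase(), token)).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("hlp needed b4 the fire spreads")); // help needed before the fire spreads
    }
}
```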
Users often make spelling mistakes while tweeting. A spell checker makes sure that a valid English word is sent to the classification system. We address this by introducing a spell checker as a pre-processing module, using the Java API of the Jazzy spell checker to handle spelling mistakes.
An example of a correction provided by the spell checker module is given below:
Input: building INLINEFORM0 flor, help
Output: building INLINEFORM0 floor, help
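The deployed module relies on the Jazzy library; as its API is not shown here, the following is a generic edit-distance stand-in (with a placeholder dictionary) that only illustrates where spelling correction sits in the pipeline.

```java
import java.util.Arrays;
import java.util.List;

public class SpellCorrector {
    // Placeholder dictionary; a real deployment would load a full word list.
    private static final List<String> DICT = Arrays.asList("building", "floor", "fire", "help", "road");

    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Return the word itself if it is valid, otherwise the closest dictionary word.
    static String correct(String word) {
        if (DICT.contains(word)) return word;
        String best = word;
        int bestDist = Integer.MAX_VALUE;
        for (String w : DICT) {
            int dist = editDistance(word, w);
            if (dist < bestDist) { bestDist = dist; best = w; }
        }
        return bestDist <= 2 ? best : word; // only accept close matches
    }

    public static void main(String[] args) {
        System.out.println(correct("flor")); // floor
    }
}
```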
Note that our current system performs compression, normalization, and spell-checking if the language used is English. The classifier training and detection process are described below.
Emergency Classification
The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results, and later choose SVM for the stage-one classification model owing to its better F-score. Training is performed on tweets labeled with the classes <emergency> and <non-emergency>, using unigrams as features. We create word vectors of the strings in the tweet using a filter available in the WEKA API BIBREF9, and perform cross-validation using standard classification techniques.
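A sketch of this stage-one setup with the WEKA Java API is given below, using SMO (WEKA's SVM implementation) inside a FilteredClassifier so that the word-vector filter is applied within each cross-validation fold; the ARFF file name and attribute layout are assumptions.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.classifiers.meta.FilteredClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class StageOneClassifier {
    public static void main(String[] args) throws Exception {
        // Assumed ARFF layout: a string attribute holding the cleaned tweet text and
        // a nominal class attribute {emergency, non-emergency} as the last attribute.
        Instances data = DataSource.read("stage1_tweets.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // StringToWordVector turns tweet strings into unigram word vectors;
        // FilteredClassifier applies it inside each cross-validation fold.
        FilteredClassifier svm = new FilteredClassifier();
        svm.setFilter(new StringToWordVector());
        svm.setClassifier(new SMO());   // SMO is WEKA's SVM implementation

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(svm, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
        System.out.println("Weighted F-measure: " + eval.weightedFMeasure());
    }
}
```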
Type Classification
We employ a multi-class Naive Bayes classifier as the second-stage classification mechanism for categorizing tweets appropriately, depending on the type of emergency they indicate. This multi-class classifier is trained on data manually labeled with the category classes. We tokenize the training data using the “NGramTokenizer” and then apply a filter to create word vectors of strings before training. We use trigrams as features to build a model which later classifies tweets into appropriate categories in real time. We then perform cross-validation using standard techniques to calculate the results, which are shown under the label “Stage 2” in table TABREF20.
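A corresponding sketch for the stage-two model is given below, configuring the NGramTokenizer for trigram features before the multi-class Naive Bayes classifier; the file name and exact tokenizer options are assumptions.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.meta.FilteredClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.core.tokenizers.NGramTokenizer;
import weka.filters.unsupervised.attribute.StringToWordVector;

public class StageTwoClassifier {
    public static void main(String[] args) throws Exception {
        // Assumed ARFF layout: tweet text plus a nominal class attribute such as
        // {fire, accident, earthquake, ...} as the last attribute.
        Instances data = DataSource.read("stage2_tweets.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Word-vector filter configured to emit trigram features.
        NGramTokenizer trigrams = new NGramTokenizer();
        trigrams.setNGramMinSize(3);
        trigrams.setNGramMaxSize(3);
        StringToWordVector toVector = new StringToWordVector();
        toVector.setTokenizer(trigrams);

        FilteredClassifier nb = new FilteredClassifier();
        nb.setFilter(toVector);
        nb.setClassifier(new NaiveBayes());

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(nb, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```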
Location Visualizer
We use the Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude. Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquake, cyclone, fire accident, etc. Since we currently collect tweets with a location filter for the city of Mumbai, we display its map location on the interface. Possible occurrences of such incidents are displayed on the map as soon as our system is able to detect them.
We also display the same map on an Android device using the WebView functionality available to developers, thus addressing the issue of portability. Our system displays visualizations of the various emergencies detected on both web browsers and mobile devices.
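A minimal Android activity illustrating the WebView-based display is sketched below; the map URL is a placeholder for the actual web interface address, and the android.permission.INTERNET permission must be declared in the manifest.

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;
import android.webkit.WebViewClient;

public class MapActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        // JavaScript must be enabled for the Google Maps based interface to render.
        webView.getSettings().setJavaScriptEnabled(true);
        webView.setWebViewClient(new WebViewClient()); // keep navigation inside the app
        webView.loadUrl("https://example.org/civique/map"); // placeholder URL for the web interface
        setContentView(webView);
    }
}
```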
Evaluation
We evaluate our system using automated and manual evaluation techniques. We perform 10-fold cross-validation to obtain the F-scores for our classification systems. The technique used for dataset creation is described below. We also test the system in real-time environments by tweeting about fires at random locations in our city using test accounts; our system was able to detect such tweets and display them with their locations on the map.
Dataset Creation
We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident”, etc. We then manually label tweets with <emergency> and <non-emergency> labels for stage-one classification. Our dataset contains 1313 tweets with the positive label <emergency> and 1887 tweets with the negative label <non-emergency>. We create another dataset with the positively labeled tweets and assign them category labels like “fire”, “accident”, “earthquake”, etc.
Classifier Evaluation
The results of 10-fold cross-validation performed for stage one are shown in table TABREF20, under the label “Stage 1”. In table TABREF20, for “Stage 1” of classification, the F-score obtained using the SVM classifier is INLINEFORM0, as shown in row 2, column 2. We also provide the system with sample tweets in real time and assess its ability to detect the emergency and classify it accordingly. The classification training for Stage 1 was performed using two traditional classification techniques, SVM and NB. SVM outperformed NB by around INLINEFORM1 and became the classification technique of choice for stage one.
Some false positives obtained during manual evaluation are “I am sooooo so drunk right nowwwwwwww” and “fire in my office , the boss is angry”. These occurrences show the need for more labeled gold data for our classifiers, as well as additional features, like part-of-speech tags, named entity recognition, bigrams, and trigrams, to perform better.
The results of 10-fold cross-validation performed for the stage-two classification model are also shown in table TABREF20, under the label “Stage 2”. The training for stage two was also performed using both SVM and NB, but NB outperformed SVM by around INLINEFORM0 to become the choice for the stage-two classification model.
We also perform attribute evaluation for the classification model, and create a word cloud based on the output values, shown in figure FIGREF24. It shows that our classifier model is trained on appropriate words, which are closely associated with emergency situations, viz., “fire”, “earthquake”, “accident”, “break” (a unigram here, but it possibly occurs in a bigram phrase with “fire”), etc. In figure FIGREF24, the word cloud shows “respond” as the most frequently occurring word, as people need urgent help and a quick response from the assistance teams.
Demonstration Description
Users interact with Civique through its web-based user interface and its Android application interface. The features underlying Civique are demonstrated through the following two showcases:
Showcase 1: Tweet Detection and Classification
This showcase aims at detecting related tweets and classifying them into appropriate categories. For this, we have created a list of filter words, which are used to filter tweets from the Twitter streaming API. This set of words helps us filter the tweets related to any incident. We tweet, and users are able to see how our system captures such tweets and classifies them. Users should be able to see the tweet emerge as an incident on the web interface, as shown in figure FIGREF26, and on the Android application, as shown in figure FIGREF27. Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet. When a user clicks the emerged spot, the system displays the sanitized version / extracted spatio-temporal data for the tweet. We test the system in a real-time environment, and validate our experiments. We also report the false positives generated during the process in section SECREF25 above.
Showcase 2: User Notification and Contact Info.
Civique includes a set of local contacts for civic authorities who can be contacted in case of various emergencies. Users can see how Civique detects an emergency and classifies it. They can also watch how the system generates a notification on the web interface and the Android interface, requesting them to contact the authorities for emergencies. Users can change their preferences on the mobile device at any time and can also opt not to receive notifications. Users should be able to contact the authorities online using the application, but in case the online contact is not responsive, or in case of a sudden loss of connectivity, we provide the user with the offline contact information of the concerned civic authorities along with the notifications.
Conclusions
Civique is a system which detects urban emergencies like earthquakes, cyclones, fire breakouts, accidents, etc., and visualizes them both on a browsable web interface and on an Android application. We collect data from the popular micro-blogging site Twitter and use language processing modules to sanitize the input. We use this data as input to train a two-step classification system, which indicates whether a tweet is related to an emergency or not, and if it is, which category of emergency it belongs to. We display such positively classified tweets along with their type and location on a Google map, and notify our users to inform the concerned authorities, and possibly evacuate the area if their location matches the affected area. We believe such a system can help the disaster management machinery, and government bodies like the fire department, the police department, etc., to act swiftly, thus minimizing the loss of life.
Twitter users use slang, profanity, misspellings, and neologisms. We use standard cleaning methods and combine NLP with machine learning (ML) to further our cause of tweet classification. At the current stage, we also have an Android application ready for our system, which shows the improvised, mobile-viewable web interface.
In the future, we aim to develop detection of emergency categories on the fly; obscure emergencies like “airplane hijacking” should also be detected by our system. We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems at the same location are the result of a single event, or relate to multiple events. | Unanswerable
17de58c17580350c9da9c2f3612784b432154d11 | 17de58c17580350c9da9c2f3612784b432154d11_0 | Q: What classifier is used for emergency categorization?
| multi-class Naive Bayes
ff27d6e6eb77e55b4d39d343870118d1a6debd5e | ff27d6e6eb77e55b4d39d343870118d1a6debd5e_0 | Q: What classifier is used for emergency detection?
| SVM
29772ba04886bee2d26b7320e1c6d9b156078891 | 29772ba04886bee2d26b7320e1c6d9b156078891_0 | Q: Do the tweets come from any individual?
Text: Introduction
With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. Growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on micro-blogging website Twitter presents a unique opportunity to scan these posts. These Short texts (e.g. "tweets") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional web outlets provide emergency phone numbers (i.e. 100, 911), etc., and are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform i.e. social media, and provides an alternative to these conventional methods. In case of their failure or when such means are busy/occupied, an alternative could prove to be life saving.
These real life events are reported on Twitter with different perspectives, opinions, and sentiment. Every day, people discuss events thousands of times across social media sites. We would like to detect such events in case of an emergency. Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet, number of words, and context to devise a classifier for event detection. BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter. BIBREF2 describe a system to automatically detect events about known entities from Twitter. This work is highly specific to detection of events only related to known entities. BIBREF3 discuss a system that returns a ranked list of relevant events given a user query.
Several research efforts have focused on identifying events in real time( BIBREF4 BIBREF5 BIBREF6 BIBREF0 ). These include systems to detect emergent topics from Twitter in real time ( BIBREF4 BIBREF7 ), an online clustering technique for identifying tweets in real time BIBREF5 , a system to detect localized events and also track evolution of such events over a period of time BIBREF6 . Our focus is on detecting urban emergencies as events from Twitter messages. We classify events ranging from natural disasters to fire break outs, and accidents. Our system detects whether a tweet, which contains a keyword from a pre-decided list, is related to an actual emergency or not. It also classifies the event into its appropriate category, and visualizes the possible location of the emergency event on the map. We also support notifications to our users, containing the contacts of specifically concerned authorities, as per the category of their tweet.
The rest of the paper is as follows: Section SECREF2 provides the motivation for our work, and the challenges in building such a system. Section SECREF3 describes the step by step details of our work, and its results. We evaluate our system and present the results in Section SECREF4 . Section SECREF5 showcases our demonstrations in detail, and Section SECREF6 concludes the paper by briefly describing the overall contribution, implementation and demonstration.
Motivation and Challenges
In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 . We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. Our work is motivated by the following factors, (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.
There are several challenges in building such an application: (a) Such a system expects a tweet to be location tagged. Otherwise, event detection techniques to extract the spatio-temporal data from the tweet can be vague, and lead to false alarms. (b) Such a system should also be able to verify the user's credibility as pranksters may raise false alarms. (c) Tweets are usually written in a very informal language, which requires a sophisticated language processing component to sanitize the tweet input before event detection. (d) A channel with the concerned authorities should be established for them to take serious action, on alarms raised by such a system. (e) An urban emergency such as a natural disaster could affect communications severely, in case of an earthquake or a cyclone, communications channels like Internet connectivity may get disrupted easily. In such cases, our system may not be of help, as it requires the user to be connected to the internet. We address the above challenges and present our approach in the next section.
Our Approach
We propose a software architecture for emergency detection and visualization as shown in figure FIGREF9. We collect data using the Twitter API and perform language pre-processing before applying a classification model. Tweets are manually labelled as <emergency> or <non-emergency>, and the emergency tweets are further labelled manually according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.
We use traditional classification techniques such as Support Vector Machines (SVM) and Naive Bayes (NB) for training, and perform 10-fold cross validation to obtain F-scores. Later, in real time, our system uses the Twitter streaming APIs to get data, pre-processes it using the same modules, and detects emergencies using the classifiers built above. The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities. The pre-processing of the Twitter data obtained is needed as it usually contains ad-hoc abbreviations, phonetic substitutions, URLs, hashtags, and a lot of misspelled words. We use the following language processing modules for such corrections.
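As a rough end-to-end illustration of this flow, the sketch below chains simplified versions of the stages together. Everything in it is a stand-in rather than the actual implementation: the keyword matcher replaces the trained SVM/NB classifiers, the regular expressions only approximate the pre-processing modules described in the following subsections, and the example tweet and coordinates are invented.

```python
import re

KEYWORDS = {"fire", "earthquake", "accident", "cyclone", "robbery"}  # stand-in for the trained models

def preprocess(text: str) -> str:
    """Very rough stand-in for the cleaning/compression/normalization modules."""
    text = re.sub(r"https?://\S+|[@#]\w+", " ", text)   # strip URLs, mentions, hashtags
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)          # squeeze long character repetitions
    return re.sub(r"\s+", " ", text).strip().lower()

def classify(text: str):
    """Stage 1 (emergency or not) and stage 2 (type), both faked with keyword matching."""
    hits = [k for k in KEYWORDS if k in text]
    return hits[0] if hits else None

def process(tweet: dict):
    category = classify(preprocess(tweet["text"]))
    if category is None:
        return None                                      # treated as <non-emergency>
    return {"category": category, "location": tweet.get("coordinates")}

print(process({"text": "Fire broke out near #Andheri station!!", "coordinates": (19.12, 72.85)}))
# -> {'category': 'fire', 'location': (19.12, 72.85)}
```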
Pre-Processing Modules
We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, user mentions (the @ symbol along with the user name), and hashtags along with any associated text. We also replace special symbols with blank spaces, and incorporate the module into the pipeline as shown in figure FIGREF9.
An example of such tweet cleaning is shown in table TABREF10.
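A minimal sketch of this cleaning step is shown below; the exact regular expressions are our own assumptions, since the module itself is not listed in the paper, and the example tweet is invented.

```python
import re

def clean_tweet(text: str) -> str:
    """Approximation of the cleaning module: drop URLs, @user mentions, and
    #hashtags (with their associated text), then map special symbols to spaces."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"@\w+", " ", text)                   # user mentions
    text = re.sub(r"#\w+", " ", text)                   # hashtags and associated text
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)         # remaining special symbols
    return re.sub(r"\s+", " ", text).strip()

# clean_tweet("Fire near #Andheri!! @MumbaiPolice please help http://t.co/xyz")
# -> "Fire near please help"
```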
While tweeting, users often express their emotions by stressing a few characters in a word. For example, words like hellpppp, fiiiiiireeee, ruuuuunnnnn, druuuuuunnnkkk, and soooooooo actually correspond to help, fire, run, drunk, and so. We use the compression module implemented by BIBREF8 for converting terms like “pleeeeeeeaaaaaassseeee” to “please”.
It is unlikely for an English word to contain the same character three or more times consecutively. We hence compress all repeated windows of character length greater than two down to two characters. For example, “pleeeeeaaaassee” is converted to “pleeaassee”. Each window now contains two characters of the same letter in cases of repetition. Let n be the number of such windows obtained from the previous step. We then apply a brute-force search over the INLINEFORM0 possibilities to select a valid dictionary word.
Table TABREF13 contains sanitized sample output from our compression module for further processing.
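The two steps of this compression can be sketched as follows; the dictionary here is a toy word list standing in for the full English dictionary used by the module of BIBREF8.

```python
import re
from itertools import product

DICTIONARY = {"please", "help", "fire", "run", "drunk", "so"}  # toy word list for illustration

def compress(word: str, dictionary=DICTIONARY) -> str:
    """Squeeze runs of 3+ identical characters to two, then brute-force the
    2^n choices of keeping one or two characters per doubled window and
    return the first candidate found in the dictionary."""
    squeezed = re.sub(r"(.)\1{2,}", r"\1\1", word.lower())         # "pleeeeeaaaassee" -> "pleeaassee"
    runs = [m.group(0) for m in re.finditer(r"(.)\1*", squeezed)]  # ["p", "l", "ee", "aa", "ss", "ee"]
    doubled = [i for i, r in enumerate(runs) if len(r) == 2]
    for choice in product((1, 2), repeat=len(doubled)):            # the 2^n possibilities
        candidate = list(runs)
        for idx, keep in zip(doubled, choice):
            candidate[idx] = runs[idx][0] * keep
        joined = "".join(candidate)
        if joined in dictionary:
            return joined
    return squeezed                                                # fall back to the squeezed form

# compress("pleeeeeeeaaaaaassseeee") -> "please"
```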
Text normalization is the process of translating ad-hoc abbreviations, typographical errors, phonetic substitutions, and ungrammatical structures used in text messaging (tweets and SMS) into plain English. Use of such language (often referred to as chatting language) induces noise, which poses additional processing challenges.
We use the normalization module implemented by BIBREF8 for text normalization. The training process requires a language model of the target language and a parallel corpus containing aligned un-normalized and normalized word pairs. Our language model consists of 15000 English words taken from various sources on the web.
The parallel corpus was collected from the following sources:
The Stanford Normalization Corpora, which consist of 9122 pairs of un-normalized and normalized words/phrases.
The above corpora, however, lacked acronyms and shorthand texts like 2mrw, l8r, b4, hlp, and flor, which are frequently used in chatting. We therefore collected 215 additional pairs of un-normalized to normalized word/phrase mappings via crowd-sourcing.
Table TABREF16 contains input and normalized output from our module.
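The actual module of BIBREF8 scores candidates with the language model; the sketch below only illustrates the simpler parallel-corpus lookup, with a handful of assumed entries rather than the real 9122 + 215 pairs.

```python
# Simplified normalization: a direct lookup in the parallel corpus of
# un-normalized -> normalized pairs. The language-model scoring used by
# the real module is omitted here.
PARALLEL_CORPUS = {         # illustrative entries only
    "2mrw": "tomorrow",
    "l8r": "later",
    "b4": "before",
    "hlp": "help",
    "plz": "please",
}

def normalize(text: str) -> str:
    return " ".join(PARALLEL_CORPUS.get(tok.lower(), tok) for tok in text.split())

# normalize("hlp needed b4 2mrw") -> "help needed before tomorrow"
```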
Users often make spelling mistakes while tweeting. A spell checker makes sure that a valid English word is sent to the classification system. We take this problem into account by introducing a spell checker as a pre-processing module, using the Java API of the Jazzy spell checker to handle spelling mistakes.
An example of a correction provided by the spell checker module is given below:
Input: building INLINEFORM0 flor, help
Output: building INLINEFORM0 floor, help
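The system itself relies on the Jazzy library for this step; as a rough stand-in, the same idea can be sketched with a plain edit-distance lookup against a dictionary (the word list below is only illustrative).

```python
from difflib import get_close_matches

WORD_LIST = ["floor", "fire", "building", "help", "flood", "road"]  # toy dictionary

def spell_correct(text: str, word_list=WORD_LIST) -> str:
    """Replace out-of-vocabulary tokens with the closest dictionary word, if a
    sufficiently similar one exists (a simplification of Jazzy's matching)."""
    corrected = []
    for tok in text.split():
        if tok.lower() in word_list:
            corrected.append(tok)
        else:
            match = get_close_matches(tok.lower(), word_list, n=1, cutoff=0.8)
            corrected.append(match[0] if match else tok)
    return " ".join(corrected)

# spell_correct("building flor help") -> "building floor help"
```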
Please note that our current system performs compression, normalization, and spell-checking only if the language used is English. The classifier training and detection processes are described below.
Emergency Classification
The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results, and later choose SVM for the stage-one classification model owing to its better F-score. The training is performed on tweets labeled with the classes <emergency> and <non-emergency>, using unigrams as features. We create word vectors of the strings in the tweet using a filter available in the WEKA API BIBREF9, and perform cross validation using standard classification techniques.
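The paper builds this stage with the WEKA toolkit; the sketch below substitutes scikit-learn, with a handful of invented example tweets, purely to illustrate the unigram word-vector features and the SVM versus NB comparison.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented miniature sample; the real set has 1313 <emergency> and 1887 <non-emergency> tweets.
tweets = ["huge fire near the railway station please help",
          "earthquake tremors felt across the city",
          "car accident on the highway people injured",
          "enjoying a bonfire with friends tonight",
          "this new track is fire great music",
          "so drunk after the party last night"]
labels = [1, 1, 1, 0, 0, 0]                    # 1 = <emergency>, 0 = <non-emergency>

for name, clf in [("SVM", LinearSVC()), ("NB", MultinomialNB())]:
    model = make_pipeline(CountVectorizer(), clf)                         # unigram word vectors
    scores = cross_val_score(model, tweets, labels, cv=3, scoring="f1")   # the paper uses 10-fold
    print(name, round(scores.mean(), 3))
```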
Type Classification
We employ a multi-class Naive Bayes classifier as the second-stage classification mechanism for categorizing tweets appropriately, depending on the type of emergency they indicate. This multi-class classifier is trained on data manually labeled with the category classes. We tokenize the training data using the “NgramTokenizer” and then apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which later classifies tweets into appropriate categories in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2” in table TABREF20.
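Again substituting scikit-learn for WEKA, stage two can be sketched as a multi-class Naive Bayes model over word n-grams; the labelled examples are invented, and unigrams and bigrams are included alongside the trigrams used in the paper so that this toy example has overlapping features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented examples of category-labelled (positively classified) tweets.
train_texts = ["fire broke out in the old mill building",
               "massive earthquake shook the entire city today",
               "truck and car accident on the western express highway"]
train_types = ["fire", "earthquake", "accident"]

stage2 = make_pipeline(CountVectorizer(ngram_range=(1, 3)), MultinomialNB())
stage2.fit(train_texts, train_types)

print(stage2.predict(["small fire reported in the building basement"]))  # -> ['fire']
```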
Location Visualizer
We use the Google Maps Geocoding API to display the possible location of the tweet origin based on its longitude and latitude. Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquake, cyclone, fire accident, etc. Since we currently collect tweets with a location filter for the city of "Mumbai", we display its map location on the interface. The possible occurrences of such incidents are displayed on the map as soon as our system detects them.
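One way to wire up the map component is a direct HTTP call to the Geocoding API; the sketch below does reverse geocoding (coordinates to a readable address for the marker label), with a placeholder API key and deliberately minimal error handling.

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def reverse_geocode(lat: float, lng: float, api_key: str) -> str:
    """Turn a tweet's coordinates into a human-readable address via the
    Google Maps Geocoding API."""
    resp = requests.get(GEOCODE_URL,
                        params={"latlng": f"{lat},{lng}", "key": api_key},
                        timeout=10)
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else "unknown location"

# reverse_geocode(19.07, 72.87, api_key="YOUR_KEY")
# -> an address string in Mumbai that can label the marker on the map
```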
We also display the same on an Android device using the WebView functionality available to developers, thus addressing the issue of portability. Our system displays visualizations of the various emergencies detected on both web browsers and mobile devices.
Evaluation
We evaluate our system using both automated and manual evaluation techniques. We perform 10-fold cross validation to obtain the F-scores for our classification systems, using the dataset creation technique described below. We also test the system in a real-time environment by tweeting about fires at random locations in our city using test accounts; our system was able to detect such tweets and display them, with their locations shown on the map.
Dataset Creation
We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident”, etc. Later, we manually label tweets with <emergency> and <non-emergency> labels for stage-one classification. Our dataset contains 1313 tweets with the positive label <emergency> and 1887 tweets with the negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake”, etc.
Classifier Evaluation
The results of 10-fold cross-validation performed for stage one are shown in table TABREF20, under the label “Stage 1”. In table TABREF20, for “Stage 1” of classification, the F-score obtained using the SVM classifier is INLINEFORM0, as shown in row 2, column 2. We also provide the system with sample tweets in real time and assess its ability to detect the emergency and classify it accordingly. The classification training for Stage 1 was performed using two traditional classification techniques, SVM and NB. SVM outperformed NB by around INLINEFORM1 and became the choice of classification technique for stage one.
Some false positives obtained during manual evaluation are “I am sooooo so drunk right nowwwwwwww” and “fire in my office , the boss is angry”. These occurrences show the need for more labeled gold data for our classifiers, and for additional features such as part-of-speech tags, named entity recognition, bigrams, and trigrams, to perform better.
The results of 10-fold cross-validation performed for the stage-two classification model are also shown in table TABREF20, under the label “Stage 2”. The training for stage two was also performed using both SVM and NB, but NB outperformed SVM by around INLINEFORM0 and became the choice for the stage-two classification model.
We also perform attribute evaluation for the classification model, and create a word cloud based on the output values, shown in figure FIGREF24. It shows that our classifier model is trained on appropriate words, which are very close to emergency situations, viz. “fire”, “earthquake”, “accident”, “break” (a unigram representation here, but possibly occurring in a bigram phrase with “fire”), etc. In figure FIGREF24, the word cloud shows “respond” as the most frequently occurring word, as people need urgent help and a quick response from the assistance teams.
Demonstration Description
Users interact with Civique through its web-based user interface and Android application interface. The features underlying Civique are demonstrated through the following two showcases:
Showcase 1: Tweet Detection and Classification
This showcase aims at detecting related tweets and classifying them into appropriate categories. For this, we have created a list of filter words, which are used to filter tweets from the Twitter streaming API. This set of words helps us filter the tweets related to any incident. We will tweet, and users will be able to see how our system captures such tweets and classifies them. Users should be able to see the tweet emerge as an incident on the web interface, as shown in figure FIGREF26, and on the Android application, as shown in figure FIGREF27. Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet. When a user clicks the emerged spot, the system should be able to display the sanitized version / extracted spatio-temporal data from the tweet. We test the system in a real-time environment and validate our experiments. We also report the false positives generated during the process in section SECREF25 above.
Showcase 2: User Notification and Contact Info.
Civique includes a set of local contacts for civic authorities who can be contacted in case of various emergencies. Users can see how Civique detects an emergency and classifies it. They can also watch how the system generates a notification on the web interface and the Android interface, requesting them to contact the authorities for emergencies. Users can change their preferences on the mobile device anytime and can also opt not to receive notifications. Users should be able to contact the authorities online using the application, but in case the online contact is not responsive, or in case of a sudden loss of connectivity, we provide the user with the offline contact information of the concerned civic authorities along with the notifications.
Conclusions
Civique is a system which detects urban emergencies like earthquakes, cyclones, fire break-outs, accidents, etc. and visualizes them on both a browsable web interface and an Android application. We collect data from the popular micro-blogging site Twitter and use language processing modules to sanitize the input. We use this data as input to train a two-step classification system, which indicates whether a tweet is related to an emergency or not, and if it is, what category of emergency it belongs to. We display such positively classified tweets along with their type and location on a Google map, and notify our users so they can inform the concerned authorities, and possibly evacuate the area if their location matches the affected area. We believe such a system can help the disaster management machinery, and government bodies like the fire department, police department, etc., to act swiftly, thus minimizing the loss of life.
Twitter users use slang, profanity, misspellings, and neologisms. We use standard cleaning methods and combine NLP with machine learning (ML) to further our cause of tweet classification. At the current stage, we also have an Android application ready for our system, which shows the improved, mobile-viewable web interface.
In the future, we aim to develop detection of emergency categories on the fly; obscure emergencies like “airplane hijacking” should also be detected by our system. We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems at the same location are the result of a single event, or relate to multiple events. | Yes
94dc437463f7a7d68b8f6b57f1e3606eacc4490a | 94dc437463f7a7d68b8f6b57f1e3606eacc4490a_0 | Q: How many categories are there?
Unanswerable
9d9d84822a9c42eb0257feb7c18086d390dae3be | 9d9d84822a9c42eb0257feb7c18086d390dae3be_0 | Q: What was the baseline?
Unanswerable
d27e3a099954e917b6491e81b2e907478d7f1233 | d27e3a099954e917b6491e81b2e907478d7f1233_0 | Q: Are the tweets specific to a region?
Text: Introduction
With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. Growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on micro-blogging website Twitter presents a unique opportunity to scan these posts. These Short texts (e.g. "tweets") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional web outlets provide emergency phone numbers (i.e. 100, 911), etc., and are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform i.e. social media, and provides an alternative to these conventional methods. In case of their failure or when such means are busy/occupied, an alternative could prove to be life saving.
These real life events are reported on Twitter with different perspectives, opinions, and sentiment. Every day, people discuss events thousands of times across social media sites. We would like to detect such events in case of an emergency. Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet, number of words, and context to devise a classifier for event detection. BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter. BIBREF2 describe a system to automatically detect events about known entities from Twitter. This work is highly specific to detection of events only related to known entities. BIBREF3 discuss a system that returns a ranked list of relevant events given a user query.
Several research efforts have focused on identifying events in real time( BIBREF4 BIBREF5 BIBREF6 BIBREF0 ). These include systems to detect emergent topics from Twitter in real time ( BIBREF4 BIBREF7 ), an online clustering technique for identifying tweets in real time BIBREF5 , a system to detect localized events and also track evolution of such events over a period of time BIBREF6 . Our focus is on detecting urban emergencies as events from Twitter messages. We classify events ranging from natural disasters to fire break outs, and accidents. Our system detects whether a tweet, which contains a keyword from a pre-decided list, is related to an actual emergency or not. It also classifies the event into its appropriate category, and visualizes the possible location of the emergency event on the map. We also support notifications to our users, containing the contacts of specifically concerned authorities, as per the category of their tweet.
The rest of the paper is as follows: Section SECREF2 provides the motivation for our work, and the challenges in building such a system. Section SECREF3 describes the step by step details of our work, and its results. We evaluate our system and present the results in Section SECREF4 . Section SECREF5 showcases our demonstrations in detail, and Section SECREF6 concludes the paper by briefly describing the overall contribution, implementation and demonstration.
Motivation and Challenges
In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 . We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. Our work is motivated by the following factors, (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.
There are several challenges in building such an application: (a) Such a system expects a tweet to be location tagged. Otherwise, event detection techniques to extract the spatio-temporal data from the tweet can be vague, and lead to false alarms. (b) Such a system should also be able to verify the user's credibility as pranksters may raise false alarms. (c) Tweets are usually written in a very informal language, which requires a sophisticated language processing component to sanitize the tweet input before event detection. (d) A channel with the concerned authorities should be established for them to take serious action, on alarms raised by such a system. (e) An urban emergency such as a natural disaster could affect communications severely, in case of an earthquake or a cyclone, communications channels like Internet connectivity may get disrupted easily. In such cases, our system may not be of help, as it requires the user to be connected to the internet. We address the above challenges and present our approach in the next section.
Our Approach
We propose a software architecture for Emergency detection and visualization as shown in figure FIGREF9 . We collect data using Twitter API, and perform language pre-processing before applying a classification model. Tweets are labelled manually with <emergency>and <non-emergency>labels, and later classified manually to provide labels according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.
We use traditional classification techniques such as Support Vector Machines(SVM), and Naive Bayes(NB) for training, and perform 10-fold cross validation to obtain f-scores. Later, in real time, our system uses the Twitter streaming APIs to get data, pre-processes it using the same modules, and detects emergencies using the classifiers built above. The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities. The pre-processing of Twitter data obtained is needed as it usually contains ad-hoc abbreviations, phonetic substitutions, URLs, hashtags, and a lot of misspelled words. We use the following language processing modules for such corrections.
Pre-Processing Modules
We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, special symbols like @ along with the user mentions, Hashtags and any associated text. We also replace special symbols by blank spaces, and inculcate the module as shown in figure FIGREF9 .
An example of such a sample tweet cleaning is shown in table TABREF10 .
While tweeting, users often express their emotions by stressing over a few characters in the word. For example, usage of words like hellpppp, fiiiiiireeee, ruuuuunnnnn, druuuuuunnnkkk, soooooooo actually corresponds to help, fire, run, drunk, so etc. We use the compression module implemented by BIBREF8 for converting terms like “pleeeeeeeaaaaaassseeee” to “please”.
It is unlikely for an English word to contain the same character consecutively three or more times. We hence compress all repeated windows of character length greater than two to two characters. For example, “pleeeeeaaaassee” is converted to “pleeaassee”. Each window now contains two characters of the same letter in cases of repetition. Let n be the number of windows obtained from the previous step. We then apply a brute-force search over INLINEFORM0 possibilities to select a valid dictionary word.
Table TABREF13 contains sanitized sample output from our compression module for further processing.
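Below is a minimal sketch of this compress-then-search step, under the assumption that each repeated window may stand for either one or two characters of the intended word (so the INLINEFORM0 possibilities correspond to the 2^n combinations); the tiny word list and helper names are illustrative, not the actual module of BIBREF8.

```python
import re
from itertools import product

DICTIONARY = {"please", "help", "fire", "run", "drunk", "so"}  # stand-in word list

def compress_repeats(word: str) -> str:
    # Collapse any run of three or more identical characters down to exactly two.
    return re.sub(r"(.)\1{2,}", r"\1\1", word)

def candidates(compressed: str):
    # Windows are maximal runs of identical characters (length 1 or 2 after compression);
    # every doubled window may stand for one or two characters -> brute force over 2^n options.
    windows = [m.group(0) for m in re.finditer(r"(.)\1?", compressed)]
    options = [(w[0], w) if len(w) == 2 else (w,) for w in windows]
    for combo in product(*options):
        yield "".join(combo)

def normalize_stretched(word: str) -> str:
    compressed = compress_repeats(word.lower())
    for cand in candidates(compressed):
        if cand in DICTIONARY:
            return cand
    return compressed  # fall back to the compressed form

print(normalize_stretched("pleeeeeeaaaaaassseeee"))  # -> "please"
```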
Text Normalization is the process of translating ad-hoc abbreviations, typographical errors, phonetic substitutions and ungrammatical structures used in text messaging (Tweets and SMS) to plain English. Use of such language (often referred to as chatting language) induces noise which poses additional processing challenges.
We use the normalization module implemented by BIBREF8 for text normalization. The training process requires a language model of the target language and a parallel corpus containing aligned un-normalized and normalized word pairs. Our language model consists of 15000 English words taken from various sources on the web.
The parallel corpus was collected from the following sources:
Stanford Normalization Corpora which consists of 9122 pairs of un-normalized and normalized words / phrases.
The above corpus, however, lacked acronyms and short-hand texts like 2mrw, l8r, b4, hlp, flor which are frequently used in chatting. We collected 215 pairs of un-normalized to normalized word/phrase mappings via crowd-sourcing.
Table TABREF16 contains input and normalized output from our module.
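As an illustration only, the lookup such a normalization step can fall back on is a simple mapping table; the sketch below covers just the short-hand examples mentioned above, whereas the actual module of BIBREF8 is trained with a language model and the aligned corpora.

```python
# Illustrative short-hand mappings (examples from the text, collected via crowd-sourcing).
NORMALIZATION_MAP = {
    "2mrw": "tomorrow",
    "l8r": "later",
    "b4": "before",
    "hlp": "help",
}

def normalize_tokens(tokens):
    # Replace known chat abbreviations; leave everything else untouched.
    return [NORMALIZATION_MAP.get(tok.lower(), tok) for tok in tokens]

print(normalize_tokens("hlp needed b4 2mrw".split()))  # -> ['help', 'needed', 'before', 'tomorrow']
```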
Users often make spelling mistakes while tweeting. A spell checker makes sure that a valid English word is sent to the classification system. We take this problem into account by introducing a spell checker as a pre-processing module, using the Java API of the Jazzy spell checker to handle spelling mistakes.
An example of correction provided by the Spell Checker module is given below:-
Input: building INLINEFORM0 flor, help
Output: building INLINEFORM0 floor, help
Please note that our current system performs compression, normalization and spell-checking if the language used is English. The classifier training and detection processes are described below.
Emergency Classification
The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results, and later choose SVM as the stage-one classification model owing to its better F-score. The training is performed on tweets labeled with the classes <emergency> and <non-emergency>, based on unigrams as features. We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques.
Type Classification
We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which later classifies tweets into appropriate categories in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 .
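The paper builds both classifiers with the WEKA API; purely as an illustration, the sketch below reproduces the same two-stage idea in scikit-learn (unigram word vectors with an SVM for the emergency filter, and up-to-trigram features with multinomial Naive Bayes for the type classifier). The feature settings and function names here are our assumptions, not the authors' exact configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Stage 1: <emergency> vs <non-emergency>, unigram word vectors + SVM.
stage1 = Pipeline([
    ("vec", CountVectorizer(ngram_range=(1, 1))),
    ("clf", LinearSVC()),
])

# Stage 2: emergency type (fire, earthquake, accident, ...), up-to-trigram features + Naive Bayes.
stage2 = Pipeline([
    ("vec", CountVectorizer(ngram_range=(1, 3))),
    ("clf", MultinomialNB()),
])

def train(stage1_texts, stage1_labels, stage2_texts, stage2_labels):
    # 10-fold cross validation, as in the paper, then fit on the full data.
    print("Stage 1 F1:", cross_val_score(stage1, stage1_texts, stage1_labels, cv=10, scoring="f1_macro").mean())
    print("Stage 2 F1:", cross_val_score(stage2, stage2_texts, stage2_labels, cv=10, scoring="f1_macro").mean())
    stage1.fit(stage1_texts, stage1_labels)
    stage2.fit(stage2_texts, stage2_labels)

def classify(tweet: str):
    # Run the filter first; only positively classified tweets reach the type classifier.
    if stage1.predict([tweet])[0] == "emergency":
        return stage2.predict([tweet])[0]
    return None
```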
Location Visualizer
We use the Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude. Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquake, cyclone, fire accident etc. Since we currently collect tweets with a location filter for the city of "Mumbai", we display its map location on the interface. The possible occurrences of such incidents are displayed on the map as soon as our system is able to detect them.
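A hedged sketch of how such a lookup can be issued against the Google Maps Geocoding web service is given below; the endpoint and response fields are the standard ones for that API, but the surrounding code, query, and key handling are illustrative rather than the authors' implementation.

```python
import requests

def geocode(place: str, api_key: str):
    # Resolve a place name (e.g., extracted from a tweet) to latitude/longitude.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": place, "key": api_key},
        timeout=10,
    )
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

# Example (hypothetical key): pin an incident reported near Dadar, Mumbai.
# print(geocode("Dadar, Mumbai", api_key="YOUR_API_KEY"))
```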
We also display the same on an Android device using the WebView functionality available to developers, thus solving the issue of portability. Our system displays visualization of the various emergencies detected on both web browsers and mobile devices.
Evaluation
We evaluate our system using automated and manual evaluation techniques. We perform 10-fold cross validation to obtain the F-scores for our classification systems. We use the following technique for dataset creation. We test the system in real-time environments, tweeting about fires at random locations in our city using test accounts. Our system was able to detect such tweets and display them with their locations shown on the map.
Dataset Creation
We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with <emergency> and <non-emergency> labels for stage-one classification. Our dataset contains 1313 tweets with the positive label <emergency> and 1887 tweets with the negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc.
Classifier Evaluation
The results of 10-fold cross-validation performed for stage one are shown in table TABREF20 , under the label “Stage 1”. In table TABREF20 , for “Stage 1” of classification, the F-score obtained using the SVM classifier is INLINEFORM0 , as shown in row 2, column 2. We also provide the system with sample tweets in real time and assess its ability to detect the emergency, and classify it accordingly. The classification training for Stage 1 was performed using two traditional classification techniques, SVM and NB. SVM outperformed NB by around INLINEFORM1 and became the choice of classification technique for stage one.
Some false positives obtained during manual evaluation are “I am sooooo so drunk right nowwwwwwww” and “fire in my office , the boss is angry”. These occurrences show the need for more labeled gold data for our classifiers, and for additional features, like part-of-speech tags, named entity recognition, bigrams, trigrams etc., to perform better.
The results of 10-fold cross-validation performed for the stage two classification model are also shown in table TABREF20 , under the label “Stage 2”. The training for stage two was also performed using both SVM and NB, but NB outperformed SVM by around INLINEFORM0 to become the choice for the stage two classification model.
We also perform attribute evaluation for the classification model, and create a word cloud based on the output values, shown in figure FIGREF24 . It shows that our classifier model is trained on appropriate words, which are very close to the emergency situations, viz. “fire”, “earthquake”, “accident”, “break” (a unigram representation here, but it possibly occurs in a bigram phrase with “fire”) etc. In figure FIGREF24 , the word cloud shows “respond” as the most frequently occurring word, since people need urgent help and a quick response from the assistance teams.
Demonstration Description
Users interact with Civique through its Web-based user interface and Android based application interface. The features underlying Civique are demonstrated through the following two show cases:
Show case 1: Tweet Detection and Classification
This showcase aims at detecting related tweets, and classifying them into appropriate categories. For this, we have created a list of filter words, which are used to filter tweets from the Twitter streaming API. This set of words helps us filter the tweets related to any incident. We will tweet, and users are able to see how our system captures such tweets and classifies them. Users should be able to see the tweet emerge as an incident on the web interface, as shown in figure FIGREF26 , and on the Android application, as shown in figure FIGREF27 . Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet. When a user clicks the emerged spot, the system should be able to display the sanitized version / extracted spatio-temporal data from the tweet. We test the system in a realtime environment, and validate our experiments. We also report the false positives generated during the process in section SECREF25 above.
Show case 2: User Notification and Contact Info.
Civique includes a set of local contacts for civic authorities who are to be / who can be contacted in case of various emergencies. Users can see how Civique detects an emergency and classifies it. They can also watch how the system generates a notification on the web interface and the Android interface, requesting them to contact the authorities for emergencies. Users can change their preferences on the mobile device anytime and can also opt not to receive notifications. Users should be able to contact the authorities online using the application, but in case the online contact is not responsive, or in case of a sudden loss of connectivity, we provide the user with the offline contact information of the concerned civic authorities along with the notifications.
Conclusions
Civique is a system which detects urban emergencies like earthquakes, cyclones, fire breakouts, accidents etc. and visualizes them both on a browsable web interface and an Android application. We collect data from the popular micro-blogging site Twitter and use language processing modules to sanitize the input. We use this data as input to train a two-step classification system, which indicates whether a tweet is related to an emergency or not, and if it is, then what category of emergency it belongs to. We display such positively classified tweets along with their type and location on a Google map, and notify our users to inform the concerned authorities, and possibly evacuate the area, if their location matches the affected area. We believe such a system can help the disaster management machinery, and government bodies like the Fire department, Police department, etc., to act swiftly, thus minimizing the loss of life.
Twitter users use slang, profanity, misspellings and neologisms. We use standard cleaning methods, and combine NLP with Machine Learning (ML) to further our cause of tweet classification. At the current stage, we also have an Android application ready for our system, which shows the improvised, mobile-viewable web interface.
In the future, we aim to develop detection of emergency categories on the fly; obscure emergencies like “airplane hijacking” should also be detected by our system. We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems at the same location are the result of a single event, or relate to multiple events. | No |
c0a11ba0f6bbb4c69b5a0d4ae9d18e86a4a8f354 | c0a11ba0f6bbb4c69b5a0d4ae9d18e86a4a8f354_0 | Q: Do they release MED?
Text: Introduction
Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .
Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider the examples in (1) and (2) below.
(1) a. All [workers $\downarrow$] [joined for a French dinner $\uparrow$]
    b. All workers joined for a dinner
    c. All new workers joined for a French dinner
(2) a. Not all [new workers $\uparrow$] joined for a dinner
    b. Not all workers joined for a dinner
An upward entailing context (shown by [... $\uparrow$]) allows an inference from (1a) to (1b), where French dinner is replaced by the more general concept dinner. On the other hand, a downward entailing context (shown by [... $\downarrow$]) allows an inference from (1a) to (1c), where workers is replaced by the more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in (2a)), as witnessed by the fact that (2a) entails (2b). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.
For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.
To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.
We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.
In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments.
Monotonicity
As an example of a monotonicity inference, consider the example below with the determiner every; here the premise $P$ entails the hypothesis $H$.
$P$: Every [$_{\mathsf{NP}}$ person $\downarrow$] [$_{\mathsf{VP}}$ bought a movie ticket $\uparrow$]
$H$: Every young person bought a ticket
Every is downward entailing in the first argument ( $\mathsf {NP}$ ) and upward entailing in the second argument ( $\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\sqsupseteq $ young person), replacing it with its hyponym (person $\sqsupseteq $ spectator), or adding conjunction (person $\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.
There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in the example below, if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.
$P$: When [every [$_{\mathsf{NP}}$ young person $\uparrow$] [$_{\mathsf{VP}}$ bought a ticket $\downarrow$]], [that shop was open]
$H$: When [every [$_{\mathsf{NP}}$ person] [$_{\mathsf{VP}}$ bought a movie ticket]], [that shop was open]
Thus, the polarity ($\uparrow$ and $\downarrow$), where the replacement with more general (specific) phrases licenses entailment, needs to be determined by the interaction of monotonicity properties and syntactic structures; polarity of each constituent is calculated based on a monotonicity operator of functional expressions (e.g., every, when) and their function-term relations.
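As a rough illustration of this polarity computation, the sketch below propagates polarities top-down through a toy tree whose functional nodes carry monotonicity signatures; it is a drastic simplification of what a CCG-based system computes over full derivations, and the data structures and signature table are ours.

```python
# Monotonicity signatures: per-argument flip (+1 keeps the current direction, -1 reverses it).
SIGNATURES = {
    "all":   (-1, +1),   # downward in the first argument (NP), upward in the second (VP)
    "every": (-1, +1),
    "not":   (-1,),
    "when":  (-1, +1),   # downward inside the when-clause, upward in the main clause
}

def mark_polarity(node, polarity=+1, out=None):
    """node is either a leaf string or a pair (operator, [children]); +1 = upward, -1 = downward."""
    if out is None:
        out = []
    if isinstance(node, str):
        out.append((node, "up" if polarity > 0 else "down"))
        return out
    op, children = node
    flips = SIGNATURES.get(op, (+1,) * len(children))
    for child, flip in zip(children, flips):
        mark_polarity(child, polarity * flip, out)
    return out

# "Not all workers joined for a dinner": not( all(workers, joined for a dinner) )
tree = ("not", [("all", ["workers", "joined for a dinner"])])
print(mark_polarity(tree))  # -> [('workers', 'up'), ('joined for a dinner', 'down')]
```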
Human-oriented dataset
To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.
For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.
Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.
As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.
We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.
We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.
Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. Entry to our task was restricted to workers from native English-speaking countries. 128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).
The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.
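In code form this labeling rule is a one-liner (a sketch; the field name is ours):

```python
def gold_label(context_monotonicity: str) -> str:
    # Hypotheses are built by making the polarized part more specific, so a downward
    # context yields entailment and an upward context yields non-entailment.
    return "entailment" if context_monotonicity == "downward" else "non-entailment"
```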
However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.
Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:
entailment: the case where the hypothesis is true under any situation that the premise describes.
non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.
unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.
Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.
Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.
To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.
(a) In the following example, child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.
$P$: Tom is no longer a child
$H$: Tom is no longer a kid
These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.
(b) Consider the following example.
$P$: The moon has no atmosphere
$H$: The moon has no atmosphere, and the gravity force is too low
The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.
In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets.
Linguistics-oriented dataset
We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.
We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.
Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis.
Statistics
We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, gold labels are always non-entailment, whether a hypothesis is more specific or general than its premise, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and overlap ratios of vocabulary with previous standard NLI datasets are 95% with MultiNLI and 90% with SNLI.
We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. We defined 6 tags (see Table 4 for examples):
lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)
reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once
conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis
disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis
conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis
negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis
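A rough keyword-based sketch of how some of these tags can be assigned automatically is given below; the paper does not state that tagging was done this way, the cue lists are only those mentioned above, and the lexical-knowledge and reverse tags are omitted because they need lexical resources and polarity information.

```python
import re

NPI_CUES = ["any", "ever", "at all", "anything", "anyone", "anymore", "anyhow", "anywhere"]
CONDITIONAL_CUES = ["if", "when", "unless"]

def tag_example(premise: str, hypothesis: str):
    tags = set()
    p, h = premise.lower(), hypothesis.lower()

    def has(phrase, text):
        return re.search(r"\b" + re.escape(phrase) + r"\b", text) is not None

    if has("and", h) and not has("and", p):
        tags.add("conjunction")      # heuristic: "and" added in the hypothesis
    if has("or", h) and not has("or", p):
        tags.add("disjunction")      # heuristic: "or" added in the hypothesis
    if any(has(c, h) for c in CONDITIONAL_CUES):
        tags.add("conditionals")
    if any(has(c, h) for c in NPI_CUES):
        tags.add("NPIs")
    return tags

print(tag_example("I saw a dog", "I saw a dog or a small cat with green eyes"))  # {'disjunction'}
```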
Baselines
To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.
Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below.
Data augmentation for analysis
To explore whether the performance of models on monotonicity reasoning depends on the training set or the model themselves, we conducted further analysis performed by data augmentation with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.
We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.
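The training-set construction behind this comparison can be expressed as a small data-preparation step; the sketch below assumes simple (premise, hypothesis, label) records and is not the authors' training code, which fine-tunes BERT on the resulting sets.

```python
import random

def make_training_sets(multinli, help_data, hypothesis_only=False):
    """multinli, help_data: lists of dicts with 'premise', 'hypothesis', 'label' keys."""
    base = list(multinli)
    augmented = list(multinli) + list(help_data)   # MultiNLI + HELP augmentation
    if hypothesis_only:
        # Control setting: blank out premises so the model can only exploit hypothesis biases.
        blank = lambda data: [{**ex, "premise": ""} for ex in data]
        base, augmented = blank(base), blank(augmented)
    random.shuffle(base)
    random.shuffle(augmented)
    return base, augmented
```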
Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.
To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.
Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.
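The ratio sweep can be reproduced by sampling mixtures of the two HELP subsets before retraining; a sketch (set sizes and variable names are assumptions):

```python
import random

def mix_by_ratio(upward, downward, downward_ratio, total):
    """Build a training set of `total` examples with the given share of downward inferences."""
    n_down = int(total * downward_ratio)          # assumes both subsets are large enough
    n_up = total - n_down
    sample = random.sample(downward, n_down) + random.sample(upward, n_up)
    random.shuffle(sample)
    return sample

# e.g., sweep the share of downward examples from 0% to 100% and retrain each time:
# for r in [0.0, 0.25, 0.5, 0.75, 1.0]:
#     train_set = mix_by_ratio(upward_help, downward_help, r, total=5000)
```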
Previous work using HELP BIBREF11 reported that the BERT trained with MultiNLI and HELP containing both upward and downward inferences improved accuracy on both directions of monotonicity. MultiNLI rarely comes from downward inferences (see Section "Discussion" ), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.
Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work.
Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice.
Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.
Regarding lexical knowledge, the data augmentation technique improved the performance much better on downward inferences which do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 problems are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, though accuracy on non-lexical inference problems was better than that on lexical inference problems.
Discussion
One of our findings is that there is a type of downward inferences to which every model fails to provide correct answers. One such example is concerned with the contrast between few and a few. Among 394 problems for which all models provided wrong answers, 148 downward inference problems were problems involving the downward monotonicity operator few such as in the following example:
$P$: Few of the books had typical or marginal readers
$H$: Few of the books had some typical readers
We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.
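The two probing transformations described above amount to simple string rewrites; a sketch (the paper does not spell out the exact rewrite for the removal case, so that variant just deletes the token):

```python
import re

def to_upward_variants(sentence: str):
    # (i) replace the downward operator "few" with the upward operator "a few"
    replaced = re.sub(r"\b[Ff]ew\b",
                      lambda m: "A few" if m.group(0).istitle() else "a few",
                      sentence)
    # (ii) remove the downward operator "few" and tidy the spacing
    removed = re.sub(r"\s+", " ", re.sub(r"\b[Ff]ew\b", "", sentence)).strip()
    return replaced, removed

replaced, removed = to_upward_variants("Few of the books had typical or marginal readers")
print(replaced)  # -> "A few of the books had typical or marginal readers"
```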
The results of crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally-occurring texts, contains only 77 downward inference problems, including the following one.
$P$: No racin' on the Range
$H$: No horse racing is allowed on the Range
One possible reason why there are few downward inferences is that certain pragmatic factors can prevent people from drawing a downward inference. For instance, in the case of the inference problem below, unless the added disjunct in $H$, i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$.
$P$: I saw a dog
$H$: I saw a dog or a small cat with green eyes
Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts.
Conclusion
We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models, and found that performance on the new test set was substantially worse for all state-of-the-art NLI models. In addition, the accuracy on downward inferences was inversely proportional to the one on upward inferences.
An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning. We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way.
Acknowledgement
This work was partially supported by JST AIP- PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion. | Yes |
dfc393ba10ec4af5a17e5957fcbafdffdb1a6443 | dfc393ba10ec4af5a17e5957fcbafdffdb1a6443_0 | Q: What NLI models do they analyze?
Text: Introduction
Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .
Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider the examples in (1) and (2) below.
(1) a. All [workers $\downarrow$] [joined for a French dinner $\uparrow$]
    b. All workers joined for a dinner
    c. All new workers joined for a French dinner
(2) a. Not all [new workers $\uparrow$] joined for a dinner
    b. Not all workers joined for a dinner
An upward entailing context (shown by [... $\uparrow$]) allows an inference from (1a) to (1b), where French dinner is replaced by the more general concept dinner. On the other hand, a downward entailing context (shown by [... $\downarrow$]) allows an inference from (1a) to (1c), where workers is replaced by the more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in (2a)), as witnessed by the fact that (2a) entails (2b). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.
For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.
To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.
We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.
In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments.
Monotonicity
As an example of a monotonicity inference, consider the example below with the determiner every; here the premise $P$ entails the hypothesis $H$.
$P$: Every [$_{\mathsf{NP}}$ person $\downarrow$] [$_{\mathsf{VP}}$ bought a movie ticket $\uparrow$]
$H$: Every young person bought a ticket
Every is downward entailing in the first argument ( $\mathsf {NP}$ ) and upward entailing in the second argument ( $\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\sqsupseteq $ young person), replacing it with its hyponym (person $\sqsupseteq $ spectator), or adding conjunction (person $\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.
There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in the example below, if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.
$P$: When [every [$_{\mathsf{NP}}$ young person $\uparrow$] [$_{\mathsf{VP}}$ bought a ticket $\downarrow$]], [that shop was open]
$H$: When [every [$_{\mathsf{NP}}$ person] [$_{\mathsf{VP}}$ bought a movie ticket]], [that shop was open]
Thus, the polarity ($\uparrow$ and $\downarrow$), where the replacement with more general (specific) phrases licenses entailment, needs to be determined by the interaction of monotonicity properties and syntactic structures; polarity of each constituent is calculated based on a monotonicity operator of functional expressions (e.g., every, when) and their function-term relations.
Human-oriented dataset
To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.
For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.
Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.
As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.
We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.
We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.
Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. Entry to our task was restricted to workers from native English-speaking countries. 128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).
The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.
However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.
Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:
entailment: the case where the hypothesis is true under any situation that the premise describes.
non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.
unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.
Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.
Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.
To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.
(a) In the following example, child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.
$P$: Tom is no longer a child
$H$: Tom is no longer a kid
These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.
(b) Consider the following example.
$P$: The moon has no atmosphere
$H$: The moon has no atmosphere, and the gravity force is too low
The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.
In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets.
Linguistics-oriented dataset
We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.
We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.
Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis.
Statistics
We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, gold labels are always non-entailment, whether a hypothesis is more specific or general than its premise, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and overlap ratios of vocabulary with previous standard NLI datasets are 95% with MultiNLI and 90% with SNLI.
We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. We defined 6 tags (see Table 4 for examples):
lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)
reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once
conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis
disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis
conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis
negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis
Baselines
To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.
Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below.
Data augmentation for analysis
To explore whether the performance of models on monotonicity reasoning depends on the training set or on the models themselves, we conducted a further analysis by augmenting the training data with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.
We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.
Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.
To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.
Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.
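A minimal sketch of how training sets with a controlled upward-to-downward ratio could be assembled for this kind of experiment is shown below. Subset size, seed, and sampling details are assumptions; the authors' exact setup is not reproduced here.

```python
import random

# Sketch: build a training subset with a fixed proportion of downward
# inference examples, as in the ratio experiment described above.

def mix_by_ratio(upward, downward, downward_ratio, size, seed=0):
    rng = random.Random(seed)
    n_down = int(size * downward_ratio)
    n_up = size - n_down
    subset = rng.sample(downward, n_down) + rng.sample(upward, n_up)
    rng.shuffle(subset)
    return subset

upward_examples = [f"up-{i}" for i in range(1000)]
downward_examples = [f"down-{i}" for i in range(1000)]
train = mix_by_ratio(upward_examples, downward_examples, downward_ratio=0.75, size=400)
print(sum(example.startswith("down") for example in train))  # -> 300
```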
Previous work using HELP BIBREF11 reported that BERT trained with MultiNLI and HELP, which contains both upward and downward inferences, improved accuracy on both directions of monotonicity. MultiNLI rarely contains downward inference problems (see the Discussion section), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.
Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work.
Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice.
Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.
Regarding lexical knowledge, the data augmentation technique improved performance much more on downward inferences that do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 problems are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, though accuracy on non-lexical inference problems was better than that on lexical inference problems.
Discussion
One of our findings is that there is a type of downward inference for which every model fails to provide correct answers. One such case concerns the contrast between few and a few. Among the 394 problems for which all models provided wrong answers, 148 were downward inference problems involving the downward monotonicity operator few, as in the following example:
$P$ : Few of the books had typical or marginal readers
$H$ : Few of the books had some typical readers
We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.
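The two probing transformations can be approximated with simple string rewrites, as in the sketch below. Real test sentences may require parse-aware editing, and exactly how the operator was removed in the authors' probe is not specified here, so this is only an approximation.

```python
import re

# Sketch of the two probing transformations: (i) replace the downward
# operator "few" with the upward operator "a few"; (ii) delete "few".
# Plain regex rewriting is an approximation of the actual procedure.

def to_a_few(sentence: str) -> str:
    # The lookbehind avoids rewriting an existing "a few".
    return re.sub(r"\b(?<!a )[Ff]ew\b", "a few", sentence)

def drop_few(sentence: str) -> str:
    return re.sub(r"\s*\b[Ff]ew\b\s*", " ", sentence).strip()

premise = "Few of the books had typical or marginal readers"
print(to_a_few(premise))  # -> a few of the books had typical or marginal readers
print(drop_few(premise))  # -> of the books had typical or marginal readers
```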
The results of the crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally occurring texts, contains only 77 downward inference problems, including the following one.
$P$ : No racin' on the Range
$H$ : No horse racing is allowed on the Range
One possible reason why there are few downward inferences is that certain pragmatic factors can block people from drawing a downward inference. For instance, in the case of the inference problem below, unless the added disjunct in $H$ , i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$ .
$P$ : I saw a dog
$H$ : I saw a dog or a small cat with green eyes
Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts.
Conclusion
We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models, and found that performance on the new test set was substantially worse for all state-of-the-art NLI models. In addition, the accuracy on downward inferences was inversely proportional to the one on upward inferences.
An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning. We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way.
Acknowledgement
This work was partially supported by JST AIP- PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion. | BiMPM, ESIM, Decomposable Attention Model, KIM, BERT |
311a7fa62721e82265f4e0689b4adc05f6b74215 | 311a7fa62721e82265f4e0689b4adc05f6b74215_0 | Q: How do they define upward and downward reasoning?
Text: Introduction
Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .
Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples (1) and (2) below.
(1a) All [ workers $\downarrow$ ] [joined for a French dinner $\uparrow$ ]
(1b) All workers joined for a dinner
(1c) All new workers joined for a French dinner
(2a) Not all [ new workers $\uparrow$ ] joined for a dinner
(2b) Not all workers joined for a dinner
A context is upward entailing (shown by [... $\uparrow$ ]) when it allows an inference such as the one from (1a) to (1b), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\downarrow$ ]) allows an inference such as the one from (1a) to (1c), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in (2a)), as witnessed by the fact that (2a) entails (2b). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.
For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.
To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.
We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.
In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments.
Monotonicity
As an example of a monotonicity inference, consider the following example with the determiner every; here the premise $P$ entails the hypothesis $H$ .
$P$ : Every [ $_{\mathsf {NP}}$ person $\downarrow$ ] [ $_{\mathsf {VP}}$ bought a movie ticket $\uparrow$ ]
$H$ : Every young person bought a ticket
Every is downward entailing in the first argument ( $\mathsf {NP}$ ) and upward entailing in the second argument ( $\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\sqsupseteq $ young person), replacing it with its hyponym (person $\sqsupseteq $ spectator), or adding conjunction (person $\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.
There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in the following example, if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.
$P$ : When [every [ $_{\mathsf {NP}}$ young person $\uparrow$ ] [ $_{\mathsf {VP}}$ bought a ticket $\downarrow$ ]], [that shop was open]
$H$ : When [every [ $_{\mathsf {NP}}$ person] [ $_{\mathsf {VP}}$ bought a movie ticket]], [that shop was open]
Thus, the polarity ( $\uparrow$ and $\downarrow$ ), which determines whether the replacement with a more general (specific) phrase licenses entailment, needs to be computed from the interaction of monotonicity properties and syntactic structures; the polarity of each constituent is calculated based on the monotonicity operators of functional expressions (e.g., every, when) and their function-term relations.
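The following toy sketch illustrates the idea behind this polarity computation: each operator has a monotonicity signature per argument, and polarities compose multiplicatively along the path from the root. It mirrors the idea behind systems such as ccg2mono but is in no way its implementation; the tree encoding and operator signatures are simplifications.

```python
# Toy polarity marking: +1 = upward, -1 = downward; polarities compose
# by multiplication from the root down to each argument.

SIGNATURES = {
    "every": (-1, +1),  # downward in its first (NP) argument, upward in its second (VP)
    "not":   (-1,),
    "when":  (-1, +1),  # downward inside its scope, upward outside
}

def mark(tree, polarity=+1, out=None):
    # tree is either a leaf string or a tuple (operator, arg1, arg2, ...)
    if out is None:
        out = {}
    if isinstance(tree, str):
        out[tree] = "up" if polarity > 0 else "down"
        return out
    operator, *args = tree
    for sign, arg in zip(SIGNATURES[operator], args):
        mark(arg, polarity * sign, out)
    return out

sentence = ("when", ("every", "young person", "bought a ticket"), "that shop was open")
print(mark(sentence))
# -> {'young person': 'up', 'bought a ticket': 'down', 'that shop was open': 'up'}
```

On this simplified encoding the marking reproduces the polarities of the when-example above.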
Human-oriented dataset
To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.
For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.
Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.
As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.
We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase them in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.
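Selecting premises that contain the targeted operators can be sketched as a simple surface filter, as below. The expression list is a small illustrative subset of Tables 1 and 2, and a surface check ignores argument structure (e.g., every and all are downward only in their first argument), so the real selection relies on the polarity computation described above.

```python
# Sketch: filter premises that contain a downward-entailing expression.
# The list is a small illustrative subset; a surface match ignores which
# argument position the expression takes scope over.

DOWNWARD_EXPRESSIONS = {"no", "few", "at most", "never", "without", "not",
                        "every", "all", "rarely", "unless"}

def has_downward_expression(sentence: str) -> bool:
    text = f" {sentence.lower()} "
    return any(f" {expression} " in text for expression in DOWNWARD_EXPRESSIONS)

premises = [
    "Tom bought a movie ticket",
    "Few workers joined for a French dinner",
    "Tom never drives without a license",
]
print([p for p in premises if has_downward_expression(p)])
# -> ['Few workers joined for a French dinner', 'Tom never drives without a license']
```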
We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.
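As a toy illustration of these three narrowing strategies, the snippet below applies each of them to the phrase person, reusing the examples from the Monotonicity section; the tiny modifier and hyponym tables are hand-made stand-ins, since workers performed this step manually.

```python
# Toy illustration of the three narrowing strategies shown to workers:
# adding a modifier, adding a conjunct, and replacing a word with a hyponym.

MODIFIERS = {"person": "young person"}
HYPONYMS = {"person": "spectator"}

def add_modifier(phrase: str) -> str:
    return MODIFIERS.get(phrase, phrase)

def add_conjunct(phrase: str, other: str = "alien") -> str:
    return f"{phrase} and {other}"

def to_hyponym(phrase: str) -> str:
    return HYPONYMS.get(phrase, phrase)

phrase = "person"
print(add_modifier(phrase))  # -> young person
print(add_conjunct(phrase))  # -> person and alien
print(to_hyponym(phrase))    # -> spectator
```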
Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. Entry to our task was restricted to workers from native English-speaking countries. 128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).
The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.
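Since hypotheses are obtained by narrowing the polarized part of the premise, this labeling rule reduces to a two-case function, sketched below.

```python
# Sketch of the automatic gold labeling rule: a narrowed phrase in a
# downward context licenses entailment; in an upward context it does not.

def gold_label(polarity_of_rewritten_part: str) -> str:
    if polarity_of_rewritten_part == "downward":
        return "entailment"
    if polarity_of_rewritten_part == "upward":
        return "non-entailment"
    raise ValueError(f"unexpected polarity: {polarity_of_rewritten_part}")

print(gold_label("downward"))  # -> entailment
print(gold_label("upward"))    # -> non-entailment
```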
However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.
Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:
entailment: the case where the hypothesis is true under any situation that the premise describes.
non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.
unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.
Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.
Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.
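The filtering step just described can be sketched as follows; the field names are assumptions for illustration.

```python
# Sketch: keep only pairs whose automatically computed gold label matches
# the label chosen by all three annotators.

def unanimous_matches(pairs):
    kept = []
    for pair in pairs:
        labels = pair["worker_labels"]
        if len(set(labels)) == 1 and labels[0] == pair["auto_label"]:
            kept.append(pair)
    return kept

pairs = [
    {"auto_label": "entailment",
     "worker_labels": ["entailment", "entailment", "entailment"]},
    {"auto_label": "entailment",
     "worker_labels": ["entailment", "non-entailment", "entailment"]},
]
print(len(unanimous_matches(pairs)))  # -> 1
```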
To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.
(a) In the following example, child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.
$P$ : Tom is no longer a child
$H$ : Tom is no longer a kid
These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.
(b) Consider the following example.
$P$ : The moon has no atmosphere
$H$ : The moon has no atmosphere, and the gravity force is too low
The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.
In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets.
Linguistics-oriented dataset
We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.
We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.
Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis.
Statistics
We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, replacing a phrase with either a more specific or a more general one does not license entailment, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and the overlap ratios of vocabulary with previous standard NLI datasets are 95% with MultiNLI and 90% with SNLI.
We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. We defined 6 tags (see Table 4 for examples):
lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)
reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once
conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis
disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis
conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis
negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis
Baselines
To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.
Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below.
Data augmentation for analysis
To explore whether the performance of models on monotonicity reasoning depends on the training set or on the models themselves, we conducted a further analysis by augmenting the training data with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.
We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.
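One common way to build such a hypothesis-only baseline is to blank out the premise, as sketched below; the exact setup used in the experiments may differ, and the field names are assumptions.

```python
# Sketch: derive a hypothesis-only variant of a premise-hypothesis dataset,
# used to check whether labels can be predicted from hypotheses alone.

def hypothesis_only(dataset):
    return [{"premise": "", "hypothesis": ex["hypothesis"], "label": ex["label"]}
            for ex in dataset]

train = [{"premise": "Few workers joined for a French dinner",
          "hypothesis": "Few workers joined for a dinner",
          "label": "entailment"}]
print(hypothesis_only(train)[0])
# -> {'premise': '', 'hypothesis': 'Few workers joined for a dinner', 'label': 'entailment'}
```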
Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.
To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.
Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.
Previous work using HELP BIBREF11 reported that BERT trained with MultiNLI and HELP, which contains both upward and downward inferences, improved accuracy on both directions of monotonicity. MultiNLI rarely contains downward inference problems (see the Discussion section), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.
Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work.
Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice.
Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.
Regarding lexical knowledge, the data augmentation technique improved performance much more on downward inferences that do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 problems are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, though accuracy on non-lexical inference problems was better than that on lexical inference problems.
Discussion
One of our findings is that there is a type of downward inference for which every model fails to provide correct answers. One such case concerns the contrast between few and a few. Among the 394 problems for which all models provided wrong answers, 148 were downward inference problems involving the downward monotonicity operator few, as in the following example:
$P$ : Few of the books had typical or marginal readers
$H$ : Few of the books had some typical readers
We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.
The results of the crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally occurring texts, contains only 77 downward inference problems, including the following one.
$P$ : No racin' on the Range
$H$ : No horse racing is allowed on the Range
One possible reason why there are few downward inferences is that certain pragmatic factors can block people from drawing a downward inference. For instance, in the case of the inference problem below, unless the added disjunct in $H$ , i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$ .
$P$ : I saw a dog
$H$ : I saw a dog or a small cat with green eyes
Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts.
Conclusion
We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models, and found that performance on the new test set was substantially worse for all state-of-the-art NLI models. In addition, the accuracy on downward inferences was inversely proportional to the one on upward inferences.
An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning. We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way.
Acknowledgement
This work was partially supported by JST AIP- PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion. | Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific. |
82bcacad668351c0f81bd841becb2dbf115f000e | 82bcacad668351c0f81bd841becb2dbf115f000e_0 | Q: What is monotonicity reasoning?
Text: Introduction
Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .
Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples (1) and (2) below.
(1a) All [ workers $\downarrow$ ] [joined for a French dinner $\uparrow$ ]
(1b) All workers joined for a dinner
(1c) All new workers joined for a French dinner
(2a) Not all [ new workers $\uparrow$ ] joined for a dinner
(2b) Not all workers joined for a dinner
A context is upward entailing (shown by [... $\uparrow$ ]) when it allows an inference such as the one from (1a) to (1b), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\downarrow$ ]) allows an inference such as the one from (1a) to (1c), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in (2a)), as witnessed by the fact that (2a) entails (2b). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.
For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.
To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.
We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.
In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments.
Monotonicity
As an example of a monotonicity inference, consider the following example with the determiner every; here the premise $P$ entails the hypothesis $H$ .
$P$ : Every [ $_{\mathsf {NP}}$ person $\downarrow$ ] [ $_{\mathsf {VP}}$ bought a movie ticket $\uparrow$ ]
$H$ : Every young person bought a ticket
Every is downward entailing in the first argument ( $\mathsf {NP}$ ) and upward entailing in the second argument ( $\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\sqsupseteq $ young person), replacing it with its hyponym (person $\sqsupseteq $ spectator), or adding conjunction (person $\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.
There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in the following example, if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.
$P$ : When [every [ $_{\mathsf {NP}}$ young person $\uparrow$ ] [ $_{\mathsf {VP}}$ bought a ticket $\downarrow$ ]], [that shop was open]
$H$ : When [every [ $_{\mathsf {NP}}$ person] [ $_{\mathsf {VP}}$ bought a movie ticket]], [that shop was open]
Thus, the polarity ( $\uparrow$ and $\downarrow$ ), which determines whether the replacement with a more general (specific) phrase licenses entailment, needs to be computed from the interaction of monotonicity properties and syntactic structures; the polarity of each constituent is calculated based on the monotonicity operators of functional expressions (e.g., every, when) and their function-term relations.
Human-oriented dataset
To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.
For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.
Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.
As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.
We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase them in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.
We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.
Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. Entry to our task was restricted to workers from native English-speaking countries. 128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).
The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.
However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.
Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:
entailment: the case where the hypothesis is true under any situation that the premise describes.
non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.
unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.
Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.
Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.
To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.
(a) In the following example, child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.
$P$ : Tom is no longer a child
$H$ : Tom is no longer a kid
These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.
(b) Consider the following example.
$P$ : The moon has no atmosphere
$H$ : The moon has no atmosphere, and the gravity force is too low
The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.
In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets.
Linguistics-oriented dataset
We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.
We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.
Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis.
Statistics
We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, replacing a phrase with either a more specific or a more general one does not license entailment, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and the overlap ratios of vocabulary with previous standard NLI datasets are 95% with MultiNLI and 90% with SNLI.
We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. We defined 6 tags (see Table 4 for examples):
lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)
reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once
conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis
disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis
conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis
negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis
Baselines
To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.
Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below.
Data augmentation for analysis
To explore whether the performance of models on monotonicity reasoning depends on the training set or on the models themselves, we conducted a further analysis by augmenting the training data with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.
We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.
Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.
To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.
Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.
Previous work using HELP BIBREF11 reported that BERT trained with MultiNLI and HELP, which contains both upward and downward inferences, improved accuracy in both directions of monotonicity. MultiNLI contains very few downward inferences (see Section "Discussion"), and its size is large enough to be immune to the side-effects of the downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.
Table 8 shows the evaluation results by genre. These results show that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even when HELP is added to the training set. As shown in Figure 2, the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing, which also indicates the difficulty of problems from linguistics publications. The crowdsourced portion contains very few non-monotone problems, so accuracy on them is 100%. Adding non-monotone problems to our test set is left for future work.
Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction improved in only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by the addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP, because all such examples are upward inferences embedded twice in a downward environment.
Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.
Regarding lexical knowledge, the data augmentation technique mainly improved performance on downward inferences that do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, even though overall accuracy on non-lexical inference problems was better than that on lexical inference problems.
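The per-genre and per-phenomenon breakdowns above amount to grouping accuracy by the annotation tags attached to each problem; a generic helper of the following form is enough, though the field names ("gold_label", "monotonicity", "genre") are assumptions about how the annotations are stored rather than MED's actual column names.

```python
from collections import defaultdict

def accuracy_by_tag(examples, predictions, tag_key):
    """Accuracy broken down by an annotation tag such as monotonicity
    direction, genre, or linguistic phenomenon."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex, pred in zip(examples, predictions):
        tag = ex[tag_key]
        total[tag] += 1
        correct[tag] += int(pred == ex["gold_label"])
    return {tag: correct[tag] / total[tag] for tag in total}

# e.g. accuracy_by_tag(med_examples, bert_preds, "monotonicity")
#      accuracy_by_tag(med_examples, bert_preds, "genre")
```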
Discussion
One of our findings is that there is a type of downward inference for which every model fails to provide correct answers. One such example concerns the contrast between few and a few. Among the 394 problems for which all models provided wrong answers, 148 are downward inference problems involving the downward monotone operator few, such as in the following example:
$P$: Few of the books had typical or marginal readers
$H$: Few of the books had some typical readers

We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.
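The two transformations can be scripted with simple string rewrites; the sketch below is naive (it assumes MED-style sentence patterns like the example above), and the grammatical repair used when removing few is one plausible choice rather than the exact procedure we used.

```python
import re

def few_to_a_few(sentence):
    """Probe (i): swap the downward operator `few` for the upward `a few`."""
    return re.sub(r"(?<![Aa] )\b([Ff])ew\b",
                  lambda m: "A few" if m.group(1) == "F" else "a few",
                  sentence)

def drop_few(sentence):
    """Probe (ii): remove `few`; here `Few of the X ...` becomes `The X ...`,
    one plausible way to keep the sentence grammatical."""
    return re.sub(r"\b[Ff]ew of the\b", "The", sentence)

s = "Few of the books had typical or marginal readers"
print(few_to_a_few(s))  # A few of the books had typical or marginal readers
print(drop_few(s))      # The books had typical or marginal readers
```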
The results of the crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally occurring texts, contains only 77 downward inference problems, including the following one.
$P$: No racin' on the Range
$H$: No horse racing is allowed on the Range
One possible reason why there are few downward inferences is that certain pragmatic factors can block people from drawing a downward inference. For instance, in the case of the inference problem below, unless the added disjunct in $H$, i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$.
$P$: I saw a dog
$H$: I saw a dog or a small cat with green eyes
Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts.
Conclusion
We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models and found that all of them performed substantially worse on the new test set. In addition, accuracy on downward inferences was inversely related to accuracy on upward inferences.
An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations in their ability to generalize in monotonicity reasoning. We hope that MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way.
Acknowledgement
This work was partially supported by JST AIP- PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion. | a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures |
5937ebbf04f62d41b48cbc6b5c38fc309e5c2328 | 5937ebbf04f62d41b48cbc6b5c38fc309e5c2328_0 | Q: What other relations were found in the datasets?
Text: Introduction
With the growing demand for human-computer/robot interaction systems, detecting the emotional state of the user can greatly help a conversational agent respond at an appropriate emotional level. Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation, emotion-based text-to-speech, personalisation, etc. Human emotional states can be expressed verbally and non-verbally BIBREF0, BIBREF1; however, when building an interactive dialogue system, the interface also needs dialogue acts. A typical dialogue system contains a language understanding module which needs to determine the meaning of, and intention behind, the human input utterances BIBREF2, BIBREF3. Also, in discourse or conversational analysis, dialogue acts are the main linguistic features to consider BIBREF4. A dialogue act provides an intention and performative function in an utterance of the dialogue. For example, it can infer a user's intention by distinguishing Question, Answer, Request, Agree/Reject, etc. and performative functions such as Acknowledgement, Conversational-opening or -closing, Thanking, etc. The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5.
Research in emotion recognition is growing very rapidly, and many datasets are available, covering text, speech, vision, and multimodal emotion data. Recognizing emotion expressions is a challenging task, and hence multimodality is crucial BIBREF0. However, few conversational multi-modal emotion recognition datasets are available, for example, IEMOCAP BIBREF6, SEMAINE BIBREF7, and MELD BIBREF8. They are multi-modal dyadic conversational datasets containing audio-visual recordings and conversational transcripts. Every utterance in these datasets is labeled with an emotion label.
In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble the output labels of five models by checking majority occurrences (most of the model labels are the same) and by ranking the confidence values of the models. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2 shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of the annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them: certain dialogue acts of the utterances show significant and useful associations with respective emotional states. For example, the Accept/Agree dialogue act often occurs with the Joy emotion, while Reject occurs with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and annotated datasets are being made available at the SECURE EU Project website.
Annotation of Emotional Dialogue Acts ::: Data for Conversational Emotion Analysis
There are two emotion taxonomies: (1) discrete emotion categories (DEC) and (2) a fine-grained dimensional basis of emotion states (DBE). The DECs are Joy, Sadness, Fear, Surprise, Disgust, Anger and Neutral, as identified by Ekman et al. ekman1987universalemos. The DBE of emotion is usually elicited from two or three dimensions BIBREF1, BIBREF11, BIBREF12. A two-dimensional model is commonly used with Valence and Arousal (also called activation), and in the three-dimensional model the third dimension is Dominance. IEMOCAP is annotated with all DECs and two additional emotion classes, Frustration and Excited. IEMOCAP is also annotated with the three DBE dimensions, that is, Valence, Arousal and Dominance BIBREF6. MELD BIBREF8, which is an evolved version of the Emotionlines dataset developed by BIBREF13, is annotated with exactly 7 DECs and sentiments (positive, negative and neutral).
Annotation of Emotional Dialogue Acts ::: Dialogue Act Tagset and SwDA Corpus
There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance not only as presenting information but as performing an action. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes for annotating conversational data, and most of them follow discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. With the growing development of dialogue systems and discourse analysis, a standard taxonomy, the Dialogue Act Markup in Several Layers (DAMSL) tag set, was introduced in recent decades. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17.
The DAMSL annotation includes not only utterance-level but also segmented-utterance labelling. However, in the emotion datasets the utterances are not segmented: as we can see in Figure FIGREF2, the first and fourth utterances are not segmented into two separate parts. The fourth utterance could be segmented to receive two dialogue act labels, for example, a statement (sd) and a question (qy). Segmentation provides very fine-grained DA classes and follows the concept of discourse compositionality. DAMSL distinguishes wh-question (qw), yes-no question (qy), open-ended (qo), and or-question (qr) classes, not just because these questions are syntactically distinct, but also because they have different forward functions BIBREF18. For example, a yes-no question is more likely to get a “yes" answer than a wh-question (qw). This also gives the intuition that answers follow the syntactic formulation of the question, providing context. For example, qy is used for a question that, from a discourse perspective, expects a Yes (ny) or No (nn) answer.
We have investigated the annotation method and trained our neural models with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10. The SwDA Corpus is annotated with the DAMSL tag set and has been used for reporting and benchmarking state-of-the-art results in dialogue act recognition tasks BIBREF19, BIBREF20, BIBREF21, which makes it ideal for our use case. The Switchboard DAMSL Coders Manual can be consulted to learn more about the dialogue act labels.
Annotation of Emotional Dialogue Acts ::: Neural Model Annotators
We adopted the neural architectures of Bothe et al. bothe2018discourse, which come in two variants: a non-context model (classifying at the utterance level) and a context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators, as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available for practicing automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (while in this work we use pre-trained ELMo embeddings, as they perform better but are computationally expensive and too large to be hosted in the online tool). The annotators are:
Utt-level 1 Dialogue Act Neural Annotator (DANA) is an utterance-level classifier that uses word embeddings ($w$) as input to an RNN layer with an attention mechanism, and computes the probability of dialogue acts ($da$) using the softmax function (see the dotted utt-l1 line in Figure FIGREF10). This model achieved 75.13% accuracy on the SwDA corpus test set.
Context 1 DANA is a context model that uses 2 preceding utterances while recognizing the dialogue act of the current utterance (see the context model with the con1 line in Figure FIGREF10). It uses a hierarchical RNN: the first RNN layer encodes each utterance from word embeddings ($w$), and the second RNN layer is provided with three utterance representations ($u$) (the current and two preceding ones) composed from the first layer, followed by the attention mechanism ($a$), where $\sum _{n=0}^{2} a_{t-n} = 1$. Finally, the softmax function is used to compute the probability distribution. This model achieved 77.55% accuracy on the SwDA corpus test set.
Utt-level 2 DANA is another utterance-level classifier which takes an average of the word embeddings in the input utterance and uses a feedforward neural network hidden layer (see utt-l2 line in Figure FIGREF10, where $mean$ passed to $softmax$ directly). Similar to the previous model, it computes the probability of dialogue acts using the softmax function. This model achieved 72.59% accuracy on the test set of the SwDA corpus.
Context 2 DANA is another context model that uses three utterances similar to the Context 1 DANA model, but the utterances are composed as the mean of the word embeddings over each utterance, similar to the Utt-level 2 model ($mean$ passed to context model in Figure FIGREF10 with con2 line). Hence, the Context 2 DANA model is composed of one RNN layer with three input vectors, finally topped with the softmax function for computing the probability distribution of the dialogue acts. This model achieved 75.97% accuracy on the test set of the SwDA corpus.
Context 3 DANA is a context model that uses three utterances similar to the previous models, but the utterance representations combine both features from the Context 1 and Context 2 models (con1 and con2 together in Figure FIGREF10). Hence, the Context 3 DANA model combines features of almost all the previous four models to provide the recognition of the dialogue acts. This model achieves 75.91% accuracy on the SwDA corpus test set.
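To make the context models above more concrete, the following is a compact PyTorch sketch of a Context 1-style annotator: an utterance-level RNN over pre-computed 1024-dimensional word embeddings, a second RNN over the current and two preceding utterance vectors, a softmax attention over the three utterance states, and a softmax output layer. The GRU cells, hidden size, number of classes, and the exact attention form are assumptions of this sketch; the original models follow Bothe et al. and may differ in detail.

```python
import torch
import torch.nn as nn

class ContextDANA(nn.Module):
    def __init__(self, emb_dim=1024, hid_dim=128, n_classes=42):
        super().__init__()
        self.utt_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)  # word-level encoder
        self.ctx_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)  # utterance-level encoder
        self.attn = nn.Linear(hid_dim, 1)
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, utterances):
        # utterances: (batch, 3, seq_len, emb_dim) -- two preceding + current utterance
        b, n_utt, seq_len, emb = utterances.shape
        words = utterances.view(b * n_utt, seq_len, emb)
        _, h = self.utt_rnn(words)                  # (1, b*n_utt, hid)
        utt_vecs = h[-1].view(b, n_utt, -1)         # (b, 3, hid)
        ctx, _ = self.ctx_rnn(utt_vecs)             # (b, 3, hid)
        a = torch.softmax(self.attn(ctx), dim=1)    # attention weights over 3 utterances, sum to 1
        pooled = (a * ctx).sum(dim=1)               # (b, hid)
        return torch.log_softmax(self.out(pooled), dim=-1)

# e.g. ContextDANA()(torch.randn(2, 3, 20, 1024)) -> (2, 42) log-probabilities
```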
Annotation of Emotional Dialogue Acts ::: Ensemble of Neural Annotators
First preference is given to labels on which all the neural annotators agree perfectly. In Table TABREF11, we can see that both datasets have about 40% of such exactly matching labels over all models (AM). Next, priority is given to the context-based models: we check whether the label from all three context models matches perfectly or, when only two out of three context models agree, whether that label is also produced by at least one of the non-context models; in these cases we rely on the agreement of at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When none of the context models produce the same result, we rank the candidate labels by their confidence values, produced as a probability distribution using the $softmax$ function, sorted in descending order. We then check whether the first three labels (the case where one context model and both non-context models produce the same label) or at least two labels match, and if so, we pick that label. This accounts for about 3% of labels in IEMOCAP and 5% in MELD (BM).
Finally, when none of the above conditions is fulfilled, we leave the label as an unknown category. This unknown category of dialogue act is labeled `xx' in the final annotations; it accounts for about 7% of utterances in IEMOCAP and 11% in MELD (NM). The statistics of the EDAs are reported in Table TABREF13 for both datasets. The total number of utterances in MELD includes the training, validation and test sets.
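The selection rules above can be summarised as a small decision procedure. The sketch below follows the description, but the exact tie-breaking and the confidence bookkeeping are simplified assumptions.

```python
from collections import Counter

def ensemble_label(context_preds, noncontext_preds, confidences):
    """Sketch of the ensemble rules: `context_preds` are the three context-model
    labels, `noncontext_preds` the two utterance-level labels, and `confidences`
    maps each candidate label to its best softmax score across models."""
    all_preds = context_preds + noncontext_preds

    # 1. All five annotators agree (AM).
    if len(set(all_preds)) == 1:
        return all_preds[0]

    # 2. Context models agree: all three, or two plus one non-context model (CM).
    label, count = Counter(context_preds).most_common(1)[0]
    if count == 3 or (count == 2 and label in noncontext_preds):
        return label

    # 3. Fall back to confidence ranking: accept the best-ranked label that at
    #    least two models voted for (BM).
    for label in sorted(confidences, key=confidences.get, reverse=True):
        if all_preds.count(label) >= 2:
            return label

    # 4. Otherwise leave the utterance with the unknown category (NM).
    return "xx"
```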
Annotation of Emotional Dialogue Acts ::: Reliability of Neural Annotators
The pool of neural annotators provides a fair range of annotations, and we checked their reliability with the following metrics BIBREF23. Krippendorff's Alpha ($\alpha $) is a reliability coefficient developed to measure the agreement among observers, annotators, and raters, and is often used in emotion annotation BIBREF24. We apply it to the five neural annotators at the nominal level of measurement of dialogue act categories. $\alpha $ is computed as follows:

$\alpha = 1 - \frac{D_{o}}{D_{e}},$

where $D_{o}$ is the observed disagreement and $D_{e}$ is the disagreement that is expected by chance. $\alpha =1$ means all annotators produce the same label, while $\alpha =0$ means the agreement is no better than chance. As we can see in Table TABREF20, both datasets, IEMOCAP and MELD, produce significant inter-neural annotator agreement, 0.553 and 0.494, respectively.
A very popular inter-annotator metric is Fleiss' Kappa score, also reported in Table TABREF20, which determines consistency in the ratings. The kappa $k$ can be defined as,
$k = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e},$

where the denominator $1 -\bar{P}_e$ gives the degree of agreement that is attainable above chance, and the numerator $\bar{P} -\bar{P}_e$ gives the degree of agreement actually achieved above chance. Hence, $k = 1$ if the raters agree completely, and $k = 0$ when they reach no agreement beyond chance. We obtained 0.556 and 0.502 for IEMOCAP and MELD, respectively, with our five neural annotators. This indicates that the annotators are labeling the dialogue acts reliably and consistently. We also report the Spearman's correlation between the context-based models (Context 1 and Context 2), and it shows a strong correlation between them (Table TABREF20). While using the labels, we checked the absolute match between all context-based models, and hence their strong correlation indicates their robustness.
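Both coefficients can be computed with off-the-shelf libraries; the sketch below assumes the krippendorff and statsmodels packages and uses toy integer-coded labels in place of the real annotation matrices.

```python
import numpy as np
import krippendorff
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = annotators, columns = utterances (toy integer-coded dialogue acts)
labels = np.array([[0, 1, 2, 1],
                   [0, 1, 2, 1],
                   [0, 1, 1, 1],
                   [0, 2, 2, 1],
                   [0, 1, 2, 0]])

alpha = krippendorff.alpha(reliability_data=labels,
                           level_of_measurement="nominal")

# Fleiss' kappa expects an utterances x categories count table.
table, _ = aggregate_raters(labels.T)
kappa = fleiss_kappa(table)
print(alpha, kappa)
```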
EDAs Analysis
We can see emotional dialogue act co-occurrences with respect to emotion labels in Figure FIGREF12 for both datasets. There are sets of three bars per dialogue act in the figure, the first and second bar represent emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for MELD sentiment (MS) labels. MELD emotion and sentiment statistics are interesting as they are strongly correlated to each other. The bars contain the normalized number of utterances for emotion labels with respect to the total number of utterances for that particular dialogue act category. The statements without-opinion (sd) and with-opinion (sv) contain utterances with almost all emotions. Many neutral utterances are spanning over all the dialogue acts.
Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in the case of IEMOCAP), though some utterances carry `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration', although many occur with the `Happy' emotion in the case of the MELD dataset. Acknowledgements (b) are mostly associated with positive or neutral emotions; however, Appreciation (ba) and Rhetorical (bh) backchannels occur more often with `Surprise', `Joy' and/or `Excited' (in the case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with the emotions `Surprise', `Excited', `Frustration' or `Disgust' (in the case of MELD), and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny). Forward-functions such as Apology (fa) mostly occur with `Sadness', whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) usually occur with `Joy' or `Excited'.
We also noticed that both datasets exhibit a similar relation between dialogue acts and emotions. It is important to note that the dialogue act annotation is based on the given transcripts; however, the emotional expressions are better perceived with audio or video BIBREF6. We report some examples where we mark the utterances with an undetermined label (xx) in the last row of Table TABREF21. They are skipped from the final annotation because they do not fulfill the conditions explained in Section SECREF14. It is also interesting to see the dialogue acts of the previous utterances (P-DA) of those skipped utterances, and the sequence of the labels can be followed from Figure FIGREF6 (utt-l1, utt-l2, con1, con2, con3).
In the first example, the previous utterance was b, and three DANA models labeled the current utterance as b, but it is skipped because the confidence values were not sufficient to adopt it as a final label. The second utterance can be challenging even for humans to perceive as any particular dialogue act. However, the third and fourth utterances follow a yes-no question (qy), and hence we can see, in the third example, that the context models tried their best to at least perceive it as an answer (ng, ny, nn). On the last utterance, “I'm so sorry!", all five annotators disagreed completely. Similar apology phrases are mostly found with the `Sadness' emotion label, and the correct dialogue act is Apology (fa). However, they are placed either in the sd or in the ba dialogue act category. We believe that with a human annotator's help those labels can be corrected with very limited effort.
Conclusion and Future Work
In this work, we presented a method to extend conversational multi-modal emotion datasets with dialogue act labels. We successfully demonstrated this on two well-established emotion datasets, IEMOCAP and MELD, which we labeled with dialogue acts and made publicly available for further study and research. As a first insight, we found that many of the dialogue acts and emotion labels follow certain relations. These relations can be useful for learning about emotional behaviours with dialogue acts, for building a natural dialogue system, and for deeper conversational analysis. A conversational agent might benefit from considering both the emotional state and the dialogue act of an utterance when generating an appropriate response.
In future work, we foresee a human in the loop for the annotation process along with the pool of automated neural annotators. Robust annotations can be achieved with very little human effort and supervision, for example, by observing and correcting the final labels produced by the ensemble of neural annotators. The human annotator might also help to achieve segmented-utterance labelling of the dialogue acts. We also plan to use these datasets for conversational analysis to infer interactive behaviours of the emotional states with respect to dialogue acts. In our recent work, where we used dialogue acts to build a dialogue system for a social robot, we found this study and dataset very helpful. For example, we can extend our robotic conversational system to consider emotion as an added linguistic feature to produce natural interaction.
Acknowledgements
We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No 642667 (SECURE). | Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration', Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset, Acknowledgements (b) are mostly with positive or neutral, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP), Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral, No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny)., Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited' |
dcd6f18922ac5c00c22cef33c53ff5ae08b42298 | dcd6f18922ac5c00c22cef33c53ff5ae08b42298_0 | Q: How does the ensemble annotator extract the final label?
We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No 642667 (SECURE). | First preference is given to the labels that are perfectly matching in all the neural annotators., In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models., When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. , Finally, when none the above conditions are fulfilled, we leave out the label with an unknown category. |
2965c86467d12b79abc16e1457d848cb6ca88973 | 2965c86467d12b79abc16e1457d848cb6ca88973_0 | Q: How were dialogue act labels defined?
Text: Introduction
With the growing demand for human-computer/robot interaction systems, detecting the emotional state of the user can heavily benefit a conversational agent to respond at an appropriate emotional level. Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation, emotion-based text-to-speech, personalisation, etc. Human emotional states can be expressed verbally and non-verbally BIBREF0, BIBREF1, however, while building an interactive dialogue system, the interface needs dialogue acts. A typical dialogue system consists of a language understanding module which requires to determine the meaning of and intention in the human input utterances BIBREF2, BIBREF3. Also, in discourse or conversational analysis, dialogue acts are the main linguistic features to consider BIBREF4. A dialogue act provides an intention and performative function in an utterance of the dialogue. For example, it can infer a user's intention by distinguishing Question, Answer, Request, Agree/Reject, etc. and performative functions such as Acknowledgement, Conversational-opening or -closing, Thanking, etc. The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5.
The research in emotion recognition is growing very rapidly and many datasets are available, such as text-based, speech- or vision-level, and multimodal emotion data. Emotion expression recognition is a challenging task and hence multimodality is crucial BIBREF0. However, few conversational multi-modal emotion recognition datasets are available, for example, IEMOCAP BIBREF6, SEMAINE BIBREF7, MELD BIBREF8. They are multi-modal dyadic conversational datasets containing audio-visual and conversational transcripts. Every utterance in these datasets is labeled with an emotion label.
In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble five model output labels by checking majority occurrences (most of the model labels are the same) and ranking confidence values of the models. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2, shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them; certain dialogue acts of the utterances show significant and useful association with respective emotional states. For example, Accept/Agree dialogue act often occurs with the Joy emotion while Reject with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and annotated datasets are being made available at the SECURE EU Project website.
Annotation of Emotional Dialogue Acts ::: Data for Conversational Emotion Analysis
There are two emotion taxonomies: (1) discrete emotion categories (DEC) and (2) a fine-grained dimensional basis of emotion states (DBE). The DECs are Joy, Sadness, Fear, Surprise, Disgust, Anger and Neutral, as identified by Ekman et al. ekman1987universalemos. The DBE of emotion is usually elicited from two or three dimensions BIBREF1, BIBREF11, BIBREF12. A two-dimensional model is commonly used with Valence and Arousal (also called activation), and in the three-dimensional model the third dimension is Dominance. IEMOCAP is annotated with all DECs and two additional emotion classes, Frustration and Excited. IEMOCAP is also annotated with the three DBE dimensions, that is, Valence, Arousal and Dominance BIBREF6. MELD BIBREF8, which is an evolved version of the Emotionlines dataset developed by BIBREF13, is annotated with exactly 7 DECs and sentiments (positive, negative and neutral).
Annotation of Emotional Dialogue Acts ::: Dialogue Act Tagset and SwDA Corpus
There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance not only as presenting information but as performing an action. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes for annotating conversational data, and most of them follow discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. With the growing development of dialogue systems and discourse analysis, a standard taxonomy, the Dialogue Act Markup in Several Layers (DAMSL) tag set, was introduced in recent decades. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17.
The DAMSL annotation includes not only utterance-level but also segmented-utterance labelling. However, in the emotion datasets the utterances are not segmented: as we can see in Figure FIGREF2, the first and fourth utterances are not segmented into two separate parts. The fourth utterance could be segmented to receive two dialogue act labels, for example, a statement (sd) and a question (qy). Segmentation provides very fine-grained DA classes and follows the concept of discourse compositionality. DAMSL distinguishes wh-question (qw), yes-no question (qy), open-ended (qo), and or-question (qr) classes, not just because these questions are syntactically distinct, but also because they have different forward functions BIBREF18. For example, a yes-no question is more likely to get a “yes" answer than a wh-question (qw). This also gives the intuition that answers follow the syntactic formulation of the question, providing context. For example, qy is used for a question that, from a discourse perspective, expects a Yes (ny) or No (nn) answer.
We have investigated the annotation method and trained our neural models with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10. The SwDA Corpus is annotated with the DAMSL tag set and has been used for reporting and benchmarking state-of-the-art results in dialogue act recognition tasks BIBREF19, BIBREF20, BIBREF21, which makes it ideal for our use case. The Switchboard DAMSL Coders Manual can be consulted to learn more about the dialogue act labels.
Annotation of Emotional Dialogue Acts ::: Neural Model Annotators
We adopted the neural architectures based on Bothe et al. bothe2018discourse where two variants are: non-context model (classifying at utterance level) and context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available to practice automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (while, in this work we use pre-trained ELMo embeddings, as they are better performant but computationally and size-wise expensive to be hosted in the online tool). The annotators are:
Utt-level 1 Dialogue Act Neural Annotator (DANA) is an utterance-level classifier that uses word embeddings ($w$) as an input to an RNN layer, attention mechanism and computes the probability of dialogue acts ($da$) using the softmax function (see in Figure FIGREF10, dotted line utt-l1). This model achieved 75.13% accuracy on the SwDA corpus test set.
Context 1 DANA is a context model that uses 2 preceding utterances while recognizing the dialogue act of the current utterance (see context model with con1 line in Figure FIGREF10). It uses a hierarchical RNN with the first RNN layer to encode the utterance from word embeddings ($w$) and the second RNN layer is provided with three utterances ($u$) (current and two preceding) composed from the first layer followed by the attention mechanism ($a$), where $\sum _{n=0}^{n} a_{t-n} = 1$. Finally, the softmax function is used to compute the probability distribution. This model achieved 77.55% accuracy on the SwDA corpus test set.
Utt-level 2 DANA is another utterance-level classifier which takes an average of the word embeddings in the input utterance and uses a feedforward neural network hidden layer (see utt-l2 line in Figure FIGREF10, where $mean$ passed to $softmax$ directly). Similar to the previous model, it computes the probability of dialogue acts using the softmax function. This model achieved 72.59% accuracy on the test set of the SwDA corpus.
Context 2 DANA is another context model that uses three utterances similar to the Context 1 DANA model, but the utterances are composed as the mean of the word embeddings over each utterance, similar to the Utt-level 2 model ($mean$ passed to context model in Figure FIGREF10 with con2 line). Hence, the Context 2 DANA model is composed of one RNN layer with three input vectors, finally topped with the softmax function for computing the probability distribution of the dialogue acts. This model achieved 75.97% accuracy on the test set of the SwDA corpus.
Context 3 DANA is a context model that uses three utterances similar to the previous models, but the utterance representations combine both features from the Context 1 and Context 2 models (con1 and con2 together in Figure FIGREF10). Hence, the Context 3 DANA model combines features of almost all the previous four models to provide the recognition of the dialogue acts. This model achieves 75.91% accuracy on the SwDA corpus test set.
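To make the annotator architectures concrete, the following is a minimal sketch of a context-based DANA model (a hierarchical RNN with attention over the current and two preceding utterances), assuming PyTorch. The hidden size, the number of dialogue act classes, and the exact attention formulation are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of a context-based DANA model (hierarchical RNN + attention).
# Assumptions: PyTorch, GRU cells; hidden size 128 and 42 dialogue-act classes
# are illustrative choices, not the authors' exact configuration.
import torch
import torch.nn as nn

class ContextDANA(nn.Module):
    def __init__(self, emb_dim=1024, hidden=128, n_classes=42):
        super().__init__()
        self.utt_rnn = nn.GRU(emb_dim, hidden, batch_first=True)   # words -> utterance vector
        self.ctx_rnn = nn.GRU(hidden, hidden, batch_first=True)    # 3 utterance vectors -> context states
        self.att = nn.Linear(hidden, 1)                            # scalar attention score per utterance
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, utterances):
        # utterances: (batch, 3, max_words, emb_dim) -- current + two preceding utterances
        b, n_utt, n_words, emb = utterances.shape
        words = utterances.view(b * n_utt, n_words, emb)
        _, u = self.utt_rnn(words)                                 # final hidden state per utterance
        u = u.squeeze(0).view(b, n_utt, -1)                        # (batch, 3, hidden)
        ctx, _ = self.ctx_rnn(u)                                   # (batch, 3, hidden)
        a = torch.softmax(self.att(ctx).squeeze(-1), dim=1)        # attention weights, sum to 1
        pooled = (a.unsqueeze(-1) * ctx).sum(dim=1)                # weighted sum over utterances
        return torch.log_softmax(self.out(pooled), dim=-1)         # probability of dialogue acts
```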
Annotation of Emotional Dialogue Acts ::: Ensemble of Neural Annotators
First preference is given to labels on which all neural annotators agree perfectly. In Table TABREF11, we can see that both datasets have about 40% of labels matching exactly across all models (AM). Next, priority is given to the context-based models: we check whether all three context models produce the same label. If only two of the three context models agree, we additionally check whether that label is also produced by at least one of the non-context models, and if so we accept the label supported by those at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When none of the context models agree, we rank the labels by their confidence values, produced as a probability distribution by the $softmax$ function, and sort them in descending order. We then check whether the top three labels (the case where one context model and both non-context models produce the same label) or at least the top two labels match, and if so we pick that label. Such cases account for about 3% of labels in IEMOCAP and 5% in MELD (BM).
Finally, when none of the above conditions is fulfilled, we leave the label as an unknown category. This unknown dialogue act category is labeled `xx' in the final annotations, and such labels make up about 7% of IEMOCAP and 11% of MELD (NM). The statistics of the EDAs are reported in Table TABREF13 for both datasets. The total number of utterances in MELD includes the training, validation and test sets.
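The ensemble rule above can be summarised in a short sketch. This is our illustrative reading of the procedure, assuming each annotator returns a (label, confidence) pair; the function and variable names are not taken from any released code.

```python
# Illustrative sketch of the ensemble rule for one utterance.
# Each annotator output is assumed to be a (label, confidence) pair.
from collections import Counter

def ensemble_label(utt_l1, utt_l2, con1, con2, con3):
    context = [con1, con2, con3]
    non_context = [utt_l1, utt_l2]
    all_models = context + non_context

    # 1) All five annotators agree (AM).
    if len({lab for lab, _ in all_models}) == 1:
        return all_models[0][0]

    # 2) All three context models agree, or two of them agree and at least
    #    one non-context model supports the same label (CM).
    ctx_counts = Counter(lab for lab, _ in context)
    lab, count = ctx_counts.most_common(1)[0]
    if count == 3:
        return lab
    if count == 2 and any(l == lab for l, _ in non_context):
        return lab

    # 3) Otherwise rank all labels by softmax confidence (descending) and
    #    accept the top label if at least the top two ranked models agree (BM).
    ranked = sorted(all_models, key=lambda x: x[1], reverse=True)
    if ranked[0][0] == ranked[1][0]:
        return ranked[0][0]

    # 4) No agreement: mark as unknown (NM).
    return "xx"
```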
Annotation of Emotional Dialogue Acts ::: Reliability of Neural Annotators
The pool of neural annotators provides a fair range of annotations, and we checked their reliability with the following metrics BIBREF23. Krippendorff's Alpha ($\alpha $) is a reliability coefficient developed to measure the agreement among observers, annotators, and raters, and is often used in emotion annotation BIBREF24. We apply it to the five neural annotators at the nominal level of measurement of dialogue act categories. $\alpha $ is computed as $\alpha = 1 - \frac{D_{o}}{D_{e}}$,
where $D_{o}$ is the observed disagreement and $D_{e}$ is the disagreement that is expected by chance. $\alpha =1$ means all annotators produce the same label, while $\alpha =0$ would mean none agreed on any label. As we can see in Table TABREF20, both datasets IEMOCAP and MELD produce significant inter-neural annotator agreement, 0.553 and 0.494, respectively.
A very popular inter-annotator metric is Fleiss' Kappa score, also reported in Table TABREF20, which determines the consistency of the ratings. The kappa $k$ can be defined as $k = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$,
where the denominator $1 -\bar{P}_e$ gives the degree of agreement attainable above chance, and the numerator $\bar{P} -\bar{P}_e$ gives the degree of agreement actually achieved above chance. Hence, $k = 1$ if the raters agree completely, and $k = 0$ when they reach no agreement at all. We obtained 0.556 and 0.502 for IEMOCAP and MELD respectively with our five neural annotators. This indicates that the annotators label the dialogue acts reliably and consistently. We also report Spearman's correlation between the context-based models (Context1 and Context2), which shows a strong correlation between them (Table TABREF20). Since the final labels rely on an absolute match between all context-based models, their strong correlation indicates the robustness of this choice.
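As a worked illustration, Fleiss' kappa can be computed directly from an utterances-by-categories count matrix in which each row sums to the number of annotators (five here). The sketch below is self-contained; the toy count matrix is invented purely for illustration, and in practice Krippendorff's $\alpha$ is usually computed with a dedicated library.

```python
# Sketch: Fleiss' kappa from an (utterances x categories) count matrix,
# where each row sums to the number of annotators.
# The toy `counts` matrix below is invented purely for illustration.
import numpy as np

def fleiss_kappa(counts):
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item agreement P_i and its mean P-bar.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement P-bar_e from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

counts = [[5, 0, 0],   # all 5 annotators chose category 0
          [3, 2, 0],
          [2, 2, 1],
          [0, 5, 0]]
print(round(fleiss_kappa(counts), 3))
```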
EDAs Analysis
Figure FIGREF12 shows the emotional dialogue act co-occurrences with respect to emotion labels for both datasets. There is a set of three bars per dialogue act in the figure: the first and second bars represent the emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for the MELD sentiment (MS) labels. The MELD emotion and sentiment statistics are interesting as they are strongly correlated with each other. The bars contain the number of utterances for each emotion label, normalized with respect to the total number of utterances for that particular dialogue act category. The statements without opinion (sd) and with opinion (sv) contain utterances with almost all emotions. Many neutral utterances span all the dialogue acts.
Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in the case of IEMOCAP); however, some utterances carry `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration', although many carry the `Happy' emotion in the case of the MELD dataset. Acknowledgements (b) are mostly positive or neutral; however, Appreciation (ba) and Rhetorical (bh) backchannels occur more often with `Surprise', `Joy' and/or `Excited' (in the case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with the emotions `Surprise', `Excited', `Frustration' or `Disgust' (in the case of MELD), and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' compared to yes-answers (ny). Forward-functions such as Apology (fa) mostly occur with `Sadness', whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) usually occur with `Joy' or `Excited'.
We also noticed that both datasets exhibit a similar relation between dialogue acts and emotions. It is important to note that the dialogue act annotation is based on the given transcripts; however, the emotional expressions are better perceived with audio or video BIBREF6. In the last row of Table TABREF21, we report some examples of utterances marked with the unknown label (xx). They are excluded from the final annotation because they do not fulfil the conditions explained in Section SECREF14. It is also interesting to see the dialogue acts of the previous utterances (P-DA) of those skipped utterances; the sequence of the labels can be followed from Figure FIGREF6 (utt-l1, utt-l2, con1, con2, con3).
In the first example, the previous utterance was b, and three DANA models labeled the current utterance as b, but it is skipped because the confidence values were not sufficient to accept it as a final label. The second utterance can be challenging even for humans to assign to any of the dialogue acts. However, the third and fourth utterances follow a yes-no question (qy), and hence we can see in the third example that the context models tried their best to at least perceive it as an answer (ng, ny, nn). On the last utterance, “I'm so sorry!", all five annotators disagreed completely. Similar apology phrases are mostly found with the `Sadness' emotion label, and the correct dialogue act is Apology (fa); however, they are placed either in the sd or in the ba dialogue act category. We believe that with a human annotator's help those labels can be corrected with very limited effort.
Conclusion and Future Work
In this work, we presented a method to extend conversational multi-modal emotion datasets with dialogue act labels. We successfully show this on two well-established emotion datasets, IEMOCAP and MELD, which we labeled with dialogue acts and made publicly available for further study and research. As a first insight, we found that many of the dialogue acts and emotion labels follow certain relations. These relations can be useful for learning about emotional behaviour together with dialogue acts, for building a natural dialogue system, and for deeper conversational analysis. A conversational agent might benefit from considering both the emotional states and the dialogue acts of utterances when generating an appropriate response.
In future work, we foresee a human in the loop for the annotation process, alongside the pool of automated neural annotators. Robust annotations can then be achieved with very little human effort and supervision, for example by observing and correcting the final ensemble labels produced by the neural annotators. The human annotator might also help to achieve segmented-utterance labelling of the dialogue acts. We also plan to use these datasets for conversational analysis, to infer interactive behaviours of the emotional states with respect to dialogue acts. In our recent work, where we used dialogue acts to build a dialogue system for a social robot, we found this study and dataset very helpful. For example, we can extend our robotic conversational system to consider emotion as an additional linguistic feature to produce natural interaction.
Acknowledgements
We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No 642667 (SECURE). | Dialogue Act Markup in Several Layers (DAMSL) tag set |
b99948ac4810a7fe3477f6591b8cf211d6398e67 | b99948ac4810a7fe3477f6591b8cf211d6398e67_0 | Q: How many models were used?
Text: Introduction
With the growing demand for human-computer/robot interaction systems, detecting the emotional state of the user can greatly help a conversational agent respond at an appropriate emotional level. Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation, emotion-based text-to-speech, personalisation, etc. Human emotional states can be expressed verbally and non-verbally BIBREF0, BIBREF1; however, when building an interactive dialogue system, the interface also needs dialogue acts. A typical dialogue system consists of a language understanding module which is required to determine the meaning of, and the intention in, the human input utterances BIBREF2, BIBREF3. Also, in discourse or conversational analysis, dialogue acts are the main linguistic features to consider BIBREF4. A dialogue act characterises the intention and performative function of an utterance in the dialogue. For example, it can capture a user's intention by distinguishing Question, Answer, Request, Agree/Reject, etc., and performative functions such as Acknowledgement, Conversational-opening or -closing, Thanking, etc. The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5.
Research in emotion recognition is growing rapidly and many datasets are available, such as text-based, speech- or vision-level, and multimodal emotion data. Emotion expression recognition is a challenging task and hence multimodality is crucial BIBREF0. However, only a few conversational multi-modal emotion recognition datasets are available, for example IEMOCAP BIBREF6, SEMAINE BIBREF7 and MELD BIBREF8. They are multi-modal dyadic conversational datasets containing audio-visual recordings and conversational transcripts. Every utterance in these datasets is labeled with an emotion label.
In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble the output labels of the five models by checking for majority agreement (most of the model labels are the same) and by ranking the models' confidence values. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2 shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of the annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them; certain dialogue acts of the utterances show significant and useful association with respective emotional states. For example, the Accept/Agree dialogue act often occurs with the Joy emotion, while Reject occurs with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and the annotated datasets are made available at the SECURE EU Project website.
Annotation of Emotional Dialogue Acts ::: Data for Conversational Emotion Analysis
There are two emotion taxonomies: (1) discrete emotion categories (DEC) and (2) a fine-grained dimensional basis of emotion states (DBE). The DECs are Joy, Sadness, Fear, Surprise, Disgust, Anger and Neutral, identified by Ekman et al. ekman1987universalemos. The DBE of emotion is usually derived from two or three dimensions BIBREF1, BIBREF11, BIBREF12. A two-dimensional model is commonly used with Valence and Arousal (also called activation), and in the three-dimensional model the third dimension is Dominance. IEMOCAP is annotated with all DECs and two additional emotion classes, Frustration and Excited. IEMOCAP is also annotated with the three DBE dimensions Valence, Arousal and Dominance BIBREF6. MELD BIBREF8, which is an evolved version of the Emotionlines dataset developed by BIBREF13, is annotated with exactly 7 DECs and sentiments (positive, negative and neutral).
Annotation of Emotional Dialogue Acts ::: Dialogue Act Tagset and SwDA Corpus
There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance not only as presenting information but as performing an action. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. With the increased development of dialogue systems and discourse analysis, a standard taxonomy, the Dialogue Act Markup in Several Layers (DAMSL) tag set, was introduced in recent decades. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17.
The DAMSL annotation includes not only utterance-level but also segmented-utterance labelling. However, in the emotion datasets the utterances are not segmented: as we can see in Figure FIGREF2, the first and fourth utterances are not segmented into two separate utterances. The fourth utterance could be segmented to carry two dialogue act labels, for example a statement (sd) and a question (qy). Such segmentation provides very fine-grained DA classes and follows the concept of discourse compositionality. DAMSL distinguishes wh-question (qw), yes-no question (qy), open-ended (qo), and or-question (qr) classes, not just because these questions are syntactically distinct, but also because they have different forward functions BIBREF18. For example, a yes-no question is more likely to get a “yes" answer than a wh-question (qw). This also gives the intuition that answers follow the syntactic formulation of the question, providing context. For example, qy is used for a question that, from a discourse perspective, expects a Yes (ny) or No (nn) answer.
We have investigated the annotation method and trained our neural models with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10. The SwDA Corpus is annotated with the DAMSL tag set and has been used for reporting and benchmarking state-of-the-art results in dialogue act recognition tasks BIBREF19, BIBREF20, BIBREF21, which makes it ideal for our use case. The Switchboard DAMSL Coders Manual provides further details about the dialogue act labels.
Annotation of Emotional Dialogue Acts ::: Neural Model Annotators
We adopted the neural architectures of Bothe et al. bothe2018discourse, which come in two variants: a non-context model (classifying at the utterance level) and a context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the two preceding utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adopt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use word embedding vectors from pre-trained ELMo (Embeddings from Language Models) BIBREF22. We have a pool of five neural annotators, as shown in Figure FIGREF6. Our online tool, Discourse-Wizard, is available for practising automated dialogue act labeling. The tool uses the same neural architectures but with model-trained embeddings, whereas in this work we use pre-trained ELMo embeddings, which perform better but are too computationally expensive and large to host in the online tool. The annotators are:
Utt-level 1 Dialogue Act Neural Annotator (DANA) is an utterance-level classifier that uses word embeddings ($w$) as input to an RNN layer followed by an attention mechanism, and computes the probability of dialogue acts ($da$) using the softmax function (see the dotted utt-l1 line in Figure FIGREF10). This model achieved 75.13% accuracy on the SwDA corpus test set.
Context 1 DANA is a context model that uses 2 preceding utterances while recognizing the dialogue act of the current utterance (see the context model with the con1 line in Figure FIGREF10). It uses a hierarchical RNN: the first RNN layer encodes each utterance from its word embeddings ($w$), and the second RNN layer receives three utterance representations ($u$) (the current and two preceding ones) composed by the first layer, followed by an attention mechanism ($a$), where $\sum _{n=0}^{2} a_{t-n} = 1$. Finally, the softmax function is used to compute the probability distribution. This model achieved 77.55% accuracy on the SwDA corpus test set.
Utt-level 2 DANA is another utterance-level classifier which takes an average of the word embeddings in the input utterance and uses a feedforward neural network hidden layer (see utt-l2 line in Figure FIGREF10, where $mean$ passed to $softmax$ directly). Similar to the previous model, it computes the probability of dialogue acts using the softmax function. This model achieved 72.59% accuracy on the test set of the SwDA corpus.
Context 2 DANA is another context model that uses three utterances similar to the Context 1 DANA model, but the utterances are composed as the mean of the word embeddings over each utterance, similar to the Utt-level 2 model ($mean$ passed to context model in Figure FIGREF10 with con2 line). Hence, the Context 2 DANA model is composed of one RNN layer with three input vectors, finally topped with the softmax function for computing the probability distribution of the dialogue acts. This model achieved 75.97% accuracy on the test set of the SwDA corpus.
Context 3 DANA is a context model that uses three utterances similar to the previous models, but the utterance representations combine both features from the Context 1 and Context 2 models (con1 and con2 together in Figure FIGREF10). Hence, the Context 3 DANA model combines features of almost all the previous four models to provide the recognition of the dialogue acts. This model achieves 75.91% accuracy on the SwDA corpus test set.
Annotation of Emotional Dialogue Acts ::: Ensemble of Neural Annotators
First preference is given to labels on which all neural annotators agree perfectly. In Table TABREF11, we can see that both datasets have about 40% of labels matching exactly across all models (AM). Next, priority is given to the context-based models: we check whether all three context models produce the same label. If only two of the three context models agree, we additionally check whether that label is also produced by at least one of the non-context models, and if so we accept the label supported by those at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When none of the context models agree, we rank the labels by their confidence values, produced as a probability distribution by the $softmax$ function, and sort them in descending order. We then check whether the top three labels (the case where one context model and both non-context models produce the same label) or at least the top two labels match, and if so we pick that label. Such cases account for about 3% of labels in IEMOCAP and 5% in MELD (BM).
Finally, when none of the above conditions is fulfilled, we leave the label as an unknown category. This unknown dialogue act category is labeled `xx' in the final annotations, and such labels make up about 7% of IEMOCAP and 11% of MELD (NM). The statistics of the EDAs are reported in Table TABREF13 for both datasets. The total number of utterances in MELD includes the training, validation and test sets.
Annotation of Emotional Dialogue Acts ::: Reliability of Neural Annotators
The pool of neural annotators provides a fair range of annotations, and we checked their reliability with the following metrics BIBREF23. Krippendorff's Alpha ($\alpha $) is a reliability coefficient developed to measure the agreement among observers, annotators, and raters, and is often used in emotion annotation BIBREF24. We apply it to the five neural annotators at the nominal level of measurement of dialogue act categories. $\alpha $ is computed as $\alpha = 1 - \frac{D_{o}}{D_{e}}$,
where $D_{o}$ is the observed disagreement and $D_{e}$ is the disagreement that is expected by chance. $\alpha =1$ means all annotators produce the same label, while $\alpha =0$ would mean none agreed on any label. As we can see in Table TABREF20, both datasets IEMOCAP and MELD produce significant inter-neural annotator agreement, 0.553 and 0.494, respectively.
A very popular inter-annotator metric is Fleiss' Kappa score, also reported in Table TABREF20, which determines the consistency of the ratings. The kappa $k$ can be defined as $k = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}$,
where the denominator $1 -\bar{P}_e$ gives the degree of agreement attainable above chance, and the numerator $\bar{P} -\bar{P}_e$ gives the degree of agreement actually achieved above chance. Hence, $k = 1$ if the raters agree completely, and $k = 0$ when they reach no agreement at all. We obtained 0.556 and 0.502 for IEMOCAP and MELD respectively with our five neural annotators. This indicates that the annotators label the dialogue acts reliably and consistently. We also report Spearman's correlation between the context-based models (Context1 and Context2), which shows a strong correlation between them (Table TABREF20). Since the final labels rely on an absolute match between all context-based models, their strong correlation indicates the robustness of this choice.
EDAs Analysis
Figure FIGREF12 shows the emotional dialogue act co-occurrences with respect to emotion labels for both datasets. There is a set of three bars per dialogue act in the figure: the first and second bars represent the emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for the MELD sentiment (MS) labels. The MELD emotion and sentiment statistics are interesting as they are strongly correlated with each other. The bars contain the number of utterances for each emotion label, normalized with respect to the total number of utterances for that particular dialogue act category. The statements without opinion (sd) and with opinion (sv) contain utterances with almost all emotions. Many neutral utterances span all the dialogue acts.
Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in the case of IEMOCAP); however, some utterances carry `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration', although many carry the `Happy' emotion in the case of the MELD dataset. Acknowledgements (b) are mostly positive or neutral; however, Appreciation (ba) and Rhetorical (bh) backchannels occur more often with `Surprise', `Joy' and/or `Excited' (in the case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with the emotions `Surprise', `Excited', `Frustration' or `Disgust' (in the case of MELD), and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' compared to yes-answers (ny). Forward-functions such as Apology (fa) mostly occur with `Sadness', whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) usually occur with `Joy' or `Excited'.
We also noticed that both datasets exhibit a similar relation between dialogue acts and emotions. It is important to note that the dialogue act annotation is based on the given transcripts; however, the emotional expressions are better perceived with audio or video BIBREF6. In the last row of Table TABREF21, we report some examples of utterances marked with the unknown label (xx). They are excluded from the final annotation because they do not fulfil the conditions explained in Section SECREF14. It is also interesting to see the dialogue acts of the previous utterances (P-DA) of those skipped utterances; the sequence of the labels can be followed from Figure FIGREF6 (utt-l1, utt-l2, con1, con2, con3).
In the first example, the previous utterance was b, and three DANA models labeled the current utterance as b, but it is skipped because the confidence values were not sufficient to accept it as a final label. The second utterance can be challenging even for humans to assign to any of the dialogue acts. However, the third and fourth utterances follow a yes-no question (qy), and hence we can see in the third example that the context models tried their best to at least perceive it as an answer (ng, ny, nn). On the last utterance, “I'm so sorry!", all five annotators disagreed completely. Similar apology phrases are mostly found with the `Sadness' emotion label, and the correct dialogue act is Apology (fa); however, they are placed either in the sd or in the ba dialogue act category. We believe that with a human annotator's help those labels can be corrected with very limited effort.
Conclusion and Future Work
In this work, we presented a method to extend conversational multi-modal emotion datasets with dialogue act labels. We successfully show this on two well-established emotion datasets, IEMOCAP and MELD, which we labeled with dialogue acts and made publicly available for further study and research. As a first insight, we found that many of the dialogue acts and emotion labels follow certain relations. These relations can be useful for learning about emotional behaviour together with dialogue acts, for building a natural dialogue system, and for deeper conversational analysis. A conversational agent might benefit from considering both the emotional states and the dialogue acts of utterances when generating an appropriate response.
In future work, we foresee a human in the loop for the annotation process, alongside the pool of automated neural annotators. Robust annotations can then be achieved with very little human effort and supervision, for example by observing and correcting the final ensemble labels produced by the neural annotators. The human annotator might also help to achieve segmented-utterance labelling of the dialogue acts. We also plan to use these datasets for conversational analysis, to infer interactive behaviours of the emotional states with respect to dialogue acts. In our recent work, where we used dialogue acts to build a dialogue system for a social robot, we found this study and dataset very helpful. For example, we can extend our robotic conversational system to consider emotion as an additional linguistic feature to produce natural interaction.
Acknowledgements
We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No 642667 (SECURE). | five |
73d657d6faed0c11c65b1ab60e553db57f4971ca | 73d657d6faed0c11c65b1ab60e553db57f4971ca_0 | Q: Do they compare their neural network against any other model?
Text: Introduction
Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.
In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.
Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 .
Background
Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .
Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next.
Audiovisual synchronisation for lip videos
Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy ( INLINEFORM0 99 INLINEFORM1 ) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants.
Lip videos vs. ultrasound tongue imaging (UTI)
Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsagittal view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:
Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.
Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.
Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.
Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.
Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data.
Model
We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0
The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , INLINEFORM0 , which minimises the Euclidean distance INLINEFORM1 between INLINEFORM2 and INLINEFORM3 for positive pairs ( INLINEFORM4 ), and maximises it for negative pairs ( INLINEFORM5 ), for a number of training samples INLINEFORM6 : DISPLAYFORM0
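A minimal sketch of the two-stream mapping and the contrastive loss is given below, assuming PyTorch. The internals of each stream, the embedding size and the margin are illustrative placeholders and not the exact UltraSync architecture; the ultrasound window shape (5 frames of 63x138) follows the preprocessing described later, while the MFCC window shape is an assumption.

```python
# Minimal two-stream sketch with a contrastive loss (PyTorch).
# Stream internals, embedding size and margin are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStream(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        # Ultrasound stream: maps a (5, 63, 138) window to an embedding.
        self.vis = nn.Sequential(nn.Flatten(), nn.Linear(5 * 63 * 138, 256),
                                 nn.ReLU(), nn.Linear(256, emb_dim))
        # Audio stream: maps an assumed (20, 13) MFCC window to an embedding.
        self.aud = nn.Sequential(nn.Flatten(), nn.Linear(20 * 13, 256),
                                 nn.ReLU(), nn.Linear(256, emb_dim))

    def forward(self, ultra, mfcc):
        return self.vis(ultra), self.aud(mfcc)

def contrastive_loss(v, a, y, margin=1.0):
    # y is a float tensor: 1 for synchronised (positive) pairs, 0 for negatives.
    d = F.pairwise_distance(v, a)
    return (y * d.pow(2) + (1 - y) * F.relu(margin - d).pow(2)).mean()
```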
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 ).
Data
For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.
Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 KHz, containing the speech of the child and therapist. The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of INLINEFORM0 121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset.
Preparing the data
First, we exclude utterances of type “Non-speech" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.
To process the ultrasound more efficiently, we first reduce the frame rate from INLINEFORM0 121.5 fps to INLINEFORM1 24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).
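The frame-rate reduction and spatial downsampling can be expressed as a short NumPy sketch. It assumes the raw frames are available as an array of shape (n_frames, 63, 412); the edge-padding used to reach a width divisible by 3 is our assumption about how the 138-point width is obtained.

```python
# Sketch of the ultrasound preprocessing described above (NumPy).
# Assumes `frames` has shape (n_frames, 63, 412); names are ours.
import numpy as np

def preprocess_ultrasound(frames):
    frames = frames[::5]                       # ~121.5 fps -> ~24.3 fps (keep 1 in 5)
    n, lines, points = frames.shape            # (n, 63, 412)
    pad = (-points) % 3                        # pad width to a multiple of 3 (assumption)
    frames = np.pad(frames, ((0, 0), (0, 0), (0, pad)), mode="edge")
    # Downsample each scan line by a factor of 3 using the max pixel value:
    # 63x412 -> 63x138 (keeps 63 scan lines, fewer points per line).
    return frames.reshape(n, lines, -1, 3).max(axis=-1)
```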
The final pre-processing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As part of pre-processing, we remove these zero regions from the audio and the corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained higher performance by retaining them.
Creating samples using a self-supervision strategy
To train our model we need positive and negative training pairs. The model ingests short clips from each modality of INLINEFORM0 200ms long, calculated as INLINEFORM1 , where INLINEFORM2 is the time window, INLINEFORM3 is the number of ultrasound frames per window (5 in our case), and INLINEFORM4 is the ultrasound frame rate of the utterance ( INLINEFORM5 24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of INLINEFORM6 20ms, calculated as INLINEFORM7 , and a step size of INLINEFORM8 10ms, calculated as INLINEFORM9 . This gives us the input sizes shown in Figure FIGREF1 .
Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets.
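A sketch of the sample-creation step for a single utterance follows. It assumes the preprocessed ultrasound has shape (n_frames, 63, 138) and the MFCCs have shape (n_mfcc_frames, 13); the figure of roughly 20 MFCC frames per 200 ms window is an approximation derived from the 10 ms step size, and all names are ours.

```python
# Sketch of positive/negative sample creation for one utterance.
import numpy as np

FRAMES_PER_WINDOW = 5      # ~200 ms of ultrasound at ~24.3 fps
MFCC_PER_WINDOW = 20       # ~200 ms of MFCC frames at a 10 ms step (assumption)

def make_samples(ultra, mfcc, rng):
    n_win = min(len(ultra) // FRAMES_PER_WINDOW, len(mfcc) // MFCC_PER_WINDOW)
    u_wins = [ultra[i * FRAMES_PER_WINDOW:(i + 1) * FRAMES_PER_WINDOW] for i in range(n_win)]
    m_wins = [mfcc[i * MFCC_PER_WINDOW:(i + 1) * MFCC_PER_WINDOW] for i in range(n_win)]

    positives = [(u, m, 1) for u, m in zip(u_wins, m_wins)]
    if n_win < 2:
        return positives
    # One negative per positive: pair each ultrasound window with a
    # non-matching MFCC window from the same utterance.
    negatives = []
    for i, u in enumerate(u_wins):
        j = rng.choice([k for k in range(n_win) if k != i])
        negatives.append((u, m_wins[j], 0))
    return positives + negatives

rng = np.random.default_rng(0)
```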
Dividing samples for training, validation and testing
We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers. To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.
Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples.
Experiments
We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.
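The optimiser and learning-rate schedule described above map directly onto standard PyTorch components, as in the sketch below; `model`, `train_loader` and `val_loss()` are assumed to be defined elsewhere, and `contrastive_loss` refers to the earlier sketch.

```python
# Sketch of the training configuration described above (PyTorch).
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=2)

for epoch in range(20):
    for ultra, mfcc, label in train_loader:     # batches of 64 samples
        optimizer.zero_grad()
        v, a = model(ultra, mfcc)
        loss = contrastive_loss(v, a, label.float())
        loss.backward()
        optimizer.step()
    scheduler.step(val_loss())                  # reduce LR when validation loss plateaus
```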
Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.
Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, INLINEFORM0 125 and INLINEFORM1 45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.
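The candidate search can be sketched as follows. `pair_windows` is a hypothetical helper that aligns the two signals at a given offset and yields non-overlapping (ultrasound, MFCC) window pairs as tensors, and `model` is the trained two-stream network from the earlier sketch; neither name comes from the original implementation.

```python
# Sketch of the synchronisation-offset search described above.
import numpy as np
import torch
import torch.nn.functional as F

def predict_offset(ultra, mfcc, model, candidates_ms):
    avg_dist = []
    for offset in candidates_ms:
        dists = []
        for u_win, m_win in pair_windows(ultra, mfcc, offset):  # hypothetical helper
            with torch.no_grad():
                v, a = model(u_win.unsqueeze(0), m_win.unsqueeze(0))
                dists.append(F.pairwise_distance(v, a).item())
        avg_dist.append(np.mean(dists))
    # Pick the candidate offset with the smallest average distance.
    return candidates_ms[int(np.argmin(avg_dist))]

# Candidate set: the true training offsets fall in [0, 179] ms, discretised
# in 45 ms steps and binned, leaving 24 candidate values in this work.
```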
Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range ( INLINEFORM0 125 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 45) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 INLINEFORM5 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 INLINEFORM6 223ms. SyncNet reports INLINEFORM7 99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).
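The correctness criterion can be written as a small helper; the sign convention (a positive discrepancy meaning the predicted offset exceeds the true one) is our assumption.

```python
# Sketch of the evaluation criterion: a prediction counts as correct when
# the discrepancy lies within the detectability range (audio may lead by up
# to 45 ms and lag by up to 125 ms). The sign convention is an assumption.
def is_correct(predicted_ms, true_ms, lead_ms=45, lag_ms=125):
    discrepancy = predicted_ms - true_ms
    return -lag_ms <= discrepancy <= lead_ms

def accuracy(preds, trues):
    return sum(is_correct(p, t) for p, t in zip(preds, trues)) / len(trues)
```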
Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh"). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N INLINEFORM0 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N INLINEFORM1 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.
Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets.
Conclusion
We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-model embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 .
Acknowledgements
Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020). | No |
9ef182b61461d0d8b6feb1d6174796ccde290a15 | 9ef182b61461d0d8b6feb1d6174796ccde290a15_0 | Q: Do they annotate their own dataset or use an existing one?
Text: Introduction
Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.
In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.
Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 .
Background
Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .
Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next.
Audiovisual synchronisation for lip videos
Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy ( INLINEFORM0 99 INLINEFORM1 ) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants.
Lip videos vs. ultrasound tongue imaging (UTI)
Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsagittal view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:
Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.
Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.
Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.
Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.
Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data.
Model
We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0
The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , E, which minimises the Euclidean distance d_n between v and a for positive pairs (y_n = 1), and maximises it for negative pairs (y_n = 0), over N training samples. In its standard form BIBREF21 , BIBREF22 , the loss is E = (1/2N) Σ_{n=1..N} [ y_n d_n² + (1 − y_n) max(m − d_n, 0)² ], where m is a margin hyper-parameter.
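A short sketch of this standard contrastive loss follows; the margin value is an assumption, not a figure taken from the paper.

import torch

def contrastive_loss(v, a, y, margin: float = 1.0):
    """Contrastive loss: pull positive pairs (y=1) together, push negative pairs (y=0)
    apart up to a margin. The margin default here is assumed, not from the paper."""
    d = torch.norm(v - a, dim=1)                          # Euclidean distance d_n
    positive_term = y * d.pow(2)
    negative_term = (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return 0.5 * (positive_term + negative_term).mean()

# Example: 8 pairs, half positive, half negative.
v, a = torch.randn(8, 64), torch.randn(8, 64)
y = torch.tensor([1, 1, 1, 1, 0, 0, 0, 0], dtype=torch.float32)
loss = contrastive_loss(v, a, y)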
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 ).
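As an illustration of the candidate-search procedure just described, here is a minimal sketch; pair_windows and model_distance are hypothetical helpers standing in for the windowing step and the trained two-stream model.

import numpy as np

def predict_offset(ultrasound, audio, candidate_offsets_ms, pair_windows, model_distance):
    """Return the candidate offset whose aligned windows have the smallest mean distance.
    pair_windows(ultrasound, audio, offset_ms) and model_distance(u_win, a_win) are
    hypothetical helpers: the first realigns and windows the signals, the second
    applies the trained model to one ultrasound/MFCC window pair."""
    mean_distances = []
    for offset in candidate_offsets_ms:
        pairs = pair_windows(ultrasound, audio, offset)
        distances = [model_distance(u, a) for u, a in pairs]
        mean_distances.append(np.mean(distances))
    return candidate_offsets_ms[int(np.argmin(mean_distances))]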
Data
For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.
Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 kHz, containing the speech of the child and therapist. The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of ~121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset.
Preparing the data
First, we exclude utterances of type “Non-speech" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.
To process the ultrasound more efficiently, we first reduce the frame rate from ~121.5 fps to ~24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).
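A NumPy sketch of these two reductions is given below; how the odd scan-line length of 412 is brought to 138 pooled points is not stated, so the edge-padding to 414 is an assumption.

import numpy as np

def reduce_ultrasound(frames):
    """frames: (n_frames, 63, 412) raw ultrasound.
    Keep 1 of every 5 frames (~121.5 -> ~24.3 fps), then max-pool along each scan
    line by a factor of 3. Padding 412 points to 414 (an assumed handling of the
    remainder) yields the 63x138 frame size reported in the text."""
    frames = frames[::5]                                               # temporal reduction
    padded = np.pad(frames, ((0, 0), (0, 0), (0, 2)), mode="edge")     # 412 -> 414 points
    n, lines, points = padded.shape
    return padded.reshape(n, lines, points // 3, 3).max(axis=-1)       # (n, 63, 138)

reduced = reduce_ultrasound(np.random.rand(100, 63, 412))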
The final preprocessing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As a preprocessing step, we remove the zero regions from audio and corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained a higher performance by retaining them.
Creating samples using a self-supervision strategy
To train our model we need positive and negative training pairs. The model ingests short clips from each modality, each ~200ms long, calculated as t = n / r, where t is the time window, n is the number of ultrasound frames per window (5 in our case), and r is the ultrasound frame rate of the utterance (~24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of ~20ms and a step size of ~10ms, both derived from the clip length t. This gives us the input sizes shown in Figure FIGREF1 .
Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets.
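A minimal sketch of this pairing strategy follows, with the per-window bookkeeping simplified; a real implementation would also avoid the rare case where the shuffle maps a window back onto its own MFCC frames.

import numpy as np

def make_pairs(ultra_windows, mfcc_windows, rng=np.random.default_rng(0)):
    """ultra_windows[i] and mfcc_windows[i] cover the same ~200ms of one utterance.
    Positives keep the true pairing (label 1); negatives re-pair each ultrasound
    window with a randomly chosen MFCC window from the same utterance (label 0)."""
    positives = [(u, m, 1) for u, m in zip(ultra_windows, mfcc_windows)]
    shuffled = rng.permutation(len(mfcc_windows))
    # (a real implementation would reject pairings where shuffled[i] == i)
    negatives = [(u, mfcc_windows[j], 0) for u, j in zip(ultra_windows, shuffled)]
    return positives + negatives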
Dividing samples for training, validation and testing
We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers. To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.
Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples.
Experiments
We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.
Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.
Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, −125 and +45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.
Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range (between −125 ms and +45 ms) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 ± 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 ± 223ms. SyncNet reports ~99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).
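The evaluation criterion can be sketched as follows, using the −125/+45 ms detectability bounds discussed above.

import numpy as np

def offset_accuracy(predicted_ms, true_ms, lower=-125.0, upper=45.0):
    """A prediction counts as correct when (predicted - true) lies within the
    audiovisual detectability range, taken here as (-125, +45) ms."""
    discrepancy = np.asarray(predicted_ms, dtype=float) - np.asarray(true_ms, dtype=float)
    correct = (discrepancy > lower) & (discrepancy < upper)
    return correct.mean(), discrepancy.mean(), discrepancy.std()

acc, mean_disc, std_disc = offset_accuracy([30, 200, 95], [45, 50, 90])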
Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh”). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N = 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N = 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.
Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets.
Conclusion
We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-modal embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 .
Acknowledgements
Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020). | Use an existing one |
f6f8054f327a2c084a73faca16cf24a180c094ae | f6f8054f327a2c084a73faca16cf24a180c094ae_0 | Q: Does their neural network predict a single offset in a recording?
Text: Introduction
Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of +45ms if the audio leads and −125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.
In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.
Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 .
Background
Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .
Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next.
Audiovisual synchronisation for lip videos
Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy (~99%) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants.
Lip videos vs. ultrasound tongue imaging (UTI)
Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsagittal view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:
Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.
Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.
Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.
Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.
Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data.
Model
We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data x_v (ultrasound) and audio data x_a (MFCC), which have different shapes, are mapped to low-dimensional embeddings v (visual) and a (audio) of the same size.
The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , E, which minimises the Euclidean distance d_n between v and a for positive pairs (y_n = 1), and maximises it for negative pairs (y_n = 0), over N training samples. In its standard form BIBREF21 , BIBREF22 , the loss is E = (1/2N) Σ_{n=1..N} [ y_n d_n² + (1 − y_n) max(m − d_n, 0)² ], where m is a margin hyper-parameter.
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 ).
Data
For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.
Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 kHz, containing the speech of the child and therapist. The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of ~121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset.
Preparing the data
First, we exclude utterances of type “Non-speech" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.
To process the ultrasound more efficiently, we first reduce the frame rate from ~121.5 fps to ~24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).
The final preprocessing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As a preprocessing step, we remove the zero regions from audio and corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained a higher performance by retaining them.
Creating samples using a self-supervision strategy
To train our model we need positive and negative training pairs. The model ingests short clips from each modality, each ~200ms long, calculated as t = n / r, where t is the time window, n is the number of ultrasound frames per window (5 in our case), and r is the ultrasound frame rate of the utterance (~24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of ~20ms and a step size of ~10ms, both derived from the clip length t. This gives us the input sizes shown in Figure FIGREF1 .
Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets.
Dividing samples for training, validation and testing
We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers. To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.
Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples.
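As an illustration of this speaker- and session-disjoint split, a minimal sketch follows; the speaker and session selections are passed in as parameters and are not the exact lists used in the paper.

def split_utterances(utterances, val_speakers, test_speakers, val_sessions, test_sessions):
    """Each utterance is a dict with 'speaker', 'session' and data fields.
    Held-out speakers go entirely to validation/test; for the remaining speakers,
    one named (speaker, session) pair is held out and the rest is used for training."""
    train, val, test = [], [], []
    for utt in utterances:
        spk, ses = utt["speaker"], utt["session"]
        if spk in test_speakers or (spk, ses) in test_sessions:
            test.append(utt)
        elif spk in val_speakers or (spk, ses) in val_sessions:
            val.append(utt)
        else:
            train.append(utt)
    return train, val, test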
Experiments
We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.
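The optimisation setup just described (Adam, learning rate 0.001, batches of 64, 20 epochs, reduce-on-plateau by a factor of 0.1 with patience 2) can be sketched as below; PyTorch is an assumed framework, and model, train_loader, val_loader and contrastive_loss are assumed to exist.

import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=2)

for epoch in range(20):
    model.train()
    for ultrasound, mfcc, label in train_loader:           # batches of 64 samples
        optimizer.zero_grad()
        v, a = model(ultrasound, mfcc)
        loss = contrastive_loss(v, a, label)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = sum(contrastive_loss(*model(u, m), y) for u, m, y in val_loader)
    scheduler.step(val_loss)                                # reduce lr when val loss plateaus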
Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.
Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, −125 and +45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.
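A sketch of the candidate-set construction follows; the step size and offset range are taken as parameters, and the lower bin edge is used as the candidate value, which is a simplification.

import numpy as np

def build_candidate_offsets(true_train_offsets_ms, step_ms):
    """Discretise the range of true training offsets into step_ms-wide bins and keep
    only bins that contain at least one true offset (empty bins are discarded)."""
    offsets = np.asarray(true_train_offsets_ms, dtype=float)
    edges = np.arange(offsets.min(), offsets.max() + step_ms, step_ms)
    candidates = []
    for lo in edges:
        in_bin = (offsets >= lo) & (offsets < lo + step_ms)
        if in_bin.any():
            candidates.append(lo)                 # lower bin edge as the candidate value
    return np.array(candidates)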
Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range (between −125 ms and +45 ms) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 ± 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 ± 223ms. SyncNet reports ~99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).
Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh”). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N = 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N = 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.
Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets.
Conclusion
We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-modal embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 .
Acknowledgements
Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020). | Yes |
b8f711179a468fec9a0d8a961fb0f51894af4b31 | b8f711179a468fec9a0d8a961fb0f51894af4b31_0 | Q: What kind of neural network architecture do they use?
Text: Introduction
Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of +45ms if the audio leads and −125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.
In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.
Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 .
Background
Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .
Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next.
Audiovisual synchronisation for lip videos
Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy (~99%) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants.
Lip videos vs. ultrasound tongue imaging (UTI)
Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsagittal view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:
Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.
Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.
Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.
Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.
Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data.
Model
We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data x_v (ultrasound) and audio data x_a (MFCC), which have different shapes, are mapped to low-dimensional embeddings v (visual) and a (audio) of the same size.
The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , E, which minimises the Euclidean distance d_n between v and a for positive pairs (y_n = 1), and maximises it for negative pairs (y_n = 0), over N training samples. In its standard form BIBREF21 , BIBREF22 , the loss is E = (1/2N) Σ_{n=1..N} [ y_n d_n² + (1 − y_n) max(m − d_n, 0)² ], where m is a margin hyper-parameter.
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 ).
Data
For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.
Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 kHz, containing the speech of the child and therapist. The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of ~121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset.
Preparing the data
First, we exclude utterances of type “Non-speech" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.
To process the ultrasound more efficiently, we first reduce the frame rate from ~121.5 fps to ~24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).
The final preprocessing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As a preprocessing step, we remove the zero regions from audio and corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained a higher performance by retaining them.
Creating samples using a self-supervision strategy
To train our model we need positive and negative training pairs. The model ingests short clips from each modality, each ~200ms long, calculated as t = n / r, where t is the time window, n is the number of ultrasound frames per window (5 in our case), and r is the ultrasound frame rate of the utterance (~24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of ~20ms and a step size of ~10ms, both derived from the clip length t. This gives us the input sizes shown in Figure FIGREF1 .
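The MFCC extraction can be sketched as follows; librosa is an assumed tool (the paper does not name its feature extractor), with a ~20ms window and ~10ms hop at 22.05 kHz.

import librosa

def extract_mfcc(wav_path, sr=22050, n_mfcc=13):
    """13 MFCCs with a ~20ms analysis window and ~10ms step at 22.05 kHz."""
    audio, sr = librosa.load(wav_path, sr=sr)
    n_fft = int(0.020 * sr)          # ~20ms window (441 samples)
    hop_length = int(0.010 * sr)     # ~10ms step (220 samples)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    return mfcc                      # shape: (13, n_audio_frames)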
Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets.
Dividing samples for training, validation and testing
We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers. To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.
Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples.
Experiments
We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.
Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.
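Turning pairwise distances into the binary accuracy reported above can be sketched as follows, using the 0.5 threshold mentioned in the text.

import numpy as np

def binary_accuracy(distances, labels, threshold=0.5):
    """Pairs with distance below the threshold are predicted as synchronised (1),
    otherwise as desynchronised (0); accuracy is agreement with the true labels."""
    predictions = (np.asarray(distances) < threshold).astype(int)
    return (predictions == np.asarray(labels)).mean()

acc = binary_accuracy([0.2, 0.7, 0.4, 0.9], [1, 0, 0, 0])   # -> 0.75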
Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, −125 and +45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.
Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range (between −125 ms and +45 ms) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 ± 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 ± 223ms. SyncNet reports ~99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).
Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh”). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N = 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N = 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.
Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets.
Conclusion
We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-modal embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 .
Acknowledgements
Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020). | CNN |
3bf429633ecbbfec3d7ffbcfa61fa90440cc918b | 3bf429633ecbbfec3d7ffbcfa61fa90440cc918b_0 | Q: How are aspects identified in aspect extraction?
Text: Affiliation
School of Computer Science and Engineering, Nanyang Technological University, Singapore
Synonyms
Sentiment Analysis, Subjectivity Detection, Deep Learning, Aspect Extraction, Polarity Distribution, Convolutional Neural Network.
Glossary
Aspect : Feature related to an opinion target
Convolution : features made of consecutive words
BOW : Bag of Words
NLP : Natural Language Processing
CNN : Convolutional Neural Network
LDA : Latent Dirichlet Allocation
Definition
Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment, so it is desirable for a sentiment analysis engine to find and filter out the objective sentences before further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about.
Key Points
We consider deep convolutional neural networks where each layer is learned independently of the others, resulting in low complexity.
We model temporal dynamics in product reviews by pre-training the deep CNN using dynamic Gaussian Bayesian networks.
We combine linguistic aspect mining with CNN features for effective sentiment detection.
Historical Background
Traditional methods prior to 2001 used hand-crafted templates to identify subjectivity and did not generalize well for resource-deficient languages such as Spanish. Later works published between 2002 and 2009 proposed the use of deep neural networks to automatically learn a dictionary of features (in the form of convolution kernels) that is portable to new languages. Recently, recurrent deep neural networks are being used to model alternating subjective and objective sentences within a single review. Such networks are difficult to train for a large vocabulary of words due to the problem of vanishing gradients. Hence, in this chapter we consider the use of heuristics to learn dynamic Gaussian networks that select significant word dependencies between sentences in a single review.
Further, in order to model the relation between opinion targets and the corresponding polarity in a review, aspect-based opinion mining is used. Explicit aspects were modelled by several authors using statistical observations such as mutual information between a noun phrase and the product class. However, this method was unable to detect implicit aspects due to the high level of noise in the data. Hence, topic modeling was widely used to extract and group aspects, where the latent variable 'topic' is introduced between the observed variables 'document' and 'word'. In this chapter, we demonstrate the use of 'common sense reasoning' when computing word distributions that enable shifting from a syntactic word model to a semantic concept model.
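To make the topic-modeling idea concrete, here is a minimal sketch using scikit-learn's LDA on a toy set of reviews; it is illustrative only and is not the common-sense-based concept model described in this chapter.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "the screen is bright and the resolution is superb",
    "battery life is short and charging is slow",
    "great screen but the battery drains quickly",
    "the camera is sleek and very affordable",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reviews)          # observed document-word counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")                      # word clusters acting as rough aspect groups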
Introduction
While sentiment analysis research has become very popular in the past ten years, most companies and researchers still approach it simply as a polarity detection problem. In reality, sentiment analysis is a `suitcase problem' that requires tackling many natural language processing (NLP) subtasks, including microtext analysis, sarcasm detection, anaphora resolution, subjectivity detection and aspect extraction. In this chapter, we focus on the last two subtasks as they are key for ensuring a minimum level of accuracy in the detection of polarity from social media.
The two basic issues associated with sentiment analysis on the Web, in fact, are that (1) a lot of factual or non-opinionated information needs to be filtered out and (2) opinions are most times about different aspects of the same product or service rather than on the whole item and reviewers tend to praise some and criticize others. Subjectivity detection, hence, ensures that factual information is filtered out and only opinionated information is passed on to the polarity classifier and aspect extraction enables the correct distribution of polarity among the different features of the opinion target (in stead of having one unique, averaged polarity assigned to it). In this chapter, we offer some insights about each task and apply an ensemble of deep learning and linguistics to tackle both.
The opportunity to capture the opinion of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised increasing interest of both the scientific community (because of the exciting open challenges) and the business world (because of the remarkable benefits for marketing and financial market prediction). Today, sentiment analysis research has its applications in several different scenarios. There are a good number of companies, both large- and small-scale, that focus on the analysis of opinions and sentiments as part of their mission BIBREF0 . Opinion mining techniques can be used for the creation and automated upkeep of review and opinion aggregation websites, in which opinions are continuously gathered from the Web and not restricted to just product reviews, but also to broader topics such as political issues and brand perception. Sentiment analysis also has a great potential as a sub-component technology for other systems. It can enhance the capabilities of customer relationship management and recommendation systems; for example, allowing users to find out which features customers are particularly interested in or to exclude items that have received overtly negative feedback from recommendation lists. Similarly, it can be used in social communication for troll filtering and to enhance anti-spam systems. Business intelligence is also one of the main factors behind corporate interest in the field of sentiment analysis BIBREF1 .
Sentiment analysis is a `suitcase' research problem that requires tackling many NLP sub-tasks, including semantic parsing BIBREF2 , named entity recognition BIBREF3 , sarcasm detection BIBREF4 , subjectivity detection and aspect extraction. In opinion mining, different levels of analysis granularity have been proposed, each one having its own advantages and drawbacks BIBREF5 , BIBREF6 . Aspect-based opinion mining BIBREF7 , BIBREF8 focuses on the relations between aspects and document polarity. An aspect, also known as an opinion target, is a concept in which the opinion is expressed in the given document. For example, the sentence “The screen of my phone is really nice and its resolution is superb” in a phone review contains positive polarity, i.e., the author likes the phone. However, more specifically, the positive opinion is about its screen and resolution; these concepts are thus called opinion targets, or aspects, of this opinion. The task of identifying the aspects in a given opinionated text is called aspect extraction. There are two types of aspects defined in aspect-based opinion mining: explicit aspects and implicit aspects. Explicit aspects are words in the opinionated document that explicitly denote the opinion target. For instance, in the above example, the opinion targets screen and resolution are explicitly mentioned in the text. In contrast, an implicit aspect is a concept that represents the opinion target of an opinionated document but which is not specified explicitly in the text. One can infer that the sentence, “This camera is sleek and very affordable” implicitly contains a positive opinion of the aspects appearance and price of the entity camera. These same aspects would be explicit in an equivalent sentence: “The appearance of this camera is sleek and its price is very affordable.”
Most of the previous works in aspect term extraction have either used conditional random fields (CRFs) BIBREF9 , BIBREF10 or linguistic patterns BIBREF7 , BIBREF11 . Both of these approaches have their own limitations: CRF is a linear model, so it needs a large number of features to work well; linguistic patterns need to be crafted by hand, and they crucially depend on the grammatical accuracy of the sentences. In this chapter, we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection.
The remainder of this chapter is organized as follows: Section SECREF3 and SECREF4 propose some introductory explanation and some literature for the tasks of subjectivity detection and aspect extraction, respectively; Section SECREF5 illustrates the basic concepts of deep learning adopted in this work; Section SECREF6 describes in detail the proposed algorithm; Section SECREF7 shows evaluation results; finally, Section SECREF9 concludes the chapter.
Subjectivity detection
Subjectivity detection is an important subtask of sentiment analysis that can prevent a sentiment classifier from considering irrelevant or potentially misleading text in online social platforms such as Twitter and Facebook. Subjectivity extraction can reduce the amount of review data to only 60 INLINEFORM0 and still produce the same polarity results as full text classification BIBREF12 . This is useful to analysts in government, commercial and political domains who need to determine the response of people to different crisis events BIBREF12 , BIBREF13 , BIBREF14 . Similarly, online reviews need to be summarized in a manner that allows comparison of opinions, so that a user can clearly see the advantages and weaknesses of each product merely with a single glance, both in unimodal BIBREF15 and multimodal BIBREF16 , BIBREF17 contexts. Further, we can do in-depth opinion assessment, such as finding reasons or aspects BIBREF18 in opinion-bearing texts. For example, INLINEFORM1 , which makes the film INLINEFORM2 . Several works have explored sentiment composition through careful engineering of features or polarity shifting rules on syntactic structures. However, sentiment accuracies for classifying a sentence as positive/negative/neutral have not exceeded 60 INLINEFORM3 .
Early attempts used general subjectivity clues to generate training data from un-annotated text BIBREF19 . Next, bag-of-words (BOW) classifiers were introduced that represent a document as a multiset of its words, disregarding grammar and word order. These methods did not work well on short tweets. Co-occurrence matrices were also unable to capture the difference between antonyms such as `good/bad' that have similar distributions. Subjectivity detection hence progressed from syntactic to semantic methods in BIBREF19 , where the authors used extraction patterns to represent subjective expressions. For example, the pattern `hijacking' of INLINEFORM0 looks for the noun `hijacking' and the object of the preposition INLINEFORM1 . Extracted features are used to train machine-learning classifiers such as SVM BIBREF20 and ELM BIBREF21 . Subjectivity detection is also useful for constructing and maintaining sentiment lexicons, as objective words or concepts need to be omitted from them BIBREF22 .
Since subjective sentences tend to be longer than neutral sentences, recursive neural networks were proposed, where the sentiment class at each node in the parse tree is captured using matrix multiplication of parent nodes BIBREF23 , BIBREF24 . However, the number of possible parent composition functions is exponential, hence a recursive neural tensor network was introduced in BIBREF25 that uses a single tensor composition function to define multiple bilinear dependencies between words. In BIBREF26 , the authors used a logistic regression predictor that defines a hyperplane in the word vector space, where a word vector's positive sentiment probability depends on where it lies with respect to this hyperplane. However, it was found that while incorporating words that are more subjective can generally yield better results, the performance gain by employing extra neutral words is less significant BIBREF27 . Another class of probabilistic models, called Latent Dirichlet Allocation, assumes each document is a mixture of latent topics. Lastly, sentence-level subjectivity detection was integrated into document-level sentiment detection using graphs where each node is a sentence. The contextual constraints between sentences in a graph led to significant improvement in polarity classification BIBREF28 .
Similarly, in BIBREF29 the authors take advantage of the sequence encoding method for trees and treat them as sequence kernels for sentences. Templates are not suitable for semantic role labeling, because relevant context might be very far away. Hence, deep neural networks have become popular to process text. In word2vec, for example, a word's meaning is simply a signal that helps to classify larger entities such as documents. Every word is mapped to a unique vector, represented by a column in a weight matrix. The concatenation or sum of the vectors is then used as features for prediction of the next word in a sentence BIBREF30 . Related words appear next to each other in a INLINEFORM0 dimensional vector space. Vectorizing them allows us to measure their similarities and cluster them. For semantic role labeling, we need to know the relative position of verbs, hence the features can include prefix, suffix, distance from verbs in the sentence etc. However, each feature has a corresponding vector representation in INLINEFORM1 dimensional space learned from the training data.
Recently, convolutional neural networks (CNNs) are being used for subjectivity detection. In particular, BIBREF31 used recurrent CNNs. These show high accuracy on certain datasets such as Twitter. Since we are also concerned with a specific sentence within the context of the previous discussion, the order of the sentences preceding the one at hand results in a sequence of sentences, also known as a time series of sentences BIBREF31 . However, their model suffers from overfitting; hence, in this work we consider deep convolutional neural networks, where temporal information is modeled via dynamic Gaussian Bayesian networks.
Aspect-Based Sentiment Analysis
Aspect extraction from opinions was first studied by BIBREF7 . They introduced the distinction between explicit and implicit aspects. However, the authors only dealt with explicit aspects and used a set of rules based on statistical observations. Hu and Liu's method was later improved by BIBREF32 and by BIBREF33 . BIBREF32 assumed the product class is known in advance. Their algorithm detects whether a noun or noun phrase is a product feature by computing the point-wise mutual information between the noun phrase and the product class.
BIBREF34 presented a method that uses a language model to identify product features. They assumed that product features are more frequent in product reviews than in general natural language text. However, their method seems to have low precision since retrieved aspects are affected by noise. Some methods treated aspect term extraction as sequence labeling and used CRF for that. Such methods have performed very well on the datasets, even in cross-domain experiments BIBREF9 , BIBREF10 .
Topic modeling has been widely used as a basis to perform extraction and grouping of aspects BIBREF35 , BIBREF36 . Two models were considered: pLSA BIBREF37 and LDA BIBREF38 . Both models introduce a latent variable “topic” between the observable variables “document” and “word” to analyze the semantic topic distribution of documents. In topic models, each document is represented as a random mixture over latent topics, where each topic is characterized by a distribution over words.
Such methods have been gaining popularity in social media analysis like emerging political topic detection in Twitter BIBREF39 . The LDA model defines a Dirichlet probabilistic generative process for document-topic distribution; in each document, a latent aspect is chosen according to a multinomial distribution, controlled by a Dirichlet prior INLINEFORM0 . Then, given an aspect, a word is extracted according to another multinomial distribution, controlled by another Dirichlet prior INLINEFORM1 . Among existing works employing these models are the extraction of global aspects ( such as the brand of a product) and local aspects (such as the property of a product BIBREF40 ), the extraction of key phrases BIBREF41 , the rating of multi-aspects BIBREF42 , and the summarization of aspects and sentiments BIBREF43 . BIBREF44 employed the maximum entropy method to train a switch variable based on POS tags of words and used it to separate aspect and sentiment words.
BIBREF45 added user feedback to LDA as a response-variable related to each document. BIBREF46 proposed a semi-supervised model. DF-LDA BIBREF47 also represents a semi-supervised model, which allows the user to set must-link and cannot-link constraints. A must-link constraint means that two terms must be in the same topic, while a cannot-link constraint means that two terms cannot be in the same topic. BIBREF48 integrated commonsense in the calculation of word distributions in the LDA algorithm, thus enabling the shift from syntax to semantics in aspect-based sentiment analysis. BIBREF49 proposed two semi-supervised models for product aspect extraction based on the use of seeding aspects. In the category of supervised methods, BIBREF50 employed seed words to guide topic models to learn topics of specific interest to a user, while BIBREF42 and BIBREF51 employed seeding words to extract related product aspects from product reviews. On the other hand, recent approaches using deep CNNs BIBREF52 , BIBREF53 showed significant performance improvement over the state-of-the-art methods on a range of NLP tasks. BIBREF52 fed word embeddings to a CNN to solve standard NLP problems such as named entity recognition (NER), part-of-speech (POS) tagging and semantic role labeling.
Preliminaries
In this section, we briefly review the theoretical concepts necessary to comprehend the present work. We begin with a description of maximum likelihood estimation of edges in dynamic Gaussian Bayesian networks where each node is a word in a sentence. Next, we show that weights in the CNN can be learned by minimizing a global error function that corresponds to an exponential distribution over a linear combination of input sequence of word features.
Notations : Consider a Gaussian network (GN) with time delays which comprises a set of INLINEFORM0 nodes and observations gathered over INLINEFORM1 instances for all the nodes. Nodes can take real values from a multivariate distribution determined by the parent set. Let the dataset of samples be INLINEFORM2 , where INLINEFORM3 represents the sample value of the INLINEFORM4 random variable in instance INLINEFORM5 . Lastly, let INLINEFORM6 be the set of parent variables regulating variable INLINEFORM7 .
Gaussian Bayesian Networks
In tasks where one is concerned with a specific sentence within the context of the previous discourse, capturing the order of the sequences preceding the one at hand may be particularly crucial.
We take as given a sequence of sentences INLINEFORM0 , each in turn being a sequence of words so that INLINEFORM1 , where INLINEFORM2 is the length of sentence INLINEFORM3 . Thus, the probability of a word INLINEFORM4 follows the distribution : DISPLAYFORM0
A Bayesian network is a graphical model that represents a joint multivariate probability distribution for a set of random variables BIBREF54 . It is a directed acyclic graph INLINEFORM0 with a set of parameters INLINEFORM1 that represents the strengths of connections by conditional probabilities.
The BN decomposes the likelihood of node expressions into a product of conditional probabilities by assuming independence of non-descendant nodes, given their parents. DISPLAYFORM0
where INLINEFORM0 denotes the conditional probability of node expression INLINEFORM1 given its parent node expressions INLINEFORM2 , and INLINEFORM3 denotes the maximum likelihood(ML) estimate of the conditional probabilities.
Figure FIGREF11 (a) illustrates the state space of a Gaussian Bayesian network (GBN) at time instant INLINEFORM0 where each node INLINEFORM1 is a word in the sentence INLINEFORM2 . The connections represent causal dependencies over one or more time instants. The observed state vector of variable INLINEFORM3 is denoted as INLINEFORM4 and the conditional probability of variable INLINEFORM5 given variable INLINEFORM6 is INLINEFORM7 . The optimal Gaussian network INLINEFORM8 is obtained by maximizing the posterior probability of INLINEFORM9 given the data INLINEFORM10 . From Bayes theorem, the optimal Gaussian network INLINEFORM11 is given by: DISPLAYFORM0
where INLINEFORM0 is the probability of the Gaussian network and INLINEFORM1 is the likelihood of the expression data given the Gaussian network.
Given the set of conditional distributions with parameters INLINEFORM0 , the likelihood of the data is given by DISPLAYFORM0
To find the likelihood in ( EQREF14 ), and to obtain the optimal Gaussian network as in ( EQREF13 ), Gaussian BN assumes that the nodes are multivariate Gaussian. That is, expression of node INLINEFORM0 can be described with mean INLINEFORM1 and covariance matrix INLINEFORM2 of size INLINEFORM3 . The joint probability of the network can be the product of a set of conditional probability distributions given by: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 denotes the regression coefficient matrix, INLINEFORM2 is the conditional variance of INLINEFORM3 given its parent set INLINEFORM4 , INLINEFORM5 is the covariance between observations of INLINEFORM6 and the variables in INLINEFORM7 , and INLINEFORM8 is the covariance matrix of INLINEFORM9 . The acyclic condition of BN does not allow feedback among nodes, and feedback is an essential characteristic of real world GN.
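As a rough illustration of the maximum-likelihood estimation described above, the following minimal sketch (with illustrative variable names and toy data, not the chapter's implementation) fits the regression coefficients and conditional variance of a single node given its parent set by ordinary least squares and evaluates the corresponding Gaussian log-likelihood.

```python
import numpy as np

def fit_linear_gaussian(X_parents, x_child):
    """ML estimate of regression coefficients and conditional variance
    for one node of a Gaussian Bayesian network given its parents.

    X_parents: (m, p) matrix of parent observations over m instances.
    x_child:   (m,) vector of observations of the child node.
    """
    m = X_parents.shape[0]
    # Add an intercept column so the conditional mean need not pass through zero.
    A = np.hstack([np.ones((m, 1)), X_parents])
    # Least squares gives the ML regression coefficients under Gaussian noise.
    beta, _, _, _ = np.linalg.lstsq(A, x_child, rcond=None)
    residuals = x_child - A @ beta
    sigma2 = residuals @ residuals / m                  # ML conditional variance
    # Gaussian log-likelihood of the child node given its parents.
    loglik = -0.5 * m * (np.log(2 * np.pi * sigma2) + 1.0)
    return beta, sigma2, loglik

# Toy usage: 3 parent words, 100 instances (e.g. BOW frequencies).
rng = np.random.default_rng(0)
parents = rng.poisson(2.0, size=(100, 3)).astype(float)
child = parents @ np.array([0.5, -0.2, 0.1]) + rng.normal(0, 0.3, 100)
print(fit_linear_gaussian(parents, child)[0])
```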
Therefore, dynamic Bayesian networks have recently become popular in building GN with time delays mainly due to their ability to model causal interactions as well as feedback regulations BIBREF55 . A first-order dynamic BN is defined by a transition network of interactions between a pair of Gaussian networks connecting nodes at time instants INLINEFORM0 and INLINEFORM1 . In time instant INLINEFORM2 , the parents of nodes are those specified in the time instant INLINEFORM3 . Similarly, the Gaussian network of a INLINEFORM4 -order dynamic system is represented by a Gaussian network comprising INLINEFORM5 consecutive time points and INLINEFORM6 nodes, or a graph of INLINEFORM7 nodes. In practice, the sentence data is transformed to a BOW model where each sentence is a vector of frequencies for each word in the vocabulary. Figure FIGREF11 (b) illustrates the state space of a first-order dynamic GBN that models transition networks among words in sentences INLINEFORM8 and INLINEFORM9 at consecutive time points; the lines correspond to first-order edges among the words learned using BOW.
Hence, a sequence of sentences results in a time series of word frequencies. It can be seen that such a discourse model produces compelling discourse vector representations that are sensitive to the structure of the discourse and promise to capture subtle aspects of discourse comprehension, especially when coupled to further semantic data and unsupervised pre-training.
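The sketch below shows one way such a time series of word frequencies could be built from an ordered sequence of sentences; the use of scikit-learn's CountVectorizer is an assumption for illustration only.

```python
from sklearn.feature_extraction.text import CountVectorizer

# A short review treated as an ordered sequence of sentences.
review = [
    "The phone looks great.",
    "I love the screen of this phone.",
    "The battery, however, drains too fast.",
]

# Each sentence becomes a bag-of-words frequency vector over a shared vocabulary,
# so the review as a whole is a (num_sentences x vocab_size) time series.
vectorizer = CountVectorizer(lowercase=True)
bow_series = vectorizer.fit_transform(review).toarray()

print(vectorizer.get_feature_names_out())
print(bow_series)    # row t is the word-frequency vector of sentence t
```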
Convolutional Neural Networks
The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0
We then apply a max pooling operation over the feature map and take the maximum value INLINEFORM0 as the feature corresponding to this particular kernel vector. Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF23 .
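A bare-bones illustration of this convolution-plus-max-pooling step over the n-grams of a sentence (again a sketch, not the chapter's implementation) might look as follows:

```python
import numpy as np

def conv_max_pool(sentence_vecs, kernel):
    """Convolve a kernel over all n-grams of a sentence and max-pool.

    sentence_vecs: (L, d) matrix, one d-dimensional vector per word.
    kernel:        (n, d) weight kernel spanning n consecutive words.
    Returns the single max-pooled feature for this kernel.
    """
    n = kernel.shape[0]
    L = sentence_vecs.shape[0]
    feats = [np.sum(sentence_vecs[i:i + n] * kernel)   # dot product with each n-gram
             for i in range(L - n + 1)]
    return max(feats)                                  # max pooling over the feature map

rng = np.random.default_rng(1)
words = rng.normal(size=(10, 4))     # a 10-word sentence with 4-dimensional word vectors
kernel = rng.normal(size=(3, 4))     # a trigram kernel
print(conv_max_pool(words, kernel))
```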
For each word INLINEFORM0 in the vocabulary, an INLINEFORM1 dimensional vector representation is given in a look up table that is learned from the data BIBREF30 . The vector representation of a sentence is hence a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. Now, the convolution kernels are applied to word vectors instead of individual words.
We use these features to train higher layers of the CNN that can represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer DISPLAYFORM0
where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. Training a CNN becomes difficult as the number of layers increases, as the Hessian matrix of second-order derivatives often does not exist. Recently, deep learning has been used to improve the scalability of a model that has inherent parallel computation. This is because hierarchies of modules can provide a compact representation in the form of input-output pairs. Each layer tries to minimize the error between the original state of the input nodes and the state of the input nodes predicted by the hidden neurons.
This results in a downward coupling between modules. The more abstract representation at the output of a higher layer module is combined with the less abstract representation at the internal nodes from the module in the layer below. In the next section, we describe deep CNN that can have arbitrary number of layers.
Convolution Deep Belief Network
A deep belief network (DBN) is a type of deep neural network that can be viewed as a composite of simple, unsupervised models such as restricted Boltzmann machines (RBMs) where each RBMs hidden layer serves as the visible layer for the next RBM BIBREF56 . RBM is a bipartite graph comprising two layers of neurons: a visible and a hidden layer; it is restricted such that the connections among neurons in the same layer are not allowed. To compute the weights INLINEFORM0 of an RBM, we assume that the probability distribution over the input vector INLINEFORM1 is given as: DISPLAYFORM0
where INLINEFORM0 is a normalisation constant. Computing the maximum likelihood is difficult as it involves solving the normalisation constant, which is a sum of an exponential number of terms. The standard approach is to approximate the average over the distribution with an average over a sample from INLINEFORM1 , obtained by Markov chain Monte Carlo until convergence.
To train such a multi-layer system, we must compute the gradient of the total energy function INLINEFORM0 with respect to weights in all the layers. To learn these weights and maximize the global energy function, the approximate maximum likelihood contrastive divergence (CD) approach can be used. This method employs each training sample to initialize the visible layer. Next, it uses the Gibbs sampling algorithm to update the hidden layer and then reconstruct the visible layer consecutively, until convergence BIBREF57 . As an example, here we use a logistic regression model to learn the binary hidden neurons and each visible unit is assumed to be a sample from a normal distribution BIBREF58 .
The continuous state INLINEFORM0 of the hidden neuron INLINEFORM1 , with bias INLINEFORM2 , is a weighted sum over all continuous visible nodes INLINEFORM3 and is given by: DISPLAYFORM0
where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 . The binary state INLINEFORM3 of the hidden neuron can be defined by a sigmoid activation function: DISPLAYFORM0
Similarly, in the next iteration, the binary state of each visible node is reconstructed and labeled as INLINEFORM0 . Here, we determine the value to the visible node INLINEFORM1 , with bias INLINEFORM2 , as a random sample from the normal distribution where the mean is a weighted sum over all binary hidden neurons and is given by: DISPLAYFORM0
where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 . The continuous state INLINEFORM3 is a random sample from INLINEFORM4 , where INLINEFORM5 is the variance of all visible nodes. Lastly, the weights are updated as the difference between the original and reconstructed visible layer using: DISPLAYFORM0
where INLINEFORM0 is the learning rate and INLINEFORM1 is the expected frequency with which visible unit INLINEFORM2 and hidden unit INLINEFORM3 are active together when the visible vectors are sampled from the training set and the hidden units are determined by ( EQREF21 ). Finally, the energy of a DNN can be determined in the final layer using INLINEFORM4 .
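The update equations above can be summarised in a small CD-1 sketch for a Gaussian-Bernoulli RBM; the learning rate, layer sizes and initialisation here are placeholders rather than the values used in the chapter.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_hid, b_vis, rng, sigma=1.0, lr=0.01):
    """One contrastive-divergence (CD-1) step for a Gaussian-Bernoulli RBM.

    v0: (n_vis,) real-valued visible vector; W: (n_vis, n_hid) connection weights.
    """
    # Hidden activation given the data: sigmoid of a weighted sum of visible units.
    p_h0 = sigmoid(b_hid + v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)   # binary hidden states

    # Reconstruct the visible layer as a sample from a Gaussian whose mean is a
    # weighted sum over the binary hidden units.
    v1 = rng.normal(b_vis + W @ h0, sigma)

    # Hidden probabilities for the reconstruction.
    p_h1 = sigmoid(b_hid + v1 @ W)

    # Weight update: difference between data-driven and reconstruction-driven
    # visible-hidden correlations.
    W = W + lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    return W

rng = np.random.default_rng(2)
n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, size=(n_vis, n_hid))
W = cd1_update(rng.normal(size=n_vis), W, np.zeros(n_hid), np.zeros(n_vis), rng)
```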
To extend the deep belief networks to convolution deep belief network (CDBN) we simply partition the hidden layer into INLINEFORM0 groups. Each of the INLINEFORM1 groups is associated with a INLINEFORM2 filter where INLINEFORM3 is the width of the kernel and INLINEFORM4 is the number of dimensions in the word vector. Let us assume that the input layer has dimension INLINEFORM5 where INLINEFORM6 is the length of the sentence. Then the convolution operation given by ( EQREF17 ) will result in a hidden layer of INLINEFORM7 groups each of dimension INLINEFORM8 . These learned kernel weights are shared among all hidden units in a particular group. The energy function is now a sum over the energy of individual blocks given by: DISPLAYFORM0
The CNN sentence model preserves the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence BIBREF31 . However, several word dependencies may occur across sentences; hence, in this work we propose a Bayesian CNN model that uses dynamic Bayesian networks to model a sequence of sentences.
Subjectivity Detection
In this work, we integrate a higher-order GBN for sentences into the first layer of the CNN. The GBN layer of connections INLINEFORM0 is learned using maximum likelihood approach on the BOW model of the training data. The input sequence of sentences INLINEFORM1 are parsed through this layer prior to training the CNN. Only sentences or groups of sentences containing high ML motifs are then used to train the CNN. Hence, motifs are convolved with the input sentences to generate a new set of sentences for pre-training. DISPLAYFORM0
where INLINEFORM0 is the number of high ML motifs and INLINEFORM1 is the training set of sentences in a particular class.
Fig. FIGREF28 illustrates the state space of the Bayesian CNN where the input layer is pre-trained using a dynamic GBN with up to two time-point delays, shown for three sentences in a review of an iPhone. The dashed lines correspond to second-order edges among the words learned using BOW. Each hidden layer does convolution followed by pooling across the length of the sentence. To preserve the order of words we adopt kernels of increasing sizes.
Since the number of possible words in the vocabulary is very large, we consider only the top subjectivity clue words to learn the GBN layer. Lastly, in order to preserve the context of words in conceptual phrases such as `touchscreen', we consider additional nodes in the Bayesian network for phrases with subjectivity clues. Further, the word embeddings in the CNN are initialized using the log-bilinear language model (LBL), where the INLINEFORM0 -dimensional vector representation of each word INLINEFORM1 in ( EQREF10 ) is given by : DISPLAYFORM0
where INLINEFORM0 are the INLINEFORM1 co-occurrence or context matrices computed from the data.
The time series of sentences is used to generate a sub-set of sentences containing high ML motifs using ( EQREF27 ). The frequency of a sentence in the new dataset will also correspond to the corresponding number of high ML motifs in the sentence. In this way, we are able to increase the weights of the corresponding causal features among words and concepts extracted using Gaussian Bayesian networks.
The new set of sentences is used to pre-train the deep neural network prior to training with the complete dataset. Each sentence can be divided into chunks or phrases using POS taggers. The phrases have hierarchical structures and combine in distinct ways to form sentences. The INLINEFORM0 -gram kernels learned in the first layer hence correspond to a chunk in the sentence.
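A possible reading of this pre-training set construction, using hypothetical motif and sentence data, is sketched below: each sentence is repeated once per high-ML motif it contains, so its frequency in the new dataset reflects its motif count.

```python
def build_pretraining_set(sentences, high_ml_motifs):
    """Repeat each training sentence once per high-ML motif it contains,
    so causal word patterns found by the GBN are over-represented
    in the pre-training data.

    sentences:      list of token lists.
    high_ml_motifs: set of word tuples (e.g. pairs) with high ML scores.
    """
    pretraining = []
    for tokens in sentences:
        token_set = set(tokens)
        hits = sum(1 for motif in high_ml_motifs if set(motif) <= token_set)
        pretraining.extend([tokens] * hits)   # frequency equals number of motifs matched
    return pretraining

sentences = [["battery", "drains", "fast"], ["nice", "touchscreen"], ["ok", "phone"]]
motifs = {("battery", "drains"), ("nice", "touchscreen")}
print(build_pretraining_set(sentences, motifs))
```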
Aspect Extraction
In order to train the CNN for aspect extraction, instead, we used a special training algorithm suitable for sequential data, proposed by BIBREF52 . We will summarize it here, mainly following BIBREF59 . The algorithm trains the neural network by back-propagation in order to maximize the likelihood over training sentences. Consider the network parameter INLINEFORM0 . We say that INLINEFORM1 is the output score for the likelihood of an input INLINEFORM2 to have the tag INLINEFORM3 . Then, the probability to assign the label INLINEFORM4 to INLINEFORM5 is calculated as DISPLAYFORM0
Define the logadd operation as DISPLAYFORM0
then for a training example, the log-likelihood becomes DISPLAYFORM0
In aspect term extraction, the terms can be organized as chunks and are also often surrounded by opinion terms. Hence, it is important to consider the sentence structure as a whole in order to obtain additional clues. Let it be given that there are INLINEFORM0 tokens in a sentence and INLINEFORM1 is the tag sequence, while INLINEFORM2 is the network score for the INLINEFORM3 -th token having the INLINEFORM4 -th tag. We introduce INLINEFORM5 , the transition score for moving from tag INLINEFORM6 to tag INLINEFORM7 . Then, the score for the sentence INLINEFORM8 to have the tag path INLINEFORM9 is defined by: DISPLAYFORM0
This formula represents the tag path probability over all possible paths. Now, from ( EQREF32 ) we can write the log-likelihood DISPLAYFORM0
The number of tag paths has exponential growth. However, using dynamic programming techniques, one can compute in polynomial time the score for all paths that end in a given tag BIBREF52 . Let INLINEFORM0 denote all paths that end with the tag INLINEFORM1 at the token INLINEFORM2 . Then, using recursion, we obtain DISPLAYFORM0
For the sake of brevity, we shall not delve into details of the recursive procedure, which can be found in BIBREF52 . The next equation gives the log-add for all the paths to the token INLINEFORM0 : DISPLAYFORM0
Using these equations, we can maximize the likelihood of ( EQREF35 ) over all training pairs. For inference, we need to find the best tag path using the Viterbi algorithm, i.e., the tag path that maximizes the sentence score ( EQREF34 ).
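The forward (log-add) recursion and the Viterbi decoding described above can be sketched as follows; the tag and transition scores here are random stand-ins for the network outputs.

```python
import numpy as np

def logadd(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def sentence_logZ(scores, trans):
    """Log-sum over all tag paths (the normaliser of the path probability).

    scores: (T, K) network scores, scores[t, k] = score of tag k at token t.
    trans:  (K, K) transition scores, trans[i, j] = score of moving from tag i to j.
    """
    delta = scores[0].copy()                   # paths ending at each tag after token 0
    for t in range(1, len(scores)):
        delta = np.array([logadd(delta + trans[:, k]) + scores[t, k]
                          for k in range(scores.shape[1])])
    return logadd(delta)

def viterbi(scores, trans):
    """Best-scoring tag path, used at inference time."""
    T, K = scores.shape
    delta, back = scores[0].copy(), np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + trans + scores[t][None, :]   # (K, K) candidate scores
        back[t] = np.argmax(cand, axis=0)
        delta = np.max(cand, axis=0)
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(3)
scores, trans = rng.normal(size=(5, 3)), rng.normal(size=(3, 3))
print(sentence_logZ(scores, trans), viterbi(scores, trans))   # tags 0/1/2 ~ B-A/I-A/O
```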
The features of an aspect term depend on its surrounding words. Thus, we used a window of 5 words around each word in a sentence, i.e., INLINEFORM0 words. We formed the local features of that window and considered them to be features of the middle word. Then, the feature vector was fed to a CNN.
The network contained one input layer, two convolution layers, two max-pool layers, and a fully connected layer with softmax output. The first convolution layer consisted of 100 feature maps with filter size 2. The second convolution layer had 50 feature maps with filter size 3. The stride in each convolution layer is 1 as we wanted to tag each word. A max-pooling layer followed each convolution layer. The pool size we use in the max-pool layers was 2. We used regularization with dropout on the penultimate layer with a constraint on L2-norms of the weight vectors, with 30 epochs. The output of each convolution layer was computed using a non-linear function; in our case we used INLINEFORM0 .
As features, we used word embeddings trained on two different corpora. We also used some additional features and rules to boost the accuracy; see Section UID49 . The CNN produces local features around each word in a sentence and then combines these features into a global feature vector. Since the kernel size for the two convolution layers was different, the dimensionality INLINEFORM0 mentioned in Section SECREF16 was INLINEFORM1 and INLINEFORM2 , respectively. The input layer was INLINEFORM3 , where 65 was the maximum number of words in a sentence, and 300 the dimensionality of the word embeddings used, per each word.
The process was performed for each word in a sentence. Unlike the traditional maximum-likelihood learning scheme, we trained the system using back-propagation after convolving all tokens in the sentence. Namely, we stored the weights, biases, and features for each token after convolution and only back-propagated the error in order to correct them once all tokens were processed, using the training scheme explained in Section SECREF30 .
If a training instance INLINEFORM0 had INLINEFORM1 words, then we represented the input vector for that instance as INLINEFORM2 . Here, INLINEFORM3 is a INLINEFORM4 -dimensional feature vector for the word INLINEFORM5 . We found that this network architecture produced good results on both of our benchmark datasets. Adding extra layers or changing the pooling size and window size did not contribute to the accuracy much, and instead, only served to increase computational cost.
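A hedged sketch of this architecture in Keras is given below (the original toolkit is not stated in the text); the dropout rate, optimizer and plain per-token cross-entropy loss are assumptions, since the sentence-level training scheme of Section SECREF30 is not reproduced here.

```python
from tensorflow.keras import layers, models, constraints

MAX_LEN, EMB_DIM, N_TAGS = 65, 300, 3   # max sentence length, embedding size, B-A/I-A/O

model = models.Sequential([
    layers.Input(shape=(MAX_LEN, EMB_DIM)),
    layers.Conv1D(100, kernel_size=2, strides=1, activation="tanh"),   # 100 maps, filter 2
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(50, kernel_size=3, strides=1, activation="tanh"),    # 50 maps, filter 3
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dropout(0.5),                                   # dropout rate is an assumption
    layers.Dense(N_TAGS, activation="softmax",
                 kernel_constraint=constraints.MaxNorm(3)),  # constraint on L2 norms (max-norm)
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```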
In this subsection, we present the data used in our experiments.
BIBREF64 presented two different neural network models for creating word embeddings. The models were log-linear in nature, trained on large corpora. One of them is a bag-of-words based model called CBOW; it uses word context in order to obtain the embeddings. The other one is called skip-gram model; it predicts the word embeddings of surrounding words given the current word. Those authors made a dataset called word2vec publicly available. These 300-dimensional vectors were trained on a 100-billion-word corpus from Google News using the CBOW architecture.
We trained the CBOW architecture proposed by BIBREF64 on a large Amazon product review dataset developed by BIBREF65 . This dataset consists of 34,686,770 reviews (4.7 billion words) of 2,441,053 Amazon products from June 1995 to March 2013. We kept the word embeddings 300-dimensional (http://sentic.net/AmazonWE.zip). Due to the nature of the text used to train this model, this includes opinionated/affective information, which is not present in ordinary texts such as the Google News corpus.
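Training such CBOW embeddings could look roughly as follows with gensim (an assumed toolkit; the corpus below is a two-sentence placeholder rather than the 4.7-billion-word review collection):

```python
from gensim.models import Word2Vec

# `review_sentences` would be an iterable of tokenised Amazon review sentences.
review_sentences = [["great", "battery", "life"], ["screen", "is", "too", "dim"]]

model = Word2Vec(
    sentences=review_sentences,
    vector_size=300,   # 300-dimensional embeddings, as in the text
    sg=0,              # sg=0 selects the CBOW architecture
    window=5,
    min_count=1,       # raise this for a corpus of billions of words
    workers=4,
)
model.save("amazon_cbow_300d.model")
print(model.wv["battery"].shape)   # (300,)
```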
For training and evaluation of the proposed approach, we used two corpora:
Aspect-based sentiment analysis dataset developed by BIBREF66 ; and
SemEval 2014 dataset. The dataset consists of training and test sets from two domains, Laptop and Restaurant; see Table TABREF52 .
The annotations in both corpora were encoded according to IOB2, a widely used coding scheme for representing sequences. In this encoding, the first word of each chunk starts with a “B-Type” tag, “I-Type” is the continuation of the chunk and “O” is used to tag a word which is out of the chunk. In our case, we are interested to determine whether a word or chunk is an aspect, so we only have “B–A”, “I–A” and “O” tags for the words.
Here is an example of IOB2 tags:
also/O excellent/O operating/B-A system/I-A ,/O size/B-A and/O weight/B-A for/O optimal/O mobility/B-A excellent/O durability/B-A of/O the/O battery/B-A the/O functions/O provided/O by/O the/O trackpad/B-A is/O unmatched/O by/O any/O other/O brand/O
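A small helper illustrating how aspect spans map to IOB2 tags (token-index spans are assumed as the input format for this sketch):

```python
def to_iob2(tokens, aspect_spans):
    """Tag tokens with IOB2 labels, given aspect spans as (start, end) token
    indices with `end` exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in aspect_spans:
        tags[start] = "B-A"                  # first token of the chunk
        for i in range(start + 1, end):
            tags[i] = "I-A"                  # continuation of the chunk
    return list(zip(tokens, tags))

tokens = ["also", "excellent", "operating", "system", ",", "size"]
print(to_iob2(tokens, [(2, 4), (5, 6)]))
# [('also', 'O'), ('excellent', 'O'), ('operating', 'B-A'),
#  ('system', 'I-A'), (',', 'O'), ('size', 'B-A')]
```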
In this section, we present the features, the representation of the text, and linguistic rules used in our experiments.
We used the following the features:
Word Embeddings We used the word embeddings described earlier as features for the network. This way, each word was encoded as 300-dimensional vector, which was fed to the network.
Part of speech tags Most of the aspect terms are either nouns or noun chunks. This justifies the importance of POS features. We used the POS tag of the word as its additional feature. We used 6 basic parts of speech (noun, verb, adjective, adverb, preposition, conjunction) encoded as a 6-dimensional binary vector. We used the Stanford Tagger as a POS tagger.
These two feature vectors were concatenated and fed to the CNN.
So, for each word, the final feature vector is 306-dimensional.
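The construction of this 306-dimensional per-word feature vector can be sketched as follows (tag names and the embedding are placeholders):

```python
import numpy as np

POS_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PREP", "CONJ"]   # the 6 basic classes

def word_features(word_embedding, pos_tag):
    """Concatenate a 300-d word embedding with a 6-d one-hot POS vector,
    giving the 306-dimensional per-word feature described above."""
    pos_vec = np.zeros(len(POS_TAGS))
    if pos_tag in POS_TAGS:
        pos_vec[POS_TAGS.index(pos_tag)] = 1.0
    return np.concatenate([word_embedding, pos_vec])

emb = np.random.default_rng(4).normal(size=300)   # stand-in for a real embedding
print(word_features(emb, "NOUN").shape)           # (306,)
```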
In some of our experiments, we used a set of linguistic patterns (LPs) derived from sentic patterns (LP) BIBREF11 , a linguistic framework based on SenticNet BIBREF22 . SenticNet is a concept-level knowledge base for sentiment analysis built by means of sentic computing BIBREF67 , a multi-disciplinary approach to natural language processing and understanding at the crossroads between affective computing, information extraction, and commonsense reasoning, which exploits both computer and human sciences to better interpret and process social information on the Web. In particular, we used the following linguistic rules:
Let a noun h be a subject of a word t, which has an adverbial or adjective modifier present in a large sentiment lexicon, SenticNet. Then mark h as an aspect.
Except when the sentence has an auxiliary verb, such as is, was, would, should, could, etc., we apply:
If the verb t is modified by an adjective or adverb or is in adverbial clause modifier relation with another token, then mark h as an aspect. E.g., in “The battery lasts little”,
battery is the subject of lasts, which is modified by an adjective modifier little, so battery is marked as an aspect.
If t has a direct object, a noun n, not found in SenticNet, then mark n an aspect, as, e.g., in “I like the lens of this camera”.
If a noun h is a complement of a copular verb, then mark h as an explicit aspect. E.g., in “The camera is nice”, camera is marked as an aspect.
If a term marked as an aspect by the CNN or the other rules is in a noun-noun compound relationship with another word, then instead form one aspect term composed of both of them. E.g., if in “battery life”, “battery” or “life” is marked as an aspect, then the whole expression is marked as an aspect.
The above rules 1–4 improve recall by discovering more aspect terms. However, to improve precision, we apply some heuristics: e.g., we remove stop-words such as of, the, a, etc., even if they were marked as aspect terms by the CNN or the other rules.
We used the Stanford parser to determine syntactic relations in the sentences.
We combined LPs with the CNN as follows: both LPs and CNN-based classifier are run on the text; then all terms marked by any of the two classifiers are reported as aspect terms, except for those unmarked by the last rule.
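As an illustration only, Rule 1 and the final union with the CNN output could be approximated as below; spaCy is used here as a stand-in for the Stanford parser actually employed, and the lexicon is a three-word placeholder for SenticNet.

```python
import spacy

nlp = spacy.load("en_core_web_sm")                  # requires the small English model
senticnet_words = {"little", "nice", "superb"}      # illustrative lexicon entries only

def rule1_aspects(sentence):
    aspects = set()
    for tok in nlp(sentence):
        # h: a noun that is the subject of some word t ...
        if tok.dep_ == "nsubj" and tok.pos_ in ("NOUN", "PROPN"):
            head = tok.head
            # ... where t has an adjectival/adverbial modifier found in the lexicon.
            if any(child.dep_ in ("advmod", "amod")
                   and child.lemma_.lower() in senticnet_words
                   for child in head.children):
                aspects.add(tok.text.lower())
    return aspects

cnn_aspects = {"screen"}                            # e.g. terms tagged B-A/I-A by the CNN
lp_aspects = rule1_aspects("The battery lasts little")
print(cnn_aspects | lp_aspects)                     # union of the two classifiers' terms
```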
Table TABREF63 shows the accuracy of our aspect term extraction framework in the laptop and restaurant domains. The framework gave better accuracy on restaurant domain reviews, because of the lower variety of available aspect terms than in the laptop domain. However, in both cases recall was lower than precision.
Table TABREF63 shows improvement in terms of both precision and recall when the POS feature is used. Pre-trained word embeddings performed better than randomized features (each word's vector initialized randomly); see Table TABREF62 . Amazon embeddings performed better than Google word2vec embeddings. This supports our claim that the former contains opinion-specific information which helped it to outperform the accuracy of Google embeddings trained on more formal text—the Google news corpus. Because of this, in the sequel we only show the performance using Amazon embeddings, which we denote simply as WE (word embeddings).
In both domains, CNN suffered from low recall, i.e., it missed some valid aspect terms. Linguistic analysis of the syntactic structure of the sentences substantially helped to overcome some drawbacks of machine learning-based analysis. Our experiments showed good improvement in both precision and recall when LPs were used together with CNN; see Table TABREF64 .
As to the LPs, the removal of stop-words, Rule 1, and Rule 3 were most beneficial. Figure FIGREF66 shows a visualization of Table TABREF64 . Table TABREF65 and Figure FIGREF61 show the comparison between the proposed method and the state of the art on the SemEval dataset. It is noted that about 36.55% of the aspect terms in the laptop domain corpus are phrases, while about 24.56% of those in the restaurant corpus are phrases. The performance of detecting aspect phrases is lower than that of single-word aspect tokens in both domains. This shows that sequential tagging is indeed a tough task. Lack of sufficient training data for aspect phrases is also one of the reasons for the lower accuracy in this case.
In particular, we obtained 79.20% and 83.55% F-score for detecting aspect phrases in the laptop and restaurant domains, respectively. We observed some cases where only one term of an aspect phrase was detected as an aspect term. In those cases, Rule 4 of the LPs helped to correctly detect the full aspect phrases. We also carried out experiments on the aspect dataset originally developed by BIBREF66 . This is to date the largest comprehensive aspect-based sentiment analysis dataset. The best accuracy on this dataset was obtained when word embedding features were used together with the POS features. This shows that while the word embedding features are most useful, the POS feature also plays a major role in aspect extraction.
As on the SemEval dataset, LPs together with the CNN increased the overall accuracy. However, LPs performed much better on this dataset than on the SemEval dataset. This supports the observation made previously BIBREF66 that on this dataset LPs are more useful. One of the possible reasons for this is that most of the sentences in this dataset are grammatically correct and contain only one aspect term. Here we combined LPs and a CNN to achieve even better results than the approach of BIBREF66 based only on LPs. Our experimental results showed that this ensemble algorithm (CNN+LP) can better understand the semantics of the text than BIBREF66 's pure LP-based algorithm, and thus extracts more salient aspect terms. Table TABREF69 and Figure FIGREF68 show the performance and comparisons of different frameworks.
Figure FIGREF70 compares the proposed method with the state of the art. We believe that there are two key reasons for our framework to outperform state-of-the-art approaches. First, a deep CNN, which is non-linear in nature, better fits the data than linear models such as CRF. Second, the pre-trained word embedding features help our framework to outperform state-of-the-art methods that do not use word embeddings. The main advantage of our framework is that it does not need any feature engineering. This minimizes development cost and time.
Subjectivity Detection
We use the MPQA corpus BIBREF20 , a collection of 535 English news articles from a variety of sources manually annotated with subjectivity flag. From the total of 9,700 sentences in this corpus, 55 INLINEFORM0 of the sentences are labeled as subjective while the rest are objective. We also compare with the Movie Review (MR) benchmark dataset BIBREF28 , that contains 5000 subjective movie review snippets from Rotten Tomatoes website and another 5000 objective sentences from plot summaries available from the Internet Movies Database. All sentences are at least ten words long and drawn from reviews or plot summaries of movies released post 2001.
The data pre-processing included removing top 50 stop words and punctuation marks from the sentences. Next, we used a POS tagger to determine the part-of-speech for each word in a sentence. Subjectivity clues dataset BIBREF19 contains a list of over 8,000 clues identified manually as well as automatically using both annotated and unannotated data. Each clue is a word and the corresponding part of speech.
The frequency of each clue was computed in both subjective and objective sentences of the MPQA corpus. Here we consider the top 50 clue words with highest frequency of occurrence in the subjective sentences. We also extracted 25 top concepts containing the top clue words using the method described in BIBREF11 . The CNN is collectively pre-trained with both subjective and objective sentences that contain high ML word and concept motifs. The word vectors are initialized using the LBL model and a context window of size 5 and 30 features. Each sentence is wrapped to a window of 50 words to reduce the number of parameters and hence the over-fitting of the model. A CNN with three hidden layers of 100 neurons and kernels of size INLINEFORM0 is used. The output layer corresponds to two neurons for each class of sentiments.
We used 10 fold cross validation to determine the accuracy of classifying new sentences using the trained CNN classifier. A comparison is done with classifying the time series data using baseline classifiers such as Naive Bayes SVM (NBSVM) BIBREF60 , Multichannel CNN (CNN-MC) BIBREF61 , Subjectivity Word Sense Disambiguation (SWSD) BIBREF62 and Unsupervised-WSD (UWSD) BIBREF63 . Table TABREF41 shows that BCDBN outperforms previous methods by INLINEFORM0 in accuracy on both datasets. Almost INLINEFORM1 improvement is observed over NBSVM on the movie review dataset. In addition, we only consider word vectors of 30 features instead of the 300 features used by CNN-MC and hence are 10 times faster.
Key Applications
Subjectivity detection can prevent the sentiment classifier from considering irrelevant or potentially misleading text. This is particularly useful in multi-perspective question answering summarization systems that need to summarize different opinions and perspectives and present multiple answers to the user based on opinions derived from different sources. It is also useful to analysts in government, commercial and political domains who need to determine the response of the people to different crisis events. After filtering of subjective sentences, aspect mining can be used to provide clearer visibility into the emotions of people by connecting different polarities to the corresponding target attribute.
Conclusion
In this chapter, we tackled the two basic tasks of sentiment analysis in social media: subjectivity detection and aspect extraction. We used an ensemble of deep learning and linguistics to collect opinionated information and, hence, perform fine-grained (aspect-based) sentiment analysis. In particular, we proposed a Bayesian deep convolutional belief network to classify a sequence of sentences as either subjective or objective and used a convolutional neural network for aspect extraction. Coupled with some linguistic rules, this ensemble approach gave a significant improvement in performance over state-of-the-art techniques and paved the way for a more multifaceted (i.e., covering more NLP subtasks) and multidisciplinary (i.e., integrating techniques from linguistics and other disciplines) approach to the complex problem of sentiment analysis.
Future Directions
In the future we will try to visualize the hierarchies of features learned via deep learning. We can also consider fusion with other modalities such as YouTube videos.
Acknowledgement
This work was funded by Complexity Institute, Nanyang Technological University.
Cross References
Sentiment Quantification of User-Generated Content, 110170 Semantic Sentiment Analysis of Twitter Data, 110167 Twitter Microblog Sentiment Analysis, 265 | apply an ensemble of deep learning and linguistics t |
94e0cf44345800ef46a8c7d52902f074a1139e1a | 94e0cf44345800ef46a8c7d52902f074a1139e1a_0 | Q: What web and user-generated NER datasets are used for the analysis?
Text: Introduction
Named entity recognition and classification (NERC, short NER), the task of recognising and assigning a class to mentions of proper names (named entities, NEs) in text, has attracted many years of research BIBREF0 , BIBREF1 , analyses BIBREF2 , starting from the first MUC challenge in 1995 BIBREF3 . Recognising entities is key to many applications, including text summarisation BIBREF4 , search BIBREF5 , the semantic web BIBREF6 , topic modelling BIBREF7 , and machine translation BIBREF8 , BIBREF9 .
As NER is being applied to increasingly diverse and challenging text genres BIBREF10 , BIBREF11 , BIBREF12 , this has led to a noisier, sparser feature space, which in turn requires regularisation BIBREF13 and the avoidance of overfitting. This has been the case even for large corpora all of the same genre and with the same entity classification scheme, such as ACE BIBREF14 . Recall, in particular, has been a persistent problem, as named entities often seem to have unusual surface forms, e.g. unusual character sequences for the given language (e.g. Szeged in an English-language document) or words that individually are typically not NEs, unless they are combined together (e.g. the White House).
Indeed, the move from ACE and MUC to broader kinds of corpora has presented existing NER systems and resources with a great deal of difficulty BIBREF15 , which some researchers have tried to address through domain adaptation, specifically with entity recognition in mind BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, more recent performance comparisons of NER methods over different corpora showed that older tools tend to simply fail to adapt, even when given a fair amount of in-domain data and resources BIBREF21 , BIBREF11 . Simultaneously, the value of NER in non-newswire data BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 has rocketed: for example, social media now provides us with a sample of all human discourse, unmolested by editors, publishing guidelines and the like, and all in digital format – leading to, for example, whole new fields of research opening in computational social science BIBREF26 , BIBREF27 , BIBREF28 .
The prevailing assumption has been that this lower NER performance is due to domain differences arising from using newswire (NW) as training data, as well as from the irregular, noisy nature of new media (e.g. BIBREF21 ). Existing studies BIBREF11 further suggest that named entity diversity, discrepancy between named entities in the training set and the test set (entity drift over time in particular), and diverse context, are the likely reasons behind the significantly lower NER performance on social media corpora, as compared to newswire.
No prior studies, however, have investigated these hypotheses quantitatively. For example, it is not yet established whether this performance drop is really due to a higher proportion of unseen NEs in social media, or whether it is instead due to NEs being situated in different kinds of linguistic context.
Accordingly, the contributions of this paper lie in investigating the following open research questions:
In particular, the paper carries out a comparative analysis of the performance of several different approaches to statistical NER over multiple text genres, with varying NE and lexical diversity. In line with prior analyses of NER performance BIBREF2 , BIBREF11 , we carry out corpus analysis and briefly introduce the NER methods used for experimentation. Unlike prior efforts, however, our main objectives are to uncover the impact of NE diversity and context diversity on performance (measured primarily by F1 score), and also to study the relationship between OOV NEs and features and F1. See Section "Experiments" for details.
To ensure representativeness and comprehensiveness, our experimental findings are based on key benchmark NER corpora spanning multiple genres, time periods, and corpus annotation methodologies and guidelines. As detailed in Section "Datasets" , the corpora studied are OntoNotes BIBREF29 , ACE BIBREF30 , MUC 7 BIBREF31 , the Ritter NER corpus BIBREF21 , the MSM 2013 corpus BIBREF32 , and the UMBC Twitter corpus BIBREF33 . To eliminate potential bias from the choice of statistical NER approach, experiments are carried out with three differently-principled NER approaches, namely Stanford NER BIBREF34 , SENNA BIBREF35 and CRFSuite BIBREF36 (see Section "NER Models and Features" for details).
Datasets
Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.
A note is required about terminology. This paper refers to text genre and also text domain. These are two dimensions by which a document or corpus can be described. Genre here accounts for the general characteristics of the text, measurable with things like register, tone, reading ease, sentence length, vocabulary and so on. Domain describes the dominant subject matter of text, which might give specialised vocabulary or specific, unusual word senses. For example, “broadcast news" is a genre, describing the manner of use of language, whereas “financial text" or “popular culture" are domains, describing the topic. One notable exception to this terminology is social media, which tends to be a blend of myriad domains and genres, with huge variation in both these dimensions BIBREF38 , BIBREF39 ; for simplicity, we also refer to this as a genre here.
In chronological order, the first corpus included here is MUC 7, which is the last of the MUC challenges BIBREF31 . This is an important corpus, since the Message Understanding Conference (MUC) was the first one to introduce the NER task in 1995 BIBREF3 , with focus on recognising persons, locations and organisations in newswire text.
A subsequent evaluation campaign was the CoNLL 2003 NER shared task BIBREF40 , which created gold standard data for newswire in Spanish, Dutch, English and German. The corpus of this evaluation effort is now one of the most popular gold standards for NER, with new NER approaches and methods often reporting performance on that.
Later evaluation campaigns began addressing NER for genres other than newswire, specifically ACE BIBREF30 and OntoNotes BIBREF29 . Both of those contain subcorpora in several genres, namely newswire, broadcast news, broadcast conversation, weblogs, and conversational telephone speech. ACE, in addition, contains a subcorpus with usenet newsgroups. Like CoNLL 2003, the OntoNotes corpus is also a popular benchmark dataset for NER. The languages covered are English, Arabic and Chinese. A further difference between the ACE and OntoNotes corpora on one hand, and CoNLL and MUC on the other, is that they contain annotations not only for NER, but also for other tasks such as coreference resolution, relation and event extraction and word sense disambiguation. In this paper, however, we restrict ourselves purely to the English NER annotations, for consistency across datasets. The ACE corpus contains HEAD as well as EXTENT annotations for NE spans. For our experiments we use the EXTENT tags.
With the emergence of social media, studying NER performance on this genre gained momentum. So far, there have been no big evaluation efforts, such as ACE and OntoNotes, resulting in substantial amounts of gold standard data. Instead, benchmark corpora were created as part of smaller challenges or individual projects. The first such corpus is the UMBC corpus for Twitter NER BIBREF33 , where researchers used crowdsourcing to obtain annotations for persons, locations and organisations. A further Twitter NER corpus was created by BIBREF21 , which, in contrast to other corpora, contains more fine-grained classes defined by the Freebase schema BIBREF41 . Next, the Making Sense of Microposts initiative BIBREF32 (MSM) provides single annotated data for named entity recognition on Twitter for persons, locations, organisations and miscellaneous. MSM initiatives from 2014 onwards in addition feature a named entity linking task, but since we only focus on NER here, we use the 2013 corpus.
These corpora are diverse not only in terms of genres and time periods covered, but also in terms of NE classes and their definitions. In particular, the ACE and OntoNotes corpora try to model entity metonymy by introducing facilities and geo-political entities (GPEs). Since the rest of the benchmark datasets do not make this distinction, metonymous entities are mapped to a more common entity class (see below).
In order to ensure consistency across corpora, only Person (PER), Location (LOC) and Organisation (ORG) are used in our experiments, and other NE classes are mapped to O (no NE). For the Ritter corpus, the 10 entity classes are collapsed to three as in BIBREF21 . For the ACE and OntoNotes corpora, the following mapping is used: PERSON $\rightarrow $ PER; LOCATION, FACILITY, GPE $\rightarrow $ LOC; ORGANIZATION $\rightarrow $ ORG; all other classes $\rightarrow $ O.
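The mapping just described amounts to a simple lookup, e.g.:

```python
# The class mapping used to harmonise ACE / OntoNotes labels with the
# three classes studied here.
CLASS_MAP = {
    "PERSON": "PER",
    "LOCATION": "LOC",
    "FACILITY": "LOC",
    "GPE": "LOC",
    "ORGANIZATION": "ORG",
}

def map_entity_class(label):
    return CLASS_MAP.get(label, "O")   # all other classes -> O (not an NE)

print([map_entity_class(c) for c in ["PERSON", "GPE", "WORK_OF_ART"]])
# ['PER', 'LOC', 'O']
```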
Tokens are annotated with BIO sequence tags, indicating that they are the beginning (B) or inside (I) of NE mentions, or outside of NE mentions (O). For the Ritter and ACE 2005 corpora, separate training and test corpora are not publicly available, so we randomly sample 1/3 for testing and use the rest for training. The resulting training and testing data sizes measured in number of NEs are listed in Table 2 . Separate models are then trained on the training parts of each corpus and evaluated on the development (if available) and test parts of the same corpus. If development parts are available, as they are for CoNLL (CoNLL Test A) and MUC (MUC 7 Dev), they are not merged with the training corpora for testing, as it was permitted to do in the context of those evaluation challenges.
[t]
P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size
Table 1 shows which genres the different corpora belong to, the number of NEs and the proportions of NE classes per corpus. Sizes of NER corpora have increased over time, from MUC to OntoNotes.
Further, the class distribution varies between corpora: while the CoNLL corpus is very balanced and contains about equal numbers of PER, LOC and ORG NEs, other corpora are not. The least balanced corpus is the MSM 2013 Test corpus, which contains 98 LOC NEs, but 1110 PER NEs. This makes it difficult to compare NER performance here, since performance partly depends on training data size. Since comparing NER performance as such is not the goal of this paper, we will illustrate the impact of training data size by using learning curves in the next section; illustrate NERC performance on trained corpora normalised by size in Table UID9 ; and then only use the original training data size for subsequent experiments.
In order to compare corpus diversity across genres, we measure NE and token/type diversity (following e.g. BIBREF2 ). Note that types are the unique tokens, so the ratio can be understood as ratio of total tokens to unique ones. Table 4 shows the ratios between the number of NEs and the number of unique NEs per corpus, while Table 5 reports the token/type ratios. The lower those ratios are, the more diverse a corpus is. While token/type ratios also include tokens which are NEs, they are a good measure of broader linguistic diversity.
Aside from these metrics, there are other factors which contribute to corpus diversity, including how big a corpus is and how well sampled it is, e.g. if a corpus is only about one story, it should not be surprising to see a high token/type ratio. Therefore, by experimenting on multiple corpora, from different genres and created through different methodologies, we aim to encompass these other aspects of corpus diversity.
Since the original NE and token/type ratios do not account for corpus size, Tables 5 and 4 present also the normalised ratios. For those, a number of tokens equivalent to those in the corpus, e.g. 7037 for UMBC (Table 5 ) or, respectively, a number of NEs equivalent to those in the corpus (506 for UMBC) are selected (Table 4 ).
An easy choice of sampling method would be to sample tokens and NEs randomly. However, this would not reflect the composition of corpora appropriately. Corpora consist of several documents, tweets or blog entries, which are likely to repeat the words or NEs since they are about one story. The difference between bigger and smaller corpora is then that bigger corpora consist of more of those documents, tweets, blog entries, interviews, etc. Therefore, when we downsample, we take the first $n$ tokens for the token/type ratios or the first $n$ NEs for the NEs/Unique NEs ratios.
Looking at the normalised diversity metrics, the lowest NE/Unique NE ratios $<= 1.5$ (in bold, Table 4 ) are observed on the Twitter and CoNLL Test corpora. Seeing this for Twitter is not surprising since one would expect noise in social media text (e.g. spelling variations or mistakes) to also have an impact on how often the same NEs are seen. Observing this in the latter, though, is less intuitive and suggests that the CoNLL corpora are well balanced in terms of stories. Low NE/Unique ratios ( $<= 1.7$ ) can also be observed for ACE WL, ACE UN and OntoNotes TC. Similar to social media text, content from weblogs, usenet dicussions and telephone conversations also contains a larger amount of noise compared to the traditionally-studied newswire genre, so this is not a surprising result. Corpora bearing high NE/Unique NE ratios ( $> 2.5$ ) are ACE CTS, OntoNotes MZ and OntoNotes BN. These results are also not surprising. The telephone conversations in ACE CTS are all about the same story, and newswire and broadcast news tend to contain longer stories (reducing variety in any fixed-size set) and are more regular due to editing.
The token/type ratios reflect similar trends (Table 5 ). Low token/type ratios $<= 2.8$ (in bold) are observed for the Twitter corpora (Ritter and UMBC), as well as for the CoNLL Test corpus. Token/type ratios are also low ( $<= 3.2$ ) for CoNLL Train and ACE WL. Interestingly, ACE UN and MSM Train and Test do not have low token/type ratios although they have low NE/Unique ratios. That is, many diverse persons, locations and organisations are mentioned in those corpora, but similar context vocabulary is used. Token/type ratios are high ( $>= 4.4$ ) for MUC7 Dev, ACE BC, ACE CTS, ACE UN and OntoNotes TC. Telephone conversations (TC) having high token/type ratios can be attributed to the high amount filler words (e.g. “uh”, “you know”). NE corpora are generally expected to have regular language use – for ACE, at least, in this instance.
Furthermore, it is worth pointing out that, especially for the larger corpora (e.g. OntoNotes NW), size normalisation makes a big difference. The normalised NE/Unique NE ratios drop by almost a half compared to the un-normalised ratios, and normalised Token/Type ratios drop by up to 85%. This strengthens our argument for size normalisation and also poses the question of low NERC performance for diverse genres being mostly due to the lack of large training corpora. This is examined in Section "RQ2: NER performance in Different Genres" .
Lastly, Table 6 reports tag density (percentage of tokens tagged as part of a NE), which is another useful metric of corpus diversity that can be interpreted as the information density of a corpus. What can be observed here is that the NW corpora have the highest tag density and generally tend to have higher tag density than corpora of other genres; that is, newswire bears a lot of entities. Corpora with especially low tag density $<= 0.06$ (in bold) are the TC corpora, Ritter, OntoNotes WB, ACE UN, ACE BN and ACE BC. As already mentioned, conversational corpora, to which ACE BC also belong, tend to have many filler words, thus it is not surprising that they have a low tag density. There are only minor differences between the tag density and the normalised tag density, since corpus size as such does not impact tag density.
NER Models and Features
To avoid system-specific bias in our experiments, three widely-used supervised statistical approaches to NER are included: Stanford NER, SENNA, and CRFSuite. These systems each have contrasting notable attributes.
Stanford NER BIBREF34 is the most popular of the three, deployed widely in both research and commerce. The system has been developed in terms of both generalising the underlying technology and also specific additions for certain languages. The majority of openly-available additions to Stanford NER, in terms of models, gazetteers, prefix/suffix handling and so on, have been created for newswire-style text. Named entity recognition and classification is modelled as a sequence labelling task with first-order conditional random fields (CRFs) BIBREF43 .
SENNA BIBREF35 is a more recent system for named entity extraction and other NLP tasks. Using word representations and deep learning with deep convolutional neural networks, the general principle for SENNA is to avoid task-specific engineering while also doing well on multiple benchmarks. The approach taken to fit these desiderata is to use representations induced from large unlabelled datasets, including LM2 (introduced in the paper itself) and Brown clusters BIBREF44 , BIBREF45 . The outcome is a flexible system that is readily adaptable, given training data. Although the system is more flexible in general, it relies on learning language models from unlabelled data, which might take a long time to gather and retrain. For the setup in BIBREF35 language models are trained for seven weeks on the English Wikipedia, Reuters RCV1 BIBREF46 and parts of the Wall Street Journal, and results are reported over the CoNLL 2003 NER dataset. Reuters RCV1 is chosen as unlabelled data because the English CoNLL 2003 corpus is created from the Reuters RCV1 corpus. For this paper, we use the original language models distributed with SENNA and evaluate SENNA with the DeepNL framework BIBREF47 . As such, it is to some degree also biased towards the CoNLL 2003 benchmark data.
Finally, we use the classical NER approach from CRFsuite BIBREF36 , which also uses first-order CRFs. This frames NER as a structured sequence prediction task, using features derived directly from the training text. Unlike the other systems, no external knowledge (e.g. gazetteers and unsupervised representations) are used. This provides a strong basic supervised system, and – unlike Stanford NER and SENNA – has not been tuned for any particular domain, giving potential to reveal more challenging domains without any intrinsic bias.
We use the feature extractors natively distributed with the NER frameworks. For Stanford NER we use the feature set “chris2009” without distributional similarity, which has been tuned for the CoNLL 2003 data. This feature was tuned to handle OOV words through word shape, i.e. capitalisation of constituent characters. The goal is to reduce feature sparsity – the basic problem behind OOV named entities – by reducing the complexity of word shapes for long words, while retaining word shape resolution for shorter words. In addition, word clusters, neighbouring n-grams, label sequences and quasi-Newton minima search are included. SENNA uses word embedding features and gazetteer features; for the training configuration see https://github.com/attardi/deepnl#benchmarks. Finally, for CRFSuite, we use the provided feature extractor without POS or chunking features, which leaves unigram and bigram word features of the mention and in a window of 2 to the left and the right of the mention, character shape, prefixes and suffixes of tokens.
These systems are compared against a simple surface form memorisation tagger. The memorisation baseline picks the most frequent NE label for each token sequence as observed in the training corpus. There are two kinds of ambiguity: one is overlapping sequences, e.g. if both “New York City” and “New York” are memorised as a location. In that case the longest-matching sequence is labelled with the corresponding NE class. The second, class ambiguity, occurs when the same textual label refers to different NE classes, e.g. “Google” could either refer to the name of a company, in which case it would be labelled as ORG, or to the company's search engine, which would be labelled as O (no NE).
RQ1: NER performance with Different Approaches
[t]
P, R and F1 of NERC with different models trained on original corpora
[t]
F1 per NE type with different models trained on original corpora
Our first research question is how NERC performance differs for corpora between approaches. In order to answer this, Precision (P), Recall (R) and F1 metrics are reported on size-normalised corpora (Table UID9 ) and original corpora (Tables "RQ1: NER performance with Different Approaches" and "RQ1: NER performance with Different Approaches" ). The reason for size normalisation is to make results comparable across corpora. For size normalisation, the training corpora are downsampled to include the same number of NEs as the smallest corpus, UMBC. For that, sentences are selected from the beginning of the train part of the corpora so that they include the same number of NEs as UMBC. Other ways of downsampling the corpora would be to select the first $n$ sentences or the first $n$ tokens, where $n$ is the number of sentences in the smallest corpus. The reason that the number of NEs, which represent the number of positive training examples, is chosen for downsampling the corpora is that the number of positive training examples have a much bigger impact on learning than the number of negative training examples. For instance, BIBREF48 , among others, study topic classification performance for small corpora and sample from the Reuters corpus. They find that adding more negative training data gives little to no improvement, whereas adding positive examples drastically improves performance.
Table UID9 shows results with size normalised precision (P), recall (R), and F1-Score (F1). The five lowest P, R and F1 values per method (CRFSuite, Stanford NER, SENNA) are in bold to highlight underperformers. Results for all corpora are summed with macro average.
Comparing the different methods, the highest F1 results are achieved with SENNA, followed by Stanford NER and CRFSuite. SENNA has a balanced P and R, which can be explained by the use of word embeddings as features, which help with the unseen word problem. For Stanford NER as well as CRFSuite, which do not make use of embeddings, recall is about half of precision. These findings are in line with other work reporting the usefulness of word embeddings and deep learning for a variety of NLP tasks and domains BIBREF49 , BIBREF50 , BIBREF51 . With respect to individual corpora, the ones where SENNA outperforms other methods by a large margin ( $>=$ 13 points in F1) are CoNLL Test A, ACE CTS and OntoNotes TC. The first success can be attributed to being from the same the domain SENNA was originally tuned for. The second is more unexpected and could be due to those corpora containing a disproportional amount of PER and LOC NEs (which are easier to tag correctly) compared to ORG NEs, as can be seen in Table "RQ1: NER performance with Different Approaches" , where F1 of NERC methods is reported on the original training data.
Our analysis of CRFSuite here is that it is less tuned for NW corpora and might therefore have a more balanced performance across genres does not hold. Results with CRFSuite for every corpus are worse than the results for that corpus with Stanford NER, which is also CRF-based.
To summarise, our findings are:
[noitemsep]
F1 is highest with SENNA, followed by Stanford NER and CRFSuite
SENNA outperforms other methods by a large margin (e.g. $>=$ 13 points in F1) for CoNLL Test A, ACE CTS and OntoNotes TC
Our hypothesis that CRFSuite is less tuned for NW corpora and will therefore have a more balanced performance across genres does not hold, as results for CRFSuite for every corpus are worse than with Stanford NER
RQ2: NER performance in Different Genres
Our second research question is whether existing NER approaches generalise well over corpora in different genres. To do this we study again Precision (P), Recall (R) and F1 metrics on size-normalised corpora (Table UID9 ), on original corpora (Tables "RQ1: NER performance with Different Approaches" and "RQ1: NER performance with Different Approaches" ), and we further test performance per genre in a separate table (Table 3 ).
F1 scores over size-normalised corpora vary widely (Table UID9 ). For example, the SENNA scores range from 9.35% F1 (ACE UN) to 71.48% (CoNLL Test A). Lowest results are consistently observed for the ACE subcorpora, UMBC, and OntoNotes BC and WB. The ACE corpora are large and so may be more prone to non-uniformities emerging during downsampling; they also have special rules for some kinds of organisation which can skew results (as described in Section UID9 ). The highest results are on the CoNLL Test A corpus, OntoNotes BN and MUC 7 Dev. This moderately supports our hypothesis that NER systems perform better on NW than on other genres, probably due to extra fitting from many researchers using them as benchmarks for tuning their approaches. Looking at the Twitter (TWI) corpora present the most challenge due to increased diversity, the trends are unstable. Although results for UMBC are among the lowest, results for MSM 2013 and Ritter are in the same range or even higher than those on NW datasets. This begs the question whether low results for Twitter corpora reported previously were due to the lack of sufficient in-genre training data.
Comparing results on normalised to non-normalised data, Twitter results are lower than those for most OntoNotes corpora and CoNLL test corpora, mostly due to low recall. Other difficult corpora having low performance are ACE UN and WEB corpora. We further explicitly examine results on size normalised corpora grouped by corpus type, shown in Table 3 . It becomes clear that, on average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are harder. This confirms our hypothesis that social media and Web corpora are challenging for NERC.
The CoNLL results, on the other hand, are the highest across all corpora irrespective of the NERC method. What is very interesting to see is that they are much higher than the results on the biggest training corpus, OntoNotes NW. For instance, SENNA has an F1 of 78.04 on OntoNotes, compared to an F1 of 92.39 and 86.44 for CoNLL Test A and Test B respectively. So even though OntoNotes NW is more than twice the size of CoNLL in terms of NEs (see Table 4 ), NERC performance is much higher on CoNLL. NERC performance with respect to training corpus size is represented in Figure 1 . The latter figure confirms that although there is some correlation between corpus size and F1, the variance between results on comparably sized corpora is big. This strengthens our argument that there is a need for experimental studies, such as those reported below, to find out what, apart from corpus size, impacts NERC performance.
Another set of results presented in Table "RQ1: NER performance with Different Approaches" are those of the simple NERC memorisation baseline. It can be observed that corpora with a low F1 for NERC methods, such as UMBC and ACE UN, also have a low memorisation performance. Memorisation is discussed in more depth in Section "RQ5: Out-Of-Domain NER Performance and Memorisation" .
When NERC results are compared to the corpus diversity statistics, i.e. NE/Unique NE ratios (Table 4 ), token/type ratios (Table 5 ), and tag density (Table 6 ), the strongest predictor for F1 is tag density, as can be evidenced by the R correlation values between the ratios and F1 scores with the Stanford NER system, shown in the respective tables.
There is a positive correlation between high F1 and high tag density (R of 0.57 and R of 0.62 with normalised tag density), a weak positive correlation for NE/unique ratios (R of 0.20 and R of 0.15 for normalised ratio), whereas for token/type ratios, no such clear correlation can be observed (R of 0.25 and R of -0.07 for normalised ratio).
However, tag density is also not an absolute predictor for NERC performance. While NW corpora have both high NERC performance and high tag density, this high density is not necessarily an indicator of high performance. For example, systems might not find high tag density corpora of other genres necessarily so easy.
One factor that can explain the difference in genre performance between e.g. newswire and social media is entity drift – the change in observed entity terms over time. In this case, it is evident from the differing surface forms and contexts for a given entity class. For example, the concept of “location" that NER systems try to learn might be frequently represented in English newswire from 1991 with terms like Iraq or Kuwait, but more with Atlanta, Bosnia and Kabul in the same language and genre from 1996. Informally, drift on Twitter is often characterised as both high-frequency and high-magnitude; that is, the changes are both rapid and correspond to a large amount of surface form occurrences (e.g. BIBREF12 , BIBREF52 ).
We examined the impact of drift in newswire and Twitter corpora, taking datasets based in different timeframes. The goal is to gauge how much diversity is due to new entities appearing over time. To do this, we used just the surface lexicalisations of entities as the entity representation. The overlap of surface forms was measured across different corpora of the same genre and language. We used an additional corpus based on recent data – that from the W-NUT 2015 challenge BIBREF25 . This is measured in terms of occurrences, rather than distinct surface forms, so that the magnitude of the drift is shown instead of having skew in results from the the noisy long tail. Results are given in Table 7 for newswire and Table 8 for Twitter corpora.
It is evident that the within-class commonalities in surface forms are much higher in newswire than in Twitter. That is to say, observations of entity texts in one newswire corpus are more helpful in labelling other newswire corpora, than if the same technique is used to label other twitter corpora.
This indicates that drift is lower in newswire than in tweets. Certainly, the proportion of entity mentions in most recent corpora (the rightmost-columns) are consistently low compared to entity forms available in earlier data. These reflect the raised OOV and drift rates found in previous work BIBREF12 , BIBREF53 . Another explanation is that there is higher noise in variation, and that the drift is not longitudinal, but rather general. This is partially addressed by RQ3, which we will address next, in Section "RQ3: Impact of NE Diversity" .
To summarise, our findings are:
[noitemsep]
Overall, F1 scores vary widely across corpora.
Trends can be marked in some genres. On average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are the hardest corpora for NER methods to reach good performance on.
Normalising corpora by size results in more noisy data such as TWI and WEB data achieving similar results to NW corpora.
Increasing the amount of available in-domain training data will likely result in improved NERC performance.
There is a strong positive correlation between high F1 and high tag density, a weak positive correlation for NE/unique ratios and no clear correlation between token/type ratios and F1
Temporal NE drift is lower in newswire than in tweets
The next section will take a closer look at the impact of seen and unseen NEs on NER performance.
RQ3: Impact of NE Diversity
Unseen NEs are those with surface forms present only in the test, but not training data, whereas seen NEs are those also encountered in the training data. As discussed previously, the ratio between those two measures is an indicator of corpus NE diversity.
Table 9 shows how the number of unseen NEs per test corpus relates to the total number of NEs per corpus. The proportion of unseen forms varies widely by corpus, ranging from 0.351 (ACE NW) to 0.931 (UMBC). As expected there is a correlation between corpus size and percentage of unseen NEs, i.e. smaller corpora such as MUC and UMBC tend to contain a larger proportion of unseen NEs than bigger corpora such as ACE NW. In addition, similar to the token/type ratios listed in Table 5 , we observe that TWI and WEB corpora have a higher proportion of unseen entities.
As can be seen from Table "RQ1: NER performance with Different Approaches" , corpora with a low percentage of unseen NEs (e.g. CoNLL Test A and OntoNotes NW) tend to have high NERC performance, whereas corpora with high percentage of unseen NEs (e.g. UMBC) tend to have low NERC performance. This suggests that systems struggle to recognise and classify unseen NEs correctly.
To check this seen/unseen performance split, next we examine NERC performance for unseen and seen NEs separately; results are given in Table 10 . The “All" column group represents an averaged performance result. What becomes clear from the macro averages is that F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches. This is mostly due to recall on unseen NEs being lower than that on seen NEs, and suggests some memorisation and poor generalisation in existing systems. In particular, Stanford NER and CRFSuite have almost 50% lower recall on unseen NEs compared to seen NEs. One outlier is ACE UN, for which the average seen F1 is 1.01 and the average unseen F1 is 1.52, though both are miniscule and the different negligible.
Of the three approaches, SENNA exhibits the narrowest F1 difference between seen and unseen NEs. In fact it performs below Stanford NER for seen NEs on many corpora. This may be because SENNA has but a few features, based on word embeddings, which reduces feature sparsity; intuitively, the simplicity of the representation is likely to help with unseen NEs, at the cost of slightly reduced performance on seen NEs through slower fitting. Although SENNA appears to be better at generalising than Stanford NER and our CRFSuite baseline, the difference between its performance on seen NEs and unseen NEs is still noticeable. This is 21.77 for SENNA (macro average), whereas it is 29.41 for CRFSuite and 35.68 for Stanford NER.
The fact that performance over unseen entities is significantly lower than on seen NEs partly explains what we observed in the previous section; i.e., that corpora with a high proportion of unseen entities, such as the ACE WL corpus, are harder to label than corpora of a similar size from other genres, such as the ACE BC corpus (e.g. systems reach F1 of $\sim $ 30 compared to $\sim $ 50; Table "RQ1: NER performance with Different Approaches" ).
However, even though performance on seen NEs is higher than on unseen, there is also a difference between seen NEs in corpora of different sizes and genres. For instance, performance on seen NEs in ACE WL is 70.86 (averaged over the three different approaches), whereas performance on seen NEs in the less-diverse ACE BC corpus is higher at 76.42; the less diverse data is, on average, easier to tag. Interestingly, average F1 on seen NEs in the Twitter corpora (MSM and Ritter) is around 80, whereas average F1 on the ACE corpora, which are of similar size, is lower, at around 70.
To summarise, our findings are:
[noitemsep]
F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches, which is mostly due to recall on unseen NEs being lower than that on seen NEs.
Performance on seen NEs is significantly and consistently higher than that of unseen NEs in different corpora, with the lower scores mostly attributable to lower recall.
However, there are still significant differences at labelling seen NEs in different corpora, which means that if NEs are seen or unseen does not account for all of the difference of F1 between corpora of different genres.
RQ4: Unseen Features, unseen NEs and NER performance
Having examined the impact of seen/unseen NEs on NERC performance in RQ3, and touched upon surface form drift in RQ2, we now turn our attention towards establishing the impact of seen features, i.e. features appearing in the test set that are observed also in the training set. While feature sparsity can help to explain low F1, it is not a good predictor of performance across methods: sparse features can be good if mixed with high-frequency ones. For instance, Stanford NER often outperforms CRFSuite (see Table "RQ1: NER performance with Different Approaches" ) despite having a lower proportion of seen features (i.e. those that occur both in test data and during training). Also, some approaches such as SENNA use a small number of features and base their features almost entirely on the NEs and not on their context.
Subsequently, we want to measure F1 for unseens and seen NEs, as in Section "RQ3: Impact of NE Diversity" , but also examine how the proportion of seen features impacts on the result. We define seen features as those observed in the test data and also the training data. In turn, unseen features are those observed in the test data but not in the training data. That is, they have not been previously encountered by the system at the time of labeling. Unseen features are different from unseen words in that they are the difference in representation, not surface form. For example, the entity “Xoxarle" may be an unseen entity not found in training data This entity could reasonably have “shape:Xxxxxxx" and “last-letter:e" as part of its feature representation. If the training data contains entities “Kenneth" and “Simone", each of this will have generated these two features respectively. Thus, these example features will not be unseen features in this case, despite coming from an unseen entity. Conversely, continuing this example, if the training data contains no feature “first-letter:X" – which applies to the unseen entity in question – then this will be an unseen feature.
We therefore measure the proportion of unseen features per unseen and seen proportion of different corpora. An analysis of this with Stanford NER is shown in Figure 2 . Each data point represents a corpus. The blue squares are data points for seen NEs and the red circles are data points for unseen NEs. The figure shows a negative correlation between F1 and percentage of unseen features, i.e. the lower the percentage of unseen features, the higher the F1. Seen and unseen performance and features separate into two groups, with only two outlier points. The figure shows that novel, previously unseen NEs have more unseen features and that systems score a lower F1 on them. This suggests that despite the presence of feature extractors for tackling unseen NEs, the features generated often do not overlap with those from seen NEs. However, one would expect individual features to give different generalisation power for other sets of entities, and for systems use these features in different ways. That is, machine learning approaches to the NER task do not seem to learn clear-cut decision boundaries based on a small set of features. This is reflected in the softness of the correlation.
Finally, the proportion of seen features is higher for seen NEs. The two outlier points are ACE UN (low F1 for seen NEs despite low percentage of unseen features) and UMBC (high F1 for seen NEs despite high percentage of unseen features). An error analysis shows that the ACE UN corpus suffers from the problem that the seen NEs are ambiguous, meaning even if they have been seen in the training corpus, a majority of the time they have been observed with a different NE label. For the UMBC corpus, the opposite is true: seen NEs are unambiguous. This kind of metonymy is a known and challenging issue in NER, and the results on these corpora highlight the impact is still has on modern systems.
For all approaches the proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs, as it should be. However, within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance. One trend that is observable is that the smaller the token/type ratio is (Table 5 ), the bigger the variance between the smallest and biggest $n$ for each corpus, or, in other words, the smaller the token/type ratio is, the more diverse the features.
To summarise, our findings are:
[noitemsep]
Seen NEs have more unseen features and systems score a lower F1 on them.
Outliers are due to low/high ambiguity of seen NEs.
The proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs
Within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance.
The smaller the token/type ratio is, the more diverse the features.
RQ5: Out-Of-Domain NER Performance and Memorisation
This section explores baseline out-of-domain NERC performance without domain adaptation; what percentage of NEs are seen if there is a difference between the the training and the testing domains; and how the difference in performance on unseen and seen NEs compares to in-domain performance.
As demonstrated by the above experiments, and in line with related work, NERC performance varies across domains while also being influenced by the size of the available in-domain training data. Prior work on transfer learning and domain adaptation (e.g. BIBREF16 ) has aimed at increasing performance in domains where only small amounts of training data are available. This is achieved by adding out-of domain data from domains where larger amounts of training data exist. For domain adaptation to be successful, however, the seed domain needs to be similar to the target domain, i.e. if there is no or very little overlap in terms of contexts of the training and testing instances, the model does not learn any additional helpful weights. As a confounding factor, Twitter and other social media generally consist of many (thousands-millions) of micro-domains, with each author BIBREF54 community BIBREF55 and even conversation BIBREF56 having its own style, which makes it hard to adapt to it as a single, monolithic genre; accordingly, adding out-of-domain NER data gives bad results in this situation BIBREF21 . And even if recognised perfectly, entities that occur just once cause problems beyond NER, e.g. in co-reference BIBREF57 .
In particular, BIBREF58 has reported improving F1 by around 6% through adaptation from the CoNLL to the ACE dataset. However, transfer learning becomes more difficult if the target domain is very noisy or, as mentioned already, too different from the seed domain. For example, BIBREF59 unsuccessfully tried to adapt the CoNLL 2003 corpus to a Twitter corpus spanning several topics. They found that hand-annotating a Twitter corpus consisting of 24,000 tokens performs better on new Twitter data than their transfer learning efforts with the CoNLL 2003 corpus.
The seed domain for the experiments here is newswire, where we use the classifier trained on the biggest NW corpus investigated in this study, i.e. OntoNotes NW. That classifier is then applied to all other corpora. The rationale is to test how suitable such a big corpus would be for improving Twitter NER, for which only small training corpora are available.
Results for out-of-domain performance are reported in Table 11 . The highest F1 performance is on the OntoNotes BC corpus, with similar results to the in-domain task. This is unsurprising as it belongs to a similar domain as the training corpus (broadcast conversation) the data was collected in the same time period, and it was annotated using the same guidelines. In contrast, out-of-domain results are much lower than in-domain results for the CoNLL corpora, even though they belong to the same genre as OntoNotes NW. Memorisation recall performance on CoNLL TestA and TestB with OntoNotes NW test suggest that this is partly due to the relatively low overlap in NEs between the two datasets. This could be attributed to the CoNLL corpus having been collected in a different time period to the OntoNotes corpus, when other entities were popular in the news; an example of drift BIBREF37 . Conversely, Stanford NER does better on these corpora than it does on other news data, e.g. ACE NW. This indicates that Stanford NER is capable of some degree of generalisation and can detect novel entity surface forms; however, recall is still lower than precision here, achieving roughly the same scores across these three (from 44.11 to 44.96), showing difficulty in picking up novel entities in novel settings.
In addition, there are differences in annotation guidelines between the two datasets. If the CoNLL annotation guidelines were more inclusive than the Ontonotes ones, then even a memorisation evaluation over the same dataset would yield this result. This is, in fact, the case: OntoNotes divides entities into more classes, not all of which can be readily mapped to PER/LOC/ORG. For example, OntoNotes includes PRODUCT, EVENT, and WORK OF ART classes, which are not represented in the CoNLL data. It also includes the NORP class, which blends nationalities, religious and political groups. This has some overlap with ORG, but also includes terms such as “muslims" and “Danes", which are too broad for the ACE-related definition of ORGANIZATION. Full details can be found in the OntoNotes 5.0 release notes and the (brief) CoNLL 2003 annotation categories. Notice how the CoNLL guidelines are much more terse, being generally non-prose, but also manage to cram in fairly comprehensive lists of sub-kinds of entities in each case. This is likely to make the CoNLL classes include a diverse range of entities, with the many suggestions acting as generative material for the annotator, and therefore providing a broader range of annotations from which to generalise from – i.e., slightly easier to tag.
The lowest F1 of 0 is “achieved" on ACE BN. An examination of that corpus reveals the NEs contained in that corpus are all lower case, whereas those in OntoNotes NW have initial capital letters.
Results on unseen NEs for the out-of-domain setting are in Table 12 . The last section's observation of NERC performance being lower for unseen NEs also generally holds true in this out-of-domain setting. The macro average over F1 for the in-domain setting is 76.74% for seen NEs vs. 53.76 for unseen NEs, whereas for the out-of-domain setting the F1 is 56.10% for seen NEs and 47.73% for unseen NEs.
Corpora with a particularly big F1 difference between seen and unseen NEs ( $<=$ 20% averaged over all NERC methods) are ACE NW, ACE BC, ACE UN, OntoNotes BN and OntoNotes MZ. For some corpora (CoNLL Test A and B, MSM and Ritter), out-of-domain F1 (macro average over all methods) of unseen NEs is better than for seen NEs. We suspect that this is due to the out-of-domain evaluation setting encouraging better generalisation, as well as the regularity in entity context observed in the fairly limited CoNLL news data – for example, this corpus contains a large proportion of cricket score reports and many cricketer names, occurring in linguistically similar contexts. Others have also noted that the CoNLL datasets are low-diversity compared to OntoNotes, in the context of named entity recognition BIBREF60 . In each of the exceptions except MSM, the difference is relatively small. We note that the MSM test corpus is one of the smallest datasets used in the evaluation, also based on a noisier genre than most others, and so regard this discrepancy as an outlier.
Corpora for which out-of-domain F1 is better than in-domain F1 for at least one of the NERC methods are: MUC7 Test, ACE WL, ACE UN, OntoNotes WB, OntoNotes TC and UMBC. Most of those corpora are small, with combined training and testing bearing fewer than 1,000 NEs (MUC7 Test, ACE UN, UMBC). In such cases, it appears beneficial to have a larger amount of training data, even if it is from a different domain and/or time period. The remaining 3 corpora contain weblogs (ACE WL, ACE WB) and online Usenet discussions (ACE UN). Those three are diverse corpora, as can be observed by the relatively low NEs/Unique NEs ratios (Table 4 ). However, NE/Unique NEs ratios are not an absolute predictor for better out-of-domain than in-domain performance: there are corpora with lower NEs/Unique NEs ratios than ACE WB which have better in-domain than out-of-domain performance. As for the other Twitter corpora, MSM 2013 and Ritter, performance is very low, especially for the memorisation system. This reflects that, as well as surface form variation, the context or other information represented by features shifts significantly more in Twitter than across different samples of newswire, and that the generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this natural, unconstrained kind of text.
In fact, it is interesting to see that the memorisation baseline is so effective with many genres, including broadcast news, weblog and newswire. This indicates that there is low variation in the topics discussed by these sources – only a few named entities are mentioned by each. When named entities are seen as micro-topics, each indicating a grounded and small topic of interest, this reflects the nature of news having low topic variation, focusing on a few specific issues – e.g., location referred to tend to be big; persons tend to be politically or financially significant; and organisations rich or governmental BIBREF61 . In contrast, social media users also discuss local locations like restaurants, organisations such as music band and sports clubs, and are content to discuss people that are not necessarily mentioned in Wikipedia. The low overlap and memorisation scores on tweets, when taking entity lexica based on newswire, are therefore symptomatic of the lack of variation in newswire text, which has a limited authorship demographic BIBREF62 and often has to comply to editorial guidelines.
The other genre that was particularly difficult for the systems was ACE Usenet. This is a form of user-generated content, not intended for publication but rather discussion among communities. In this sense, it is social media, and so it is not surprising that system performance on ACE UN resembles performance on social media more than other genres.
Crucially, the computationally-cheap memorisation method actually acts as a reasonable predictor of the performance of other methods. This suggests that high entity diversity predicts difficulty for current NER systems. As we know that social media tends to have high entity diversity – certainly higher than other genres examined – this offers an explanation for why NER systems perform so poorly when taken outside the relatively conservative newswire domain. Indeed, if memorisation offers a consistent prediction of performance, then it is reasonable to say that memorisation and memorisation-like behaviour accounts for a large proportion of NER system performance.
To conclude regarding memorisation and out-of-domain performance, there are multiple issues to consider: is the corpus a sub-corpus of the same corpus as the training corpus, does it belong to the same genre, is it collected in the same time period, and was it created with similar annotation guidelines. Yet it is very difficult to explain high/low out-of-domain performance compared to in-domain performance with those factors.
A consistent trend is that, if out-of-domain memorisation is better in-domain memorisation, out-of-domain NERC performance with supervised learning is better than in-domain NERC performance with supervised learning too. This reinforces discussions in previous sections: an overlap in NEs is a good predictor for NERC performance. This is useful when a suitable training corpus has to be identified for a new domain. It can be time-consuming to engineer features or study and compare machine learning methods for different domains, while memorisation performance can be checked quickly.
Indeed, memorisation consistently predicts NER performance. The prediction applies both within and across domains. This has implications for the focus of future work in NER: the ability to generalise well enough to recognise unseen entities is a significant and still-open problem.
To summarise, our findings are:
[noitemsep]
What time period an out of domain corpus is collected in plays an important role in NER performance.
The context or other information represented by features shifts significantly more in Twitter than across different samples of newswire.
The generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this varied kind of text.
Memorisation consistently predicts NER performance, both inside and outside genres or domains.
Conclusion
This paper investigated the ability of modern NER systems to generalise effectively over a variety of genres. Firstly, by analysing different corpora, we demonstrated that datasets differ widely in many regards: in terms of size; balance of entity classes; proportion of NEs; and how often NEs and tokens are repeated. The most balanced corpus in terms of NE classes is the CoNLL corpus, which, incidentally, is also the most widely used NERC corpus, both for method tuning of off-the-shelf NERC systems (e.g. Stanford NER, SENNA), as well as for comparative evaluation. Corpora, traditionally viewed as noisy, i.e. the Twitter and Web corpora, were found to have a low repetition of NEs and tokens. More surprisingly, however, so does the CoNLL corpus, which indicates that it is well balanced in terms of stories. Newswire corpora have a large proportion of NEs as percentage of all tokens, which indicates high information density. Web, Twitter and telephone conversation corpora, on the other hand, have low information density.
Our second set of findings relates to the NERC approaches studied. Overall, SENNA achieves consistently the highest performance across most corpora, and thus has the best approach to generalising from training to testing data. This can mostly be attributed to SENNA's use of word embeddings, trained with deep convolutional neural nets. The default parameters of SENNA achieve a balanced precision and recall, while for Stanford NER and CRFSuite, precision is almost twice as high as recall.
Our experiments also confirmed the correlation between NERC performance and training corpus size, although size alone is not an absolute predictor. In particular, the biggest NE-annotated corpus amongst those studied is OntoNotes NW – almost twice the size of CoNLL in terms of number of NEs. Nevertheless, the average F1 for CoNLL is the highest of all corpora and, in particular, SENNA has 11 points higher F1 on CoNLL than on OntoNotes NW.
Studying NERC on size-normalised corpora, it becomes clear that there is also a big difference in performance on corpora from the same genre. When normalising training data by size, diverse corpora, such as Web and social media, still yield lower F1 than newswire corpora. This indicates that annotating more training examples for diverse genres would likely lead to a dramatic increase in F1.
What is found to be a good predictor of F1 is a memorisation baseline, which picks the most frequent NE label for each token sequence in the test corpus as observed in the training corpus. This supported our hypothesis that entity diversity plays an important role, being negatively correlated with F1. Studying proportions of unseen entity surface forms, experiments showed corpora with a large proportion of unseen NEs tend to yield lower F1, due to much lower performance on unseen than seen NEs (about 17 points lower averaged over all NERC methods and corpora). This finally explains why the performance is highest for the benchmark CoNLL newswire corpus – it contains the lowest proportion of unseen NEs. It also explains the difference in performance between NERC on other corpora. Out of all the possible indicators for high NER F1 studied, this is found to be the most reliable one. This directly supports our hypothesis that generalising for unseen named entities is both difficult and important.
Also studied is the proportion of unseen features per unseen and seen NE portions of different corpora. However, this is found to not be very helpful. The proportion of seen features is higher for seen NEs, as it should be. However, within the seen and unseen NE splits, there is no clear trend indicating if having more seen features helps.
We also showed that hand-annotating more training examples is a straight-forward and reliable way of improving NERC performance. However, this is costly, which is why it can be useful to study if using different, larger corpora for training might be helpful. Indeed, substituting in-domain training corpora with other training corpora for the same genre created at the same time improves performance, and studying how such corpora can be combined with transfer learning or domain adaptation strategies might improve performance even further. However, for most corpora, there is a significant drop in performance for out-of-domain training. What is again found to be reliable is to check the memorisation baseline: if results for the out-of-domain memorisation baseline are higher than for in-domain memorisation, than using the out-of-domain corpus for training is likely to be helpful.
Across a broad range of corpora and genres, characterised in different ways, we have examined how named entities are embedded and presented. While there is great variation in the range and class of entities found, it is consistent that the more varied texts are harder to do named entity recognition in. This connection with variation occurs to such an extent that, in fact, performance when memorising lexical forms stably predicts system accuracy. The result of this is that systems are not sufficiently effective at generalising beyond the entity surface forms and contexts found in training data. To close this gap and advance NER systems, and cope with the modern reality of streamed NER, as opposed to the prior generation of batch-learning based systems with static evaluation sets being used as research benchmarks, future work needs to address named entity generalisation and out-of-vocabulary lexical forms.
Acknowledgement
This work was partially supported by the UK EPSRC Grant No. EP/K017896/1 uComp and by the European Union under Grant Agreements No. 611233 PHEME. The authors wish to thank the CS&L reviewers for their helpful and constructive feedback. | MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC |
ad67ca844c63bf8ac9fdd0fa5f58c5a438f16211 | ad67ca844c63bf8ac9fdd0fa5f58c5a438f16211_0 | Q: Which unlabeled data do they pretrain with?
Text: Introduction
Current state of the art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 . Recently, pre-training of neural networks has emerged as an effective technique for settings where labeled data is scarce. The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data is available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited. This is particularly interesting for tasks where substantial effort is required to obtain labeled data, such as speech recognition.
In computer vision, representations for ImageNet BIBREF2 and COCO BIBREF3 have proven to be useful to initialize models for tasks such as image captioning BIBREF4 or pose estimation BIBREF5 . Unsupervised pre-training for computer vision has also shown promise BIBREF6 . In natural language processing (NLP), unsupervised pre-training of language models BIBREF7 , BIBREF8 , BIBREF9 improved many tasks such as text classification, phrase structure parsing and machine translation BIBREF10 , BIBREF11 . In speech processing, pre-training has focused on emotion recogniton BIBREF12 , speaker identification BIBREF13 , phoneme discrimination BIBREF14 , BIBREF15 as well as transferring ASR representations from one language to another BIBREF16 . There has been work on unsupervised learning for speech but the resulting representations have not been applied to improve supervised speech recognition BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .
In this paper, we apply unsupervised pre-training to improve supervised speech recognition. This enables exploiting unlabeled audio data which is much easier to collect than labeled data. Our model, , is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system. The objective is a contrastive loss that requires distinguishing a true future audio sample from negatives BIBREF22 , BIBREF23 , BIBREF15 . Different to previous work BIBREF15 , we move beyond frame-wise phoneme classification and apply the learned representations to improve strong supervised ASR systems. relies on a fully convolutional architecture which can be easily parallelized over time on modern hardware compared to recurrent autoregressive models used in previous work (§ SECREF2 ).
Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 ).
Pre-training Approach
Given an audio signal as input, we optimize our model (§ SECREF3 ) to predict future samples from a given signal context. A common problem with these approaches is the requirement to accurately model the data distribution INLINEFORM0 , which is challenging. We avoid this problem by first encoding raw speech samples INLINEFORM1 into a feature representation INLINEFORM2 at a lower temporal frequency and then implicitly model a density function INLINEFORM3 similar to BIBREF15 .
Model
Our model takes raw audio signal as input and then applies two networks. The encoder network embeds the audio signal in latent space and the context network combines multiple time-steps of the encoder to obtain contextualized representations (Figure FIGREF2 ). Both networks are then used to compute the objective function (§ SECREF4 ).
Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.
Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms.
The layers of both networks consist of a causal convolution with 512 channels, a group normalization layer and a ReLU nonlinearity. We normalize both across the feature and temporal dimension for each sample which is equivalent to group normalization with a single normalization group BIBREF25 . We found it important to choose a normalization scheme that is invariant to the scaling and the offset of the input data. This choice resulted in representations that generalize well across datasets.
Objective
We train the model to distinguish a sample INLINEFORM0 that is k steps in the future from distractor samples INLINEFORM1 drawn from a proposal distribution INLINEFORM2 , by minimizing the contrastive loss for each step INLINEFORM3 : DISPLAYFORM0
where we denote the sigmoid INLINEFORM0 , and where INLINEFORM1 is the probability of INLINEFORM2 being the true sample. We consider a step-specific affine transformation INLINEFORM3 for each step INLINEFORM4 , that is applied to INLINEFORM5 BIBREF15 . We optimize the loss INLINEFORM6 , summing ( EQREF5 ) over different step sizes. In practice, we approximate the expectation by sampling ten negatives examples by uniformly choosing distractors from each audio sequence, i.e., INLINEFORM7 , where INLINEFORM8 is the sequence length and we set INLINEFORM9 to the number of negatives.
After training, we input the representations produced by the context network INLINEFORM0 to the acoustic model instead of log-mel filterbank features.
Data
We consider the following corpora: For phoneme recognition on TIMIT BIBREF26 we use the standard train, dev and test split where the training data contains just over three hours of audio data. Wall Street Journal (WSJ; Woodland et al., 1994) comprises about 81 hours of transcribed audio data. We train on si284, validate on nov93dev and test on nov92. Librispeech BIBREF27 contains a total of 960 hours of clean and noisy speech for training. For pre-training, we use either the full 81 hours of the WSJ corpus, an 80 hour subset of clean Librispeech, the full 960 hour Librispeech training set, or a combination of all of them.
To train the baseline acoustic model we compute 80 log-mel filterbank coefficients for a 25ms sliding window with stride 10ms. Final models are evaluated in terms of both word error rate (WER) and letter error rate (LER).
Acoustic Models
We use the wav2letter++ toolkit for training and evaluation of acoustic models BIBREF28 . For the TIMIT task, we follow the character-based wav2letter++ setup of BIBREF24 which uses seven consecutive blocks of convolutions (kernel size 5 with 1,000 channels), followed by a PReLU nonlinearity and a dropout rate of 0.7. The final representation is projected to a 39-dimensional phoneme probability. The model is trained using the Auto Segmentation Criterion (ASG; Collobert et al., 2016)) using SGD with momentum.
Our baseline for the WSJ benchmark is the wav2letter++ setup described in BIBREF29 which is a 17 layer model with gated convolutions BIBREF30 . The model predicts probabilities for 31 graphemes, including the standard English alphabet, the apostrophe and period, two repetition characters (e.g. the word ann is transcribed as an1), and a silence token (|) used as word boundary.
All acoustic models are trained on 8 Nvidia V100 GPUs using the distributed training implementations of fairseq and wav2letter++. When training acoustic models on WSJ, we use plain SGD with learning rate 5.6 as well as gradient clipping BIBREF29 and train for 1,000 epochs with a total batch size of 64 audio sequences. We use early stopping and choose models based on validation WER after evaluating checkpoints with a 4-gram language model. For TIMIT we use learning rate 0.12, momentum of 0.9 and train for 1,000 epochs on 8 GPUs with a batch size of 16 audio sequences.
Decoding
For decoding the emissions from the acoustic model we use a lexicon as well as a separate language model trained on the WSJ language modeling data only. We consider a 4-gram KenLM language model BIBREF31 , a word-based convolutional language model BIBREF29 , and a character-based convolutional language model BIBREF32 . We decode the word sequence $\mathbf{y}$ from the output of the context network $\mathbf{c}$ or log-mel filterbanks using the beam search decoder of BIBREF29 by maximizing $$\arg\max_{\mathbf{y}}\; \log p_{AM}(\mathbf{y} \mid \mathbf{c}) + \alpha \log p_{LM}(\mathbf{y}) + \beta |\mathbf{y}| - \gamma \, |\lbrace i : \pi_i = \langle \text{sil} \rangle \rbrace|$$
where $p_{AM}$ is the acoustic model, $p_{LM}$ is the language model, and $\pi = \pi_1, \ldots, \pi_L$ are the characters of $\mathbf{y}$. The hyper-parameters $\alpha$, $\beta$ and $\gamma$ are weights for the language model, the word penalty, and the silence penalty, respectively.
For decoding WSJ, we tune the hyper-parameters $\alpha$, $\beta$ and $\gamma$ using a random search. Finally, we decode the emissions from the acoustic model with the best setting for $\alpha$, $\beta$ and $\gamma$, a beam size of 4,000 and a beam score threshold of 250.
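A toy scoring function for a single hypothesis, following the decoding objective above, might look as follows. The beam search machinery itself is omitted and the function signature is an illustrative assumption, not the wav2letter++ API.

```python
# Per-hypothesis score combining acoustic model, language model and penalties.
def hypothesis_score(log_p_am, log_p_lm, words, chars, alpha, beta, gamma, sil="|"):
    # log_p_am, log_p_lm: log-probabilities of the hypothesis under AM and LM
    # words: tokenized hypothesis; chars: its character sequence pi_1..pi_L
    n_sil = sum(1 for ch in chars if ch == sil)      # silence tokens to penalize
    return log_p_am + alpha * log_p_lm + beta * len(words) - gamma * n_sil

# usage: the beam search keeps the hypotheses with the highest scores
score = hypothesis_score(-12.3, -45.6, ["a", "cat"], list("a|cat|"), 1.5, 0.5, 0.1)
```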
Pre-training Models
The pre-training models are implemented in PyTorch in the fairseq toolkit BIBREF0 . We optimize them with Adam BIBREF33 and a cosine learning rate schedule BIBREF34 annealed over 40K update steps for both WSJ and the clean Librispeech training datasets. We start with a learning rate of 1e-7, then gradually warm it up for 500 updates up to 0.005, and then decay it following the cosine curve down to 1e-6. We train for 400K steps for full Librispeech. To compute the objective, we sample ten negatives and we use $K = 12$ tasks.
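A small sketch of the warm-up plus cosine schedule described above follows, with the values taken from the text; the exact shape of the fairseq scheduler may differ.

```python
# Linear warm-up from 1e-7 to 0.005 over 500 updates, then cosine decay to 1e-6.
import math

def learning_rate(step, total_steps=40_000, warmup=500,
                  lr_init=1e-7, lr_peak=5e-3, lr_final=1e-6):
    if step < warmup:
        return lr_init + (lr_peak - lr_init) * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return lr_final + 0.5 * (lr_peak - lr_final) * (1 + math.cos(math.pi * progress))
```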
We train on 8 GPUs and put a variable number of audio sequences on each GPU, up to a pre-defined limit of 1.5M frames per GPU. Sequences are grouped by length and we crop them to a maximum size of 150K frames each, or the length of the shortest sequence in the batch, whichever is smaller. Cropping removes speech signal from either the beginning or end of the sequence and we randomly decide the cropping offsets for each sample; we re-sample every epoch. This is a form of data augmentation but also ensures equal length of all sequences on a GPU and removes on average 25% of the training data. After cropping the total effective batch size across GPUs is about 556 seconds of speech signal (for a variable number of audio sequences).
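The cropping strategy could be sketched as follows; choosing a single random offset per sequence is a simplification of the procedure described above, and the 150K-frame cap comes from the text.

```python
# Crop every sequence in a per-GPU batch to min(150K, shortest sequence) frames.
import random
import torch

def crop_batch(sequences, max_frames=150_000):
    target = min(max_frames, min(s.numel() for s in sequences))
    cropped = []
    for s in sequences:
        offset = random.randint(0, s.numel() - target)   # re-drawn every epoch
        cropped.append(s[offset:offset + target])
    return torch.stack(cropped)   # equal-length batch for one GPU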
Results
Unlike BIBREF15 , we evaluate the pre-trained representations directly on downstream speech recognition tasks. We measure speech recognition performance on the WSJ benchmark and simulate various low-resource setups (§ SECREF12 ). We also evaluate on the TIMIT phoneme recognition task (§ SECREF13 ) and ablate various modeling choices (§ SECREF14 ).
Pre-training for the WSJ benchmark
We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.
Table shows that pre-training on more data leads to better accuracy on the WSJ benchmark. Pre-trained representations can substantially improve performance over our character-based baseline, which is trained on log-mel filterbank features. This shows that pre-training on unlabeled audio data can improve over the best character-based approach, Deep Speech 2 BIBREF1 , by 0.3 WER on nov92. Our best pre-training model performs as well as the phoneme-based model of BIBREF35 . BIBREF36 is a phoneme-based approach that pre-trains on the transcribed Librispeech data and then fine-tunes on WSJ. In comparison, our method requires only unlabeled audio data, and BIBREF36 also relies on a stronger baseline model than our setup.
What is the impact of pre-trained representations when less transcribed data is available? To get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (the latter using log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ (wav2vec WSJ) performs worse compared to the much larger Librispeech (wav2vec Libri). This further confirms that pre-training on more data is crucial to good performance.
Pre-training for TIMIT
On the TIMIT task we use a 7-layer wav2letter++ model with high dropout (§ SECREF3 ; Synnaeve et al., 2016). Table shows that we can match the state of the art when we pre-train on Librispeech and WSJ audio data. Accuracy steadily increases with more data for pre-training and the best accuracy is achieved when we use the largest amount of data for pre-training.
Ablations
In this section we analyze some of the design choices we made for wav2vec. We pre-train on the 80 hour subset of clean Librispeech and evaluate on TIMIT. Table shows that increasing the number of negative samples only helps up to ten samples. Thereafter, performance plateaus while training time increases. We suspect that this is because the training signal from the positive samples decreases as the number of negative samples increases. In this experiment, everything is kept equal except for the number of negative samples.
Next, we analyze the effect of data augmentation through cropping audio sequences (§ SECREF11 ). When creating batches we crop sequences to a pre-defined maximum length. Table shows that a crop size of 150K frames results in the best performance. Not restricting the maximum length (None) gives an average sequence length of about 207K frames and results in the worst accuracy. This is most likely because the setting provides the least amount of data augmentation.
Table shows that predicting more than 12 steps ahead in the future does not result in better performance and increasing the number of steps increases training time.
Conclusions
We introduce wav2vec, the first application of unsupervised pre-training to speech recognition with a fully convolutional model. Our approach achieves 2.78 WER on the test set of WSJ, a result that outperforms the next best known character-based speech recognition model in the literature BIBREF1 while using three orders of magnitude less transcribed training data. We show that more data for pre-training improves performance and that this approach not only improves resource-poor setups, but also settings where all WSJ training data is used. In future work, we will investigate different architectures and fine-tuning, which is likely to further improve performance.
Acknowledgements
We thank the Speech team at FAIR, especially Jacob Kahn, Vineel Pratap and Qiantong Xu for help with wav2letter++ experiments, and Tatiana Likhomanenko for providing convolutional language models for our experiments. | 1000 hours of WSJ audio data |
12eaaf3b6ebc51846448c6e1ad210dbef7d25a96 | 12eaaf3b6ebc51846448c6e1ad210dbef7d25a96_0 | Q: How many convolutional layers does their model have?
We thank the Speech team at FAIR, especially Jacob Kahn, Vineel Pratap and Qiantong Xu for help with wav2letter++ experiments, and Tatiana Likhomanenko for providing convolutional language models for our experiments. | wav2vec has 12 convolutional layers |
828615a874512844ede9d7f7d92bdc48bb48b18d | 828615a874512844ede9d7f7d92bdc48bb48b18d_0 | Q: Do they explore how much training data is needed for which magnitude of improvement for WER?
We thank the Speech team at FAIR, especially Jacob Kahn, Vineel Pratap and Qiantong Xu for help with wav2letter++ experiments, and Tatiana Likhomanenko for providing convolutional language models for our experiments. | Yes |
a43c400ae37a8705ff2effb4828f4b0b177a74c4 | a43c400ae37a8705ff2effb4828f4b0b177a74c4_0 | Q: How are character representations from various languages joint?
Text: Introduction
State-of-the-art morphological taggers require thousands of annotated sentences to train. For the majority of the world's languages, however, sufficient, large-scale annotation is not available and obtaining it would often be infeasible. Accordingly, an important road forward in low-resource NLP is the development of methods that allow for the training of high-quality tools from smaller amounts of data. In this work, we focus on transfer learning—we train a recurrent neural tagger for a low-resource language jointly with a tagger for a related high-resource language. Forcing the models to share character-level features among the languages allows large gains in accuracy when tagging the low-resource languages, while maintaining (or even improving) accuracy on the high-resource language.
Recurrent neural networks constitute the state of the art for a myriad of tasks in NLP, e.g., multi-lingual part-of-speech tagging BIBREF0 , syntactic parsing BIBREF1 , BIBREF2 , morphological paradigm completion BIBREF3 , BIBREF4 and language modeling BIBREF5 , BIBREF6 ; recently, such models have also improved morphological tagging BIBREF7 , BIBREF8 . In addition to increased performance over classical approaches, neural networks also offer a second advantage: they admit a clean paradigm for multi-task learning. If the learned representations for all of the tasks are embedded jointly into a shared vector space, the various tasks reap benefits from each other and often performance improves for all BIBREF9 . We exploit this idea for language-to-language transfer to develop an approach for cross-lingual morphological tagging.
We experiment on 18 languages taken from four different language families. Using the Universal Dependencies treebanks, we emulate a low-resource setting for our experiments, e.g., we attempt to train a morphological tagger for Catalan using primarily data from a related language like Spanish. Our results demonstrate the successful transfer of morphological knowledge from the high-resource languages to the low-resource languages without relying on an externally acquired bilingual lexicon or bitext. We consider both the single- and multi-source transfer case and explore how similar two languages must be in order to enable high-quality transfer of morphological taggers.
Morphological Tagging
Many languages in the world exhibit rich inflectional morphology: the form of individual words mutates to reflect the syntactic function. For example, the Spanish verb soñar will appear as sueño in the first person present singular, but soñáis in the second person present plural, depending on the bundle of syntacto-semantic attributes associated with the given form (in a sentential context). For concreteness, we list a more complete table of Spanish verbal inflections in tab:paradigm. Note that some languages, e.g. the Northeastern Caucasian language Archi, display a veritable cornucopia of potential forms, with the size of the verbal paradigm exceeding 10,000 BIBREF10 .
Standard NLP annotation, e.g., the scheme in sylakglassman-EtAl:2015:ACL-IJCNLP, marks forms in terms of universal key–attribute pairs, e.g., the first person present singular is represented as $[$ pos=V, per=1, num=sg, tns=pres $]$. This bundle of key–attribute pairs is typically termed a morphological tag, and we may view the goal of morphological tagging as labeling each word in its sentential context with the appropriate tag BIBREF11 , BIBREF12 . As the part-of-speech (POS) is a component of the tag, we may view morphological tagging as a strict generalization of POS tagging, where we have significantly refined the set of available tags. All of the experiments in this paper make use of the universal morphological tag set available in the Universal Dependencies (UD) BIBREF13 . As an example, we have provided a Russian sentence with its UD tagging in fig:russian-sentence.
Character-Level Neural Transfer
Our formulation of transfer learning builds on work in multi-task learning BIBREF15 , BIBREF9 . We treat each individual language as a task and train a joint model for all the tasks. We first discuss the current state of the art in morphological tagging: a character-level recurrent neural network. After that, we explore three augmentations to the architecture that allow for the transfer learning scenario. All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space, but involve different mechanisms, by which the model may learn language-specific features.
Character-Level Neural Networks
Character-level neural networks currently constitute the state of the art in morphological tagging BIBREF8 . We draw on previous work in defining a conditional distribution over taggings ${t}$ for a sentence ${w}$ of length $|{w}| = N$ as
$$p_{{\theta }}({{t}} \mid {{w}}) = \prod _{i=1}^N p_{{\theta }}(t_i \mid {{w}}), $$ (Eq. 12)
which may be seen as a $0^\text{th}$ order conditional random field (CRF) BIBREF16 with parameter vector ${{\theta }}$ . Importantly, this factorization of the distribution $p_{{\theta }}({{t}} \mid {{w}})$ also allows for efficient exact decoding and marginal inference in ${\cal O}(N)$ -time, but at the cost of not admitting any explicit interactions in the output structure, i.e., between adjacent tags. We parameterize the distribution over tags at each time step as
$$p_{{\theta }}(t_i \mid {{w}}) = \text{softmax}\left(W {e}_i + {b}\right), $$ (Eq. 15)
where $W \in \mathbb {R}^{|{\cal T}| \times n}$ is an embedding matrix, ${b}\in \mathbb {R}^{|{\cal T}|}$ is a bias vector and positional embeddings ${e}_i$ are taken from a concatenation of the output of two long short-term memory recurrent neural networks (LSTMs) BIBREF18 , folded forward and backward, respectively, over a sequence of input vectors. This constitutes a bidirectional LSTM BIBREF19 . We define the positional embedding vector as follows
$${e}_i = \left[{\text{LSTM}}({v}_{1:i}); {\text{LSTM}}({v}_{i+1:N})\right], $$ (Eq. 17)
where each ${v}_i \in \mathbb {R}^n$ is, itself, a word embedding. Note that the function $\text{LSTM}$ returns the final hidden state vector of the network. This architecture is the context bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. Finally, we derive each word embedding vector ${v}_i$ from a character-level bidirectional LSTM embedder. Namely, we define each word embedding as the concatenation
$${v}_i = \left[ {\text{LSTM}}\left(\langle c_{i_1}, \ldots , c_{i_{M_i}}\rangle \right);\; {\text{LSTM}} \left(\langle c_{i_{M_i}}, \ldots , c_{i_1}\rangle \right) \right]. $$ (Eq. 18)
In other words, we run a bidirectional LSTM over the character stream. This bidirectional LSTM is the sequence bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. Note that concatenating the sequence of character symbols $\langle c_{i_1}, \ldots , c_{i_{M_i}} \rangle $ results in the word string $w_i$ . Each of the $M_i$ characters $c_{i_k}$ is a member of the set $\Sigma $ . We take $\Sigma $ to be the union of the sets of characters in the languages considered.
We direct the reader to heigold2017 for a more in-depth discussion of this and various additional architectures for the computation of ${v}_i$ ; the architecture we have presented in eq:embedder-v is competitive with the best performing setting in Heigold et al.'s study.
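A compact PyTorch sketch of this tagger is given below: a character-level BiLSTM produces each word vector ${v}_i$, a sentence-level BiLSTM produces ${e}_i$, and a linear layer plus softmax yields $p_{\theta}(t_i \mid {w})$. The hyper-parameter values and the batching scheme are illustrative assumptions.

```python
# Character-level BiLSTM word embedder feeding a sentence-level BiLSTM tagger.
import torch
import torch.nn as nn

class CharTagger(nn.Module):
    def __init__(self, n_chars, n_tags, char_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.word_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def embed_word(self, char_ids):
        # char_ids: (1, M_i) character indices of one word -> v_i of size 2*hidden
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)           # forward/backward final states

    def forward(self, sentence):
        # sentence: list of (1, M_i) tensors, one entry per word
        v = torch.stack([self.embed_word(w) for w in sentence], dim=1)   # (1, N, 2H)
        e, _ = self.word_lstm(v)                                          # (1, N, 2H)
        return self.out(e).log_softmax(-1)                                # (1, N, |T|)
```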
Cross-Lingual Morphological Transfer as Multi-Task Learning
Cross-lingual morphological tagging may be formulated as a multi-task learning problem. We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one. The first loss function we consider is the following:
$${\cal L}_{\textit {multi}}({\theta }) = -\sum _{({t}, {w}) \in {\cal D}_s} \log p_{{\theta }} ({t}\mid {w}, \ell _s ) - \sum _{({t}, {w}) \in {\cal D}_t} \log p_{{\theta }}\left({t}\mid {w}, \ell _t \right).$$ (Eq. 20)
Crucially, our cross-lingual objective forces both taggers to share part of the parameter vector ${\theta }$ , which allows it to represent morphological regularities between the two languages in a common embedding space and, thus, enables transfer of knowledge. This is no different from monolingual multi-task settings, e.g., jointly training a chunker and a tagger for the transfer of syntactic information BIBREF9 . We point out that, in contrast to our approach, almost all multi-task transfer learning, e.g., for dependency parsing BIBREF20 , has shared word-level embeddings rather than character-level embeddings. See sec:related-work for a more complete discussion.
We consider two parameterizations of this distribution $p_{{\theta }}(t_i \mid {w}, \ell )$ . First, we modify the initial character-level LSTM embedding such that it also encodes the identity of the language. Second, we modify the softmax layer, creating a language-specific softmax.
Our first architecture has one softmax, as in eq:tagger, over all morphological tags in ${\cal T}$ (shared among all the languages). To allow the architecture to encode morphological features specific to one language, e.g., the third person present plural ending in Spanish is -an, but -ão in Portuguese, we modify the creation of the character-level embeddings. Specifically, we augment the character alphabet $\Sigma $ with a distinguished symbol that indicates the language: $\text{{\tt id}}_\ell $ . We then pre- and postpend this symbol to the character stream for every word before feeding the characters into the bidirectional LSTM. Thus, we arrive at the new language-specific word embeddings,
$${v}^{\ell }_i = \left[ {\text{LSTM}}\left(\langle \text{{\tt id}}_\ell , c_{i_1}, \ldots , c_{i_{M_i}}, \text{{\tt id}}_\ell \rangle \right);\; {\text{LSTM}} \left(\langle \text{{\tt id}}_\ell , c_{i_{M_i}}, \ldots , c_{i_1}, \text{{\tt id}}_\ell \rangle \right) \right]. $$ (Eq. 22)
This model creates a language-specific embedding vector ${v}^{\ell }_i$ , but the individual embeddings for a given character are shared among all of the languages jointly trained on. The remainder of the architecture is held constant.
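A sketch of this first architecture's input handling follows: every word's character sequence is wrapped in the language symbol before it reaches the shared character BiLSTM. The vocabulary lookup shown here is a hypothetical helper, not the authors' code.

```python
# Wrap a word's character indices in the distinguished language symbol id_l.
def wrap_with_language_id(char_ids, lang, char_vocab):
    # char_ids: list of character indices for one word; lang: e.g. "es" or "pt"
    lang_id = char_vocab[f"<id_{lang}>"]      # language symbol added to Sigma
    return [lang_id] + char_ids + [lang_id]

# usage: the wrapped sequence is fed to the character BiLSTM of the shared tagger
wrapped = wrap_with_language_id([12, 7, 31], "pt", {"<id_pt>": 501, "<id_es>": 502})
```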
Next, inspired by the architecture of heigold2013multilingual, we consider a language-specific softmax layer, i.e., we define a new output layer for every language:
$$p_{{\theta }}\left(t_i \mid {w}, \ell \right) = \text{softmax}\left(W_{\ell } {e}_i + {b}_{\ell }\right),$$ (Eq. 24)
where $W_{\ell } \in \mathbb {R}^{|{\cal T}| \times n}$ and ${b}_{\ell } \in \mathbb {R}^{|{\cal T}|}$ are now language-specific. In this architecture, the embeddings ${e}_i$ are the same for all languages—the model has to learn language-specific behavior exclusively through the output softmax of the tagging LSTM.
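This second architecture can be sketched with one output head per language over the shared states ${e}_i$; the module below is illustrative.

```python
# Language-specific softmax: shared tagger states, one projection per language.
import torch.nn as nn

class LangSpecificSoftmax(nn.Module):
    def __init__(self, hidden, n_tags, languages=("es", "pt")):
        super().__init__()
        self.heads = nn.ModuleDict({l: nn.Linear(hidden, n_tags) for l in languages})

    def forward(self, e, lang):
        # e: (1, N, hidden) tagger states; lang selects W_l, b_l
        return self.heads[lang](e).log_softmax(-1)
```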
The third model we exhibit is a joint architecture for tagging and language identification. We consider the following loss function:
$${\cal L}_{\textit {joint}} ({\theta }) = -\sum _{({t}, {w}) \in {\cal D}_s} \log p_{{\theta }}(\ell _s, {t}\mid {w}) - \sum _{({t}, {w}) \in {\cal D}_t} \log p_{{\theta }}\left(\ell _t, {t}\mid {w}\right),$$ (Eq. 26)
where we factor the joint distribution as
$$p_{{\theta }}\left(\ell , {t}\mid {w}\right) = p_{{\theta }}\left(\ell \mid {w}\right) \cdot p_{{\theta }}\left({t}\mid {w}, \ell \right).$$ (Eq. 27)
Just as before, we define $p_{{\theta }}\left({t}\mid {w}, \ell \right)$ above as in eq:lang-specific and we define
$$p_{{\theta }}(\ell \mid {w}) = \text{softmax}\left(U\tanh (V{e}_i)\right),$$ (Eq. 28)
which is a multi-layer perceptron with a binary softmax (over the two languages) as an output layer; we have added the additional parameters $V \in \mathbb {R}^{2 \times n}$ and $U \in \mathbb {R}^{2 \times 2}$ . In the case of multi-source transfer, this is a softmax over the set of languages.
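A sketch of the joint variant is given below: a small MLP predicts the language from the tagger states, and the loss adds the two negative log-likelihoods of the factorization above. Pooling over the sentence for a single language prediction is our assumption.

```python
# Joint tagging and language identification: L = -log p(l|w) - sum_i log p(t_i|w,l).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageIdentifier(nn.Module):
    def __init__(self, hidden, n_langs=2):
        super().__init__()
        self.V = nn.Linear(hidden, n_langs)
        self.U = nn.Linear(n_langs, n_langs)

    def forward(self, e):
        # e: (1, N, hidden) tagger states; pool over the sentence for one prediction
        return self.U(torch.tanh(self.V(e.mean(dim=1)))).log_softmax(-1)

def joint_loss(tag_log_probs, lang_log_probs, gold_tags, gold_lang):
    # tag_log_probs: (1, N, |T|); lang_log_probs: (1, n_langs); gold_lang: int index
    tag_nll = F.nll_loss(tag_log_probs.squeeze(0), gold_tags, reduction="sum")
    lang_nll = -lang_log_probs[0, gold_lang]
    return tag_nll + lang_nll
```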
The first two architectures discussed in par:arch1 represent two possibilities for a multi-task objective, where we condition on the language of the sentence. The first integrates this knowledge at a lower level and the second at a higher level. The third architecture discussed in sec:joint-arch takes a different tack—rather than conditioning on the language, it predicts it. The joint model offers one interesting advantage over the two architectures proposed. Namely, it allows us to perform a morphological analysis on a sentence where the language is unknown. This effectively obviates an early step in the NLP pipeline, where language identification is performed, and is useful in conditions where the language to be tagged may not be known a priori, e.g., when tagging social media data.
While there are certainly more complex architectures one could engineer for the task, we believe we have found a relatively diverse sampling, enabling an interesting experimental comparison. Indeed, it is an important empirical question which architectures are most appropriate for transfer learning. Since transfer learning affords the opportunity to reduce the sample complexity of the “data-hungry” neural networks that currently dominate NLP research, finding a good solution for cross-lingual transfer in state-of-the-art neural models will likely be a boon for low-resource NLP in general.
Experiments
Empirically, we ask three questions of our architectures. i) How well can we transfer morphological tagging models from high-resource languages to low-resource languages in each architecture? (Does one of the three outperform the others?) ii) How much annotated data in the low-resource language do we need? iii) How closely related do the languages need to be to get good transfer?
Experimental Languages
We experiment with four language families: Romance (Indo-European), Northern Germanic (Indo-European), Slavic (Indo-European) and Uralic. In the Romance sub-grouping of the wider Indo-European family, we experiment on Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro) and Spanish (es). In the Northern Germanic family, we experiment on Danish (da), Norwegian (no) and Swedish (sv). In the Slavic family, we experiment on Bulgarian (bg), Czech (cs), Polish (pl), Russian (ru), Slovak (sk) and Ukrainian (uk). Finally, in the Uralic family we experiment on Estonian (et), Finnish (fi) and Hungarian (hu).
Datasets
We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\text{th}$ and $6^\text{th}$ columns of the file format) BIBREF13 . We list the size of the training, development and test splits of the UD treebanks we used in tab:lang-size. Also, we list the number of unique morphological tags in each language in tab:num-tags, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see sec:morpho-tagging for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme.
Baselines
We consider two baselines in our work. First, we consider the MarMoT tagger BIBREF17 , which is currently the best performing non-neural model. The source code for MarMoT is freely available online, which allows us to perform fully controlled experiments with this model. Second, we consider the alignment-based projection approach of buys-botha:2016:P16-1. We discuss each of the two baselines in turn.
The MarMoT tagger is the leading non-neural approach to morphological tagging. This baseline is important since non-neural, feature-based approaches have been found empirically to be more sample-efficient, in the sense that their learning curves tend to be steeper. Thus, in the low-resource setting we would be remiss not to consider a feature-based approach. Note that this is not a transfer approach, but rather one that only uses the low-resource data.
The projection approach of buys-botha:2016:P16-1 provides an alternative method for transfer learning. The idea is to construct pseudo-annotations for bitext given an alignment BIBREF21 . Then, one trains a standard tagger using the projected annotations. The specific tagger employed is the wsabie model of DBLP:conf/ijcai/WestonBU11, which—like our approach—is a $0^\text{th}$ -order discriminative neural model. In contrast to ours, however, their network is shallow. We compare the two methods in more detail in sec:related-work.
Additionally, we perform a thorough study of the neural transfer learner, considering all three architectures. A primary goal of our experiments is to determine which of our three proposed neural transfer techniques is superior. Even though our experiments focus on morphological tagging, these architectures are more general in that they may be easily applied to other tasks, e.g., parsing or machine translation. We additionally explore the viability of multi-source transfer, i.e., the case where we have multiple source languages. All of our architectures generalize to the multi-source case without any complications.
Experimental Details
We train our models with the following conditions.
We evaluate using average per token accuracy, as is standard for both POS tagging and morphological tagging, and per feature $F_1$ as employed in buys-botha:2016:P16-1. The per feature $F_1$ calculates an $F^k_1$ for each key in the target language's tags by asking whether the key–attribute pair $k_i = v_i$ is in the predicted tag. Then, the key-specific $F^k_1$ values are averaged equally. Note that $F_1$ is a more flexible metric as it gives partial credit for getting some of the attributes in the bundle correct, where accuracy does not.
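For concreteness, the following plain-Python sketch computes the per-feature $F_1$ from gold and predicted tag bundles. It follows the description above rather than the authors' evaluation code, and the bundle string format (semicolon-separated key=value pairs) is an assumption.

```python
# Illustrative per-feature F1 for morphological tag bundles (not the authors' script).
# A tag bundle is assumed to be a string like "pos=V;per=1;num=sg;tns=pres".

def parse_bundle(tag):
    """Split a tag bundle into a set of (key, value) pairs."""
    return {tuple(kv.split("=", 1)) for kv in tag.split(";") if kv}

def per_feature_f1(gold_tags, pred_tags):
    """Average the per-key F1 values over all keys observed in the data."""
    counts = {}  # key -> [true positives, gold count, predicted count]
    for gold, pred in zip(gold_tags, pred_tags):
        g, p = parse_bundle(gold), parse_bundle(pred)
        for k, v in g:
            c = counts.setdefault(k, [0, 0, 0])
            c[1] += 1
            if (k, v) in p:
                c[0] += 1
        for k, v in p:
            counts.setdefault(k, [0, 0, 0])[2] += 1
    f1s = []
    for tp, n_gold, n_pred in counts.values():
        prec = tp / n_pred if n_pred else 0.0
        rec = tp / n_gold if n_gold else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

print(per_feature_f1(["pos=V;per=1;num=sg"], ["pos=V;per=3;num=sg"]))  # 0.666..., two of three keys match
```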
Our networks are four layers deep (two LSTM layers for the character embedder, i.e., to compute ${v_i}$ , and two LSTM layers for the tagger, i.e., to compute ${e_i}$ ) and we use an embedding size of 128 for the character input vector size and hidden layers of 256 nodes in all other cases. All networks are trained with the stochastic gradient method RMSProp BIBREF22 , with a fixed initial learning rate and a learning rate decay that is adjusted for the other languages according to the amount of training data. The batch size is always 16. Furthermore, we use dropout BIBREF23 . The dropout probability is set to 0.2. We used Torch 7 BIBREF24 to configure the computation graphs implementing the network architectures.
Results and Discussion
We report our results in two tables. First, we report a detailed cross-lingual evaluation in tab:results. Second, we report a comparison against two baselines in tab:baseline-table1 (accuracy) and tab:baseline-table2 ( $F_1$ ). We see two general trends in the data. First, we find that genetically closer languages yield better source languages. Second, we find that the multi-softmax architecture is the best in terms of transfer ability, as evinced by the results in tab:results. We find a wider gap between our model and the baselines under accuracy than under $F_1$ . We attribute this to the fact that $F_1$ is a softer metric in that it assigns credit to partially correct guesses.
Related Work
We divide the discussion of related work topically into three parts for ease of intellectual digestion.
Alignment-Based Distant Supervision.
Most cross-lingual work in NLP—focusing on morphology or otherwise—has concentrated on indirect supervision, rather than transfer learning. The goal in such a regime is to provide noisy labels for training the tagger in the low-resource language through annotations projected over aligned bitext with a high-resource language. This method of projection was first introduced by DBLP:conf/naacl/YarowskyN01 for the projection of POS annotation. While follow-up work BIBREF26 , BIBREF27 , BIBREF28 has continually demonstrated the efficacy of projecting simple part-of-speech annotations, buys-botha:2016:P16-1 were the first to show the use of bitext-based projection for the training of a morphological tagger for low-resource languages.
As we also discuss the training of a morphological tagger, our work is most closely related to buys-botha:2016:P16-1 in terms of the task itself. We contrast the approaches. The main difference is that our approach is not projection-based and, thus, does not require the construction of a bilingual lexicon for projection based on bitext. Rather, our method jointly learns multiple taggers and forces them to share features—a true transfer learning scenario. In contrast to projection-based methods, our procedure always requires a minimal amount of annotated data in the low-resource target language—in practice, however, this distinction is non-critical, as projection-based methods without a small amount of seed target-language data perform poorly BIBREF29 .
Character-level NLP.
Our work also follows a recent trend in NLP, whereby traditional word-level neural representations are being replaced by character-level representations for a myriad of tasks, e.g., POS tagging DBLP:conf/icml/SantosZ14, parsing BIBREF30 , language modeling BIBREF31 , sentiment analysis BIBREF32 as well as the tagger of heigold2017, whose work we build upon. Our work is also related to recent work on character-level morphological generation using neural architectures BIBREF33 , BIBREF34 .
Neural Cross-lingual Transfer in NLP.
In terms of methodology, however, our proposal bears similarity to recent work in speech recognition and machine translation; we discuss each in turn. In speech recognition, heigold2013multilingual train a cross-lingual neural acoustic model on five Romance languages. The architecture bears similarity to our multi-language softmax approach. Dependency parsing benefits from cross-lingual learning in a similar fashion BIBREF35 , BIBREF20 .
In neural machine translation BIBREF36 , BIBREF37 , recent work BIBREF38 , BIBREF39 , BIBREF40 has explored the possibility of jointly training translation models for a wide variety of languages. Our work addresses a different task, but the undergirding philosophical motivation is similar, i.e., attacking low-resource NLP through multi-task transfer learning. kann-cotterell-schutze:2017:ACL2017 offer a similar method for cross-lingual transfer in morphological inflection generation.
Conclusion
We have presented three character-level recurrent neural network architectures for multi-task cross-lingual transfer of morphological taggers. We provided an empirical evaluation of the technique on 18 languages from four different language families, showing wide-spread applicability of the method. We found that the transfer of morphological taggers is an eminently viable endeavor among related languages and, in general, the closer the languages, the easier the transfer of morphology becomes. Our technique outperforms two strong baselines proposed in previous work. Moreover, we define standard low-resource training splits in UD for future research in low-resource morphological tagging. Future work should focus on extending the neural morphological tagger to a joint lemmatizer BIBREF41 and evaluate its functionality in the low-resource setting.
Acknowledgements
RC acknowledges the support of an NDSEG fellowship. Also, we would like to thank Jan Buys and Jan Botha who helped us compare to the numbers reported in their paper. We would also like to thank Hinrich Schütze for reading an early draft and Tim Vieira and Jason Naradowsky for helpful initial discussions. | shared character embeddings for taggers in both languages together through optimization of a joint loss function |
4056ee2fd7a0a0f444275e627bb881134a1c2a10 | 4056ee2fd7a0a0f444275e627bb881134a1c2a10_0 | Q: On which dataset is the experiment conducted?
Text: Introduction
State-of-the-art morphological taggers require thousands of annotated sentences to train. For the majority of the world's languages, however, sufficient, large-scale annotation is not available and obtaining it would often be infeasible. Accordingly, an important road forward in low-resource NLP is the development of methods that allow for the training of high-quality tools from smaller amounts of data. In this work, we focus on transfer learning—we train a recurrent neural tagger for a low-resource language jointly with a tagger for a related high-resource language. Forcing the models to share character-level features among the languages allows large gains in accuracy when tagging the low-resource languages, while maintaining (or even improving) accuracy on the high-resource language.
Recurrent neural networks constitute the state of the art for a myriad of tasks in NLP, e.g., multi-lingual part-of-speech tagging BIBREF0 , syntactic parsing BIBREF1 , BIBREF2 , morphological paradigm completion BIBREF3 , BIBREF4 and language modeling BIBREF5 , BIBREF6 ; recently, such models have also improved morphological tagging BIBREF7 , BIBREF8 . In addition to increased performance over classical approaches, neural networks also offer a second advantage: they admit a clean paradigm for multi-task learning. If the learned representations for all of the tasks are embedded jointly into a shared vector space, the various tasks reap benefits from each other and often performance improves for all BIBREF9 . We exploit this idea for language-to-language transfer to develop an approach for cross-lingual morphological tagging.
We experiment on 18 languages taken from four different language families. Using the Universal Dependencies treebanks, we emulate a low-resource setting for our experiments, e.g., we attempt to train a morphological tagger for Catalan using primarily data from a related language like Spanish. Our results demonstrate the successful transfer of morphological knowledge from the high-resource languages to the low-resource languages without relying on an externally acquired bilingual lexicon or bitext. We consider both the single- and multi-source transfer case and explore how similar two languages must be in order to enable high-quality transfer of morphological taggers.
Morphological Tagging
Many languages in the world exhibit rich inflectional morphology: the form of individual words mutates to reflect the syntactic function. For example, the Spanish verb soñar will appear as sueño in the first person present singular, but soñáis in the second person present plural, depending on the bundle of syntacto-semantic attributes associated with the given form (in a sentential context). For concreteness, we list a more complete table of Spanish verbal inflections in tab:paradigm. Note that some languages, e.g. the Northeastern Caucasian language Archi, display a veritable cornucopia of potential forms with the size of the verbal paradigm exceeding 10,000 BIBREF10 .
Standard NLP annotation, e.g., the scheme in sylakglassman-EtAl:2015:ACL-IJCNLP, marks forms in terms of universal key–attribute pairs, e.g., the first person present singular is represented as $\left[\text{pos=V, per=1, num=sg, tns=pres}\right]$ . This bundle of key–attribute pairs is typically termed a morphological tag, and we may view the goal of morphological tagging as labeling each word in its sentential context with the appropriate tag BIBREF11 , BIBREF12 . As the part-of-speech (POS) is a component of the tag, we may view morphological tagging as a strict generalization of POS tagging, where we have significantly refined the set of available tags. All of the experiments in this paper make use of the universal morphological tag set available in the Universal Dependencies (UD) BIBREF13 . As an example, we have provided a Russian sentence with its UD tagging in fig:russian-sentence.
Character-Level Neural Transfer
Our formulation of transfer learning builds on work in multi-task learning BIBREF15 , BIBREF9 . We treat each individual language as a task and train a joint model for all the tasks. We first discuss the current state of the art in morphological tagging: a character-level recurrent neural network. After that, we explore three augmentations to the architecture that allow for the transfer learning scenario. All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space, but involve different mechanisms, by which the model may learn language-specific features.
Character-Level Neural Networks
Character-level neural networks currently constitute the state of the art in morphological tagging BIBREF8 . We draw on previous work in defining a conditional distribution over taggings ${t}$ for a sentence ${w}$ of length $|{w}| = N$ as
$$p_{\theta }({t} \mid {w}) = \prod _{i=1}^N p_{\theta }(t_i \mid {w}), \qquad \text{(Eq. 12)}$$
which may be seen as a $0^\text{th}$ order conditional random field (CRF) BIBREF16 with parameter vector ${{\theta }}$ . Importantly, this factorization of the distribution $p_{{\theta }}({{t}} \mid {{w}})$ also allows for efficient exact decoding and marginal inference in ${\cal O}(N)$ -time, but at the cost of not admitting any explicit interactions in the output structure, i.e., between adjacent tags. We parameterize the distribution over tags at each time step as
$$p_{\theta }(t_i \mid {w}) = \text{softmax}\left(W {e}_i + {b}\right), \qquad \text{(Eq. 15)}$$
where $W \in \mathbb {R}^{|{\cal T}| \times n}$ is an embedding matrix, ${b}\in \mathbb {R}^{|{\cal T}|}$ is a bias vector and positional embeddings ${e}_i$ are taken from a concatenation of the output of two long short-term memory recurrent neural networks (LSTMs) BIBREF18 , folded forward and backward, respectively, over a sequence of input vectors. This constitutes a bidirectional LSTM BIBREF19 . We define the positional embedding vector as follows
$${e}_i = \left[{\text{LSTM}}({v}_{1:i}); {\text{LSTM}}({v}_{i+1:N})\right], \qquad \text{(Eq. 17)}$$
where each ${v}_i \in \mathbb {R}^n$ is, itself, a word embedding. Note that the function $\text{LSTM}$ returns the last final hidden state vector of the network. This architecture is the context bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. Finally, we derive each word embedding vector ${v}_i$ from a character-level bidirectional LSTM embedder. Namely, we define each word embedding as the concatenation
$${v}_i = \left[{\text{LSTM}}\left(\langle c_{i_1}, \ldots , c_{i_{M_i}}\rangle \right);\; {\text{LSTM}}\left(\langle c_{i_{M_i}}, \ldots , c_{i_1}\rangle \right)\right]. \qquad \text{(Eq. 18)}$$
In other words, we run a bidirectional LSTM over the character stream. This bidirectional LSTM is the sequence bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. Note a concatenation of the sequence of character symbols $\langle c_{i_1}, \ldots , c_{i_{M_i}} \rangle $ results in the word string $w_i$ . Each of the $M_i$ characters $c_{i_k}$ is a member of the set $\Sigma $ . We take $\Sigma $ to be the union of sets of characters in the languages considered.
We direct the reader to heigold2017 for a more in-depth discussion of this and various additional architectures for the computation of ${v}_i$ ; the architecture we have presented in eq:embedder-v is competitive with the best performing setting in Heigold et al.'s study.
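A minimal PyTorch sketch of this hierarchy is given below: a character-level BiLSTM whose two final states form each word vector ${v}_i$, a word-level BiLSTM producing the positional embeddings ${e}_i$, and a softmax output over tags. The original system was implemented in Torch 7, and the per-word loop and layer sizes here are simplifications for readability, not the authors' code.

```python
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    """Character BiLSTM word embedder + word-level BiLSTM tagger (illustrative)."""

    def __init__(self, n_chars, n_tags, char_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # character-level BiLSTM: its two final states are concatenated into v_i
        self.char_lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        # word-level (context) BiLSTM over the sequence of word vectors
        self.word_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def embed_word(self, char_ids):
        # char_ids: (1, number_of_characters_in_word)
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        return torch.cat([h[0], h[1]], dim=-1)          # (1, 2*hidden) = v_i

    def forward(self, sentence):
        # sentence: list of (1, word_length) LongTensors, one per word
        v = torch.stack([self.embed_word(w) for w in sentence], dim=1)  # (1, N, 2*hidden)
        e, _ = self.word_lstm(v)                                        # (1, N, 2*hidden) = e_i
        return self.out(e)                                              # (1, N, n_tags) logits

# toy usage: a 3-word sentence over a 50-symbol character alphabet and 20 tags
model = CharBiLSTMTagger(n_chars=50, n_tags=20)
sentence = [torch.randint(0, 50, (1, length)) for length in (4, 7, 3)]
print(model(sentence).shape)  # torch.Size([1, 3, 20])
```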
Cross-Lingual Morphological Transfer as Multi-Task Learning
Cross-lingual morphological tagging may be formulated as a multi-task learning problem. We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one. The first loss function we consider is the following:
$${\cal L}_{\textit {multi}}({\theta }) = -\sum _{({t}, {w}) \in {\cal D}_s} \log p_{\theta }({t}\mid {w}, \ell _s) - \sum _{({t}, {w}) \in {\cal D}_t} \log p_{\theta }({t}\mid {w}, \ell _t). \qquad \text{(Eq. 20)}$$
Crucially, our cross-lingual objective forces both taggers to share part of the parameter vector ${\theta }$ , which allows it to represent morphological regularities between the two languages in a common embedding space and, thus, enables transfer of knowledge. This is no different from monolingual multi-task settings, e.g., jointly training a chunker and a tagger for the transfer of syntactic information BIBREF9 . We point out that, in contrast to our approach, almost all multi-task transfer learning, e.g., for dependency parsing BIBREF20 , has shared word-level embeddings rather than character-level embeddings. See sec:related-work for a more complete discussion.
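In practice, the joint objective amounts to summing the negative log-likelihoods of a source-language batch and a target-language batch under one shared parameter set. The sketch below uses a toy linear stand-in for the shared tagger (in the paper the shared parameters are those of the character and word BiLSTMs), so it only illustrates the loss structure.

```python
import torch
import torch.nn as nn

# Sketch of L_multi: one shared model, summed negative log-likelihood over a
# source-language batch and a (typically much smaller) target-language batch.

shared_tagger = nn.Linear(64, 20)             # toy stand-in for the shared parameters theta
xent = nn.CrossEntropyLoss(reduction="sum")   # summed NLL, matching the objective above

src_vecs, src_tags = torch.randn(32, 64), torch.randint(0, 20, (32,))
tgt_vecs, tgt_tags = torch.randn(8, 64), torch.randint(0, 20, (8,))

loss_multi = xent(shared_tagger(src_vecs), src_tags) + xent(shared_tagger(tgt_vecs), tgt_tags)
loss_multi.backward()                          # gradients flow into the shared parameters
print(float(loss_multi))
```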
We consider two parameterizations of this distribution $p_{{\theta }}(t_i \mid {w}, \ell )$ . First, we modify the initial character-level LSTM embedding such that it also encodes the identity of the language. Second, we modify the softmax layer, creating a language-specific softmax.
Our first architecture has one softmax, as in eq:tagger, over all morphological tags in ${\cal T}$ (shared among all the languages). To allow the architecture to encode morphological features specific to one language, e.g., the third person present plural ending in Spanish is -an, but -ão in Portuguese, we modify the creation of the character-level embeddings. Specifically, we augment the character alphabet $\Sigma $ with a distinguished symbol that indicates the language: $\text{{\tt id}}_\ell $ . We then pre- and postpend this symbol to the character stream for every word before feeding the characters into the bidirectional LSTM. Thus, we arrive at the new language-specific word embeddings,
$${v}^{\ell }_i = \left[{\text{LSTM}}\left(\langle \text{{\tt id}}_\ell , c_{i_1}, \ldots , c_{i_{M_i}}, \text{{\tt id}}_\ell \rangle \right);\; {\text{LSTM}}\left(\langle \text{{\tt id}}_\ell , c_{i_{M_i}}, \ldots , c_{i_1}, \text{{\tt id}}_\ell \rangle \right)\right]. \qquad \text{(Eq. 22)}$$
This model creates a language-specific embedding vector ${v}_i$ , but the individual embeddings for a given character are shared among the languages jointly trained on. The remainder of the architecture is held constant.
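The language-tagged character stream of this architecture can be produced by a small wrapper before lookup in the shared character vocabulary; a sketch follows, where the spelling of the $\text{{\tt id}}_\ell $ symbol is a hypothetical choice.

```python
# Pre- and postpend a distinguished language symbol to each word's character stream
# before it is fed to the shared character BiLSTM (illustrative only).

def tag_chars(word, lang):
    lang_symbol = f"<id:{lang}>"   # hypothetical spelling of the id_l symbol
    return [lang_symbol] + list(word) + [lang_symbol]

print(tag_chars("sueño", "es"))    # ['<id:es>', 's', 'u', 'e', 'ñ', 'o', '<id:es>']
print(tag_chars("sonho", "pt"))    # ['<id:pt>', 's', 'o', 'n', 'h', 'o', '<id:pt>']
```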
Next, inspired by the architecture of heigold2013multilingual, we consider a language-specific softmax layer, i.e., we define a new output layer for every language:
$$p_{\theta }\left(t_i \mid {w}, \ell \right) = \text{softmax}\left(W_{\ell } {e}_i + {b}_{\ell }\right), \qquad \text{(Eq. 24)}$$
where $W_{\ell } \in \mathbb {R}^{|{\cal T}| \times n}$ and ${b}_{\ell } \in \mathbb {R}^{|{\cal T}|}$ are now language-specific. In this architecture, the embeddings ${e}_i$ are the same for all languages—the model has to learn language-specific behavior exclusively through the output softmax of the tagging LSTM.
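A PyTorch sketch of this language-specific output layer, with one $(W_{\ell }, {b}_{\ell })$ pair per language over the shared embeddings ${e}_i$; the sizes below are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class LangSpecificSoftmax(nn.Module):
    """One softmax head (W_l, b_l) per language over shared embeddings e_i."""

    def __init__(self, langs, hidden, n_tags):
        super().__init__()
        self.heads = nn.ModuleDict({lang: nn.Linear(hidden, n_tags) for lang in langs})

    def forward(self, e, lang):
        # e: (batch, seq_len, hidden) shared positional embeddings
        return torch.softmax(self.heads[lang](e), dim=-1)

heads = LangSpecificSoftmax(["es", "pt"], hidden=512, n_tags=300)
e = torch.randn(1, 6, 512)
print(heads(e, "pt").shape)  # torch.Size([1, 6, 300])
```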
The third model we exhibit is a joint architecture for tagging and language identification. We consider the following loss function:
$${\cal L}_{\textit {joint}}({\theta }) = -\sum _{({t}, {w}) \in {\cal D}_s} \log p_{\theta }(\ell _s, {t}\mid {w}) - \sum _{({t}, {w}) \in {\cal D}_t} \log p_{\theta }(\ell _t, {t}\mid {w}), \qquad \text{(Eq. 26)}$$
where we factor the joint distribution as
$$p_{\theta }\left(\ell , {t}\mid {w}\right) = p_{\theta }\left(\ell \mid {w}\right) \cdot p_{\theta }\left({t}\mid {w}, \ell \right). \qquad \text{(Eq. 27)}$$
Just as before, we define $p_{{\theta }}\left({t}\mid {w}, \ell \right)$ above as in eq:lang-specific and we define
$$p_{\theta }(\ell \mid {w}) = \text{softmax}\left(U\tanh (V{e}_i)\right), \qquad \text{(Eq. 28)}$$
which is a multi-layer perceptron with a binary softmax (over the two languages) as an output layer; we have added the additional parameters $V \in \mathbb {R}^{2 \times n}$ and $U \in \mathbb {R}^{2 \times 2}$ . In the case of multi-source transfer, this is a softmax over the set of languages.
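The factorization $p_{\theta }(\ell , {t}\mid {w}) = p_{\theta }(\ell \mid {w}) \cdot p_{\theta }({t}\mid {w}, \ell )$ can be sketched as a small language-identification MLP (mirroring the shapes in Eq. 28) next to per-language tag heads (as in Eq. 24). The sizes and the per-language head choice below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

hidden, n_tags, n_langs = 512, 300, 2

# p(l | w): softmax(U tanh(V e_i)), V maps hidden -> n_langs, U is n_langs x n_langs
lang_head = nn.Sequential(nn.Linear(hidden, n_langs), nn.Tanh(), nn.Linear(n_langs, n_langs))
# p(t | w, l): one language-specific tag head per language
tag_heads = nn.ModuleList([nn.Linear(hidden, n_tags) for _ in range(n_langs)])

e = torch.randn(1, hidden)                                    # shared embedding e_i
log_p_lang = torch.log_softmax(lang_head(e), dim=-1)          # (1, n_langs)
log_p_tag = torch.stack([torch.log_softmax(h(e), dim=-1) for h in tag_heads], dim=1)
log_p_joint = log_p_lang.unsqueeze(-1) + log_p_tag            # (1, n_langs, n_tags)
print(log_p_joint.shape)
```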
The first two architectures discussed in par:arch1 represent two possibilities for a multi-task objective, where we condition on the language of the sentence. The first integrates this knowledge at a lower level and the second at a higher level. The third architecture discussed in sec:joint-arch takes a different tack—rather than conditioning on the language, it predicts it. The joint model offers one interesting advantage over the other two architectures. Namely, it allows us to perform a morphological analysis on a sentence whose language is unknown. This effectively removes the language identification step early in the NLP pipeline, which is useful when the language to be tagged is not known a priori, e.g., when tagging social media data.
While there are certainly more complex architectures one could engineer for the task, we believe we have found a relatively diverse sampling, enabling an interesting experimental comparison. Indeed, it is an important empirical question which architectures are most appropriate for transfer learning. Since transfer learning affords the opportunity to reduce the sample complexity of the “data-hungry” neural networks that currently dominate NLP research, finding a good solution for cross-lingual transfer in state-of-the-art neural models will likely be a boon for low-resource NLP in general.
Experiments
Empirically, we ask three questions of our architectures. i) How well can we transfer morphological tagging models from high-resource languages to low-resource languages in each architecture? (Does one of the three outperform the others?) ii) How much annotated data in the low-resource language do we need? iii) How closely related do the languages need to be to get good transfer?
Experimental Languages
We experiment with four language families: Romance (Indo-European), Northern Germanic (Indo-European), Slavic (Indo-European) and Uralic. In the Romance sub-grouping of the wider Indo-European family, we experiment on Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro) and Spanish (es). In the Northern Germanic family, we experiment on Danish (da), Norwegian (no) and Swedish (sv). In the Slavic family, we experiment on Bulgarian (bg), Czech (cs), Polish (pl), Russian (ru), Slovak (sk) and Ukrainian (uk). Finally, in the Uralic family we experiment on Estonian (et), Finnish (fi) and Hungarian (hu).
Datasets
We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\text{th}$ and $6^\text{th}$ columns of the file format) BIBREF13 . We list the size of the training, development and test splits of the UD treebanks we used in tab:lang-size. Also, we list the number of unique morphological tags in each language in tab:num-tags, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see sec:morpho-tagging for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme.
Baselines
We consider two baselines in our work. First, we consider the MarMoT tagger BIBREF17 , which is currently the best performing non-neural model. The source code for MarMoT is freely available online, which allows us to perform fully controlled experiments with this model. Second, we consider the alignment-based projection approach of buys-botha:2016:P16-1. We discuss each of the two baselines in turn.
The MarMoT tagger is the leading non-neural approach to morphological tagging. This baseline is important since non-neural, feature-based approaches have been found empirically to be more efficient, in the sense that their learning curves tend to be steeper. Thus, in the low-resource setting we would be remiss to not consider a feature-based approach. Note that this is not a transfer approach, but rather only uses the low-resource data.
The projection approach of buys-botha:2016:P16-1 provides an alternative method for transfer learning. The idea is to construct pseudo-annotations for bitext given an alignment BIBREF21 . Then, one trains a standard tagger using the projected annotations. The specific tagger employed is the wsabie model of DBLP:conf/ijcai/WestonBU11, which—like our approach—is a $0^\text{th}$ -order discriminative neural model. In contrast to ours, however, their network is shallow. We compare the two methods in more detail in sec:related-work.
Additionally, we perform a thorough study of the neural transfer learner, considering all three architectures. A primary goal of our experiments is to determine which of our three proposed neural transfer techniques is superior. Even though our experiments focus on morphological tagging, these architectures are more general in that they may be easily applied to other tasks, e.g., parsing or machine translation. We additionally explore the viability of multi-source transfer, i.e., the case where we have multiple source languages. All of our architectures generalize to the multi-source case without any complications.
Experimental Details
We train our models with the following conditions.
We evaluate using average per token accuracy, as is standard for both POS tagging and morphological tagging, and per feature $F_1$ as employed in buys-botha:2016:P16-1. The per feature $F_1$ calculates an $F^k_1$ for each key in the target language's tags by asking whether the key–attribute pair $k_i = v_i$ is in the predicted tag. Then, the key-specific $F^k_1$ values are averaged equally. Note that $F_1$ is a more flexible metric as it gives partial credit for getting some of the attributes in the bundle correct, where accuracy does not.
Our networks are four layers deep (two LSTM layers for the character embedder, i.e., to compute ${v_i}$ , and two LSTM layers for the tagger, i.e., to compute ${e_i}$ ) and we use an embedding size of 128 for the character input vector size and hidden layers of 256 nodes in all other cases. All networks are trained with the stochastic gradient method RMSProp BIBREF22 , with a fixed initial learning rate and a learning rate decay that is adjusted for the other languages according to the amount of training data. The batch size is always 16. Furthermore, we use dropout BIBREF23 . The dropout probability is set to 0.2. We used Torch 7 BIBREF24 to configure the computation graphs implementing the network architectures.
Results and Discussion
We report our results in two tables. First, we report a detailed cross-lingual evaluation in tab:results. Second, we report a comparison against two baselines in tab:baseline-table1 (accuracy) and tab:baseline-table2 ( $F_1$ ). We see two general trends in the data. First, we find that genetically closer languages yield better source languages. Second, we find that the multi-softmax architecture is the best in terms of transfer ability, as evinced by the results in tab:results. We find a wider gap between our model and the baselines under accuracy than under $F_1$ . We attribute this to the fact that $F_1$ is a softer metric in that it assigns credit to partially correct guesses.
Related Work
We divide the discussion of related work topically into three parts for ease of intellectual digestion.
Alignment-Based Distant Supervision.
Most cross-lingual work in NLP—focusing on morphology or otherwise—has concentrated on indirect supervision, rather than transfer learning. The goal in such a regime is to provide noisy labels for training the tagger in the low-resource language through annotations projected over aligned bitext with a high-resource language. This method of projection was first introduced by DBLP:conf/naacl/YarowskyN01 for the projection of POS annotation. While follow-up work BIBREF26 , BIBREF27 , BIBREF28 has continually demonstrated the efficacy of projecting simple part-of-speech annotations, buys-botha:2016:P16-1 were the first to show the use of bitext-based projection for the training of a morphological tagger for low-resource languages.
As we also discuss the training of a morphological tagger, our work is most closely related to buys-botha:2016:P16-1 in terms of the task itself. We contrast the approaches. The main difference is that our approach is not projection-based and, thus, does not require the construction of a bilingual lexicon for projection based on bitext. Rather, our method jointly learns multiple taggers and forces them to share features—a true transfer learning scenario. In contrast to projection-based methods, our procedure always requires a minimal amount of annotated data in the low-resource target language—in practice, however, this distinction is non-critical, as projection-based methods without a small amount of seed target-language data perform poorly BIBREF29 .
Character-level NLP.
Our work also follows a recent trend in NLP, whereby traditional word-level neural representations are being replaced by character-level representations for a myriad of tasks, e.g., POS tagging DBLP:conf/icml/SantosZ14, parsing BIBREF30 , language modeling BIBREF31 , sentiment analysis BIBREF32 as well as the tagger of heigold2017, whose work we build upon. Our work is also related to recent work on character-level morphological generation using neural architectures BIBREF33 , BIBREF34 .
Neural Cross-lingual Transfer in NLP.
In terms of methodology, however, our proposal bears similarity to recent work in speech recognition and machine translation; we discuss each in turn. In speech recognition, heigold2013multilingual train a cross-lingual neural acoustic model on five Romance languages. The architecture bears similarity to our multi-language softmax approach. Dependency parsing benefits from cross-lingual learning in a similar fashion BIBREF35 , BIBREF20 .
In neural machine translation BIBREF36 , BIBREF37 , recent work BIBREF38 , BIBREF39 , BIBREF40 has explored the possibility of jointly training translation models for a wide variety of languages. Our work addresses a different task, but the undergirding philosophical motivation is similar, i.e., attacking low-resource NLP through multi-task transfer learning. kann-cotterell-schutze:2017:ACL2017 offer a similar method for cross-lingual transfer in morphological inflection generation.
Conclusion
We have presented three character-level recurrent neural network architectures for multi-task cross-lingual transfer of morphological taggers. We provided an empirical evaluation of the technique on 18 languages from four different language families, showing wide-spread applicability of the method. We found that the transfer of morphological taggers is an eminently viable endeavor among related languages and, in general, the closer the languages, the easier the transfer of morphology becomes. Our technique outperforms two strong baselines proposed in previous work. Moreover, we define standard low-resource training splits in UD for future research in low-resource morphological tagging. Future work should focus on extending the neural morphological tagger to a joint lemmatizer BIBREF41 and evaluate its functionality in the low-resource setting.
Acknowledgements
RC acknowledges the support of an NDSEG fellowship. Also, we would like to thank Jan Buys and Jan Botha who helped us compare to the numbers reported in their paper. We would also like to thank Hinrich Schütze for reading an early draft and Tim Vieira and Jason Naradowsky for helpful initial discussions. | We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\text{th}$ and $6^\text{th}$ columns of the file format) BIBREF13 . |
f6496b8d09911cdf3a9b72aec0b0be6232a6dba1 | f6496b8d09911cdf3a9b72aec0b0be6232a6dba1_0 | Q: Do they train their own RE model?
Text: Introduction
Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?"
Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.
All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resource-poor language.
There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.
In this paper, we make the following contributions to cross-lingual RE:
We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.
We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves the-state-of-the-art performance without using language-specific resources.
We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.
We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.
Overview of the Approach
We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.
Build word embeddings for the source language and the target language separately using monolingual data.
Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.
Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.
For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.
We will describe each component of our approach in the subsequent sections.
Cross-Lingual Word Embeddings
In recent years, vector representations of words, known as word embeddings, have become ubiquitous in many NLP applications BIBREF12, BIBREF13, BIBREF14.
A monolingual word embedding model maps words in the vocabulary $\mathcal {V}$ of a language to real-valued vectors in $\mathbb {R}^{d\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.
Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.
In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.
Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings
To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.
The standard CBOW model has two matrices, the input word matrix $\tilde{\mathbf {X}} \in \mathbb {R}^{d\times V}$ and the output word matrix $\mathbf {X} \in \mathbb {R}^{d\times V}$. For the $i$th word $w_i$ in $\mathcal {V}$, let $\mathbf {e}(w_i) \in \mathbb {R}^{V \times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\tilde{\mathbf {x}}_i = \tilde{\mathbf {X}}\mathbf {e}(w_i)$ (the $i$th column of $\tilde{\mathbf {X}}$) is the input vector representation of word $w_i$, and $\mathbf {x}_i = \mathbf {X}\mathbf {e}(w_i)$ (the $i$th column of $\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.
Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:
$$\frac{1}{N}\sum _{t=1}^{N} \log p\left(w_t \mid w_{t-c},...,w_{t-1},w_{t+1},...,w_{t+c}\right).$$
The conditional probability is calculated using a softmax function:
where $\mathbf {x}_t=\mathbf {X}\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and
is the sum of the input vector representations of the context words.
In our variant of the CBOW model, we use a separate input word matrix $\tilde{\mathbf {X}}_j$ for a context word at position $j, -c \le j \le c, j\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, the context vector becomes a distance-weighted sum of position-specific input vector representations of the context words.
We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21.
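The sketch below illustrates how the context representation changes in this variant: position-specific input matrices and weights that decay with distance. The exact decay schedule is not specified above, so the $1/|j|$ weighting here is only an assumption.

```python
import numpy as np

# Variant CBOW context vector: position-specific input matrices X_j and weights
# that decay with distance |j| from the target word (decay schedule is an assumption).

d, V, c = 8, 100, 2
rng = np.random.default_rng(0)
X_j = {j: rng.normal(size=(d, V)) for j in range(-c, c + 1) if j != 0}

def one_hot(i, V):
    e = np.zeros(V)
    e[i] = 1.0
    return e

def context_vector(window):
    # window: dict mapping relative position j -> word index of the context word
    return sum((1.0 / abs(j)) * X_j[j] @ one_hot(w, V) for j, w in window.items())

print(context_vector({-2: 5, -1: 17, 1: 42, 2: 3}).shape)  # (8,)
```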
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping
BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.
Let $\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\mathbf {x}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the source-language word $w_i$, $\mathbf {y}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the target-language word $v_i$.
We find a linear mapping (matrix) $\mathbf {M}_{t\rightarrow s}$ such that $\mathbf {M}_{t\rightarrow s}\mathbf {y}_i$ approximates $\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:
$$\mathbf {M}_{t\rightarrow s} = \mathop {\mathrm {arg\,min}}_{\mathbf {M}} \sum _{i=1}^{D} \Vert \mathbf {M}\mathbf {y}_i - \mathbf {x}_i \Vert ^2 .$$
Using $\mathbf {M}_{t\rightarrow s}$, for any target-language word $v$ with word embedding $\mathbf {y}$, we can project it into the source-language embedding space as $\mathbf {M}_{t\rightarrow s}\mathbf {y}$.
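The regular mapping can be obtained with an ordinary least squares solve; a NumPy sketch follows, with random matrices standing in for real embeddings.

```python
import numpy as np

# Learn M_{t->s} such that M y_i ~ x_i for the D dictionary pairs (least squares).
# X holds English embeddings and Y target-language embeddings, one pair per row;
# the random data below stands in for real embeddings.

rng = np.random.default_rng(0)
D, d = 1000, 300
X = rng.normal(size=(D, d))       # source (English) embeddings x_i
Y = rng.normal(size=(D, d))       # target-language embeddings y_i

# min_M sum_i ||M y_i - x_i||^2  <=>  Y M^T ~ X in row form
M_T, *_ = np.linalg.lstsq(Y, X, rcond=None)
M = M_T.T

def project(y):
    """Map a target-language embedding into the English embedding space."""
    return M @ y

print(project(Y[0]).shape)  # (300,)
```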
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation
To ensure that all the training instances in the dictionary $\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.
First, we normalize the source-language and target-language word embeddings to be unit vectors: $\mathbf {x}^{\prime }=\frac{\mathbf {x}}{||\mathbf {x}||}$ for each source-language word embedding $\mathbf {x}$, and $\mathbf {y}^{\prime }= \frac{\mathbf {y}}{||\mathbf {y}||}$ for each target-language word embedding $\mathbf {y}$.
Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\mathbf {M}$ is an orthogonal matrix, i.e., $\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}$ where $\mathbf {I}$ denotes the identity matrix:
$$\mathbf {M}^{O}_{t\rightarrow s} = \mathop {\mathrm {arg\,min}}_{\mathbf {M}:\,\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}} \sum _{i=1}^{D} \Vert \mathbf {M}\mathbf {y}^{\prime }_i - \mathbf {x}^{\prime }_i \Vert ^2 .$$
$\mathbf {M}^{O} _{t\rightarrow s}$ can be computed using singular-value decomposition (SVD).
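With length-normalized embeddings, the orthogonally constrained problem has the closed-form Procrustes solution via an SVD; a NumPy sketch (again with random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 1000, 300
X = rng.normal(size=(D, d))       # source (English) embeddings, one per row
Y = rng.normal(size=(D, d))       # target-language embeddings, one per row

# length-normalize every embedding to a unit vector
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)

# min_M sum_i ||M y'_i - x'_i||^2 s.t. M^T M = I  =>  M = U V^T, where U S V^T = SVD(Xn^T Yn)
U, _, Vt = np.linalg.svd(Xn.T @ Yn)
M_orth = U @ Vt

print(np.allclose(M_orth.T @ M_orth, np.eye(d)))  # True: the mapping is orthogonal
```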
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings
The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.
BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.
We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45.
Neural Network RE Models
For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.
Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.
Neural Network RE Models ::: Embedding Layer
For an English sentence with $n$ words $\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\mathbf {x}_t\in \mathbb {R}^{d \times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\mathbf {l}_m \in \mathbb {R}^{d_m \times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$.
Neural Network RE Models ::: Context Layer
Given the word embeddings $\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.
Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer
The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNNs that have been invented to better capture long-range dependencies in sequential data.
We pass the word embeddings $\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\overrightarrow{\mathbf {c}}_t$ and three gates: an input gate $\overrightarrow{\mathbf {i}}_t$, a forget gate $\overrightarrow{\mathbf {f}}_t$ and an output gate $\overrightarrow{\mathbf {o}}_t$ ($\overrightarrow{\cdot }$ indicates the forward direction), which are updated as follows:
where $\sigma $ is the element-wise sigmoid function and $\odot $ is the element-wise multiplication.
The hidden state vector $\overrightarrow{\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\overleftarrow{\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\mathbf {h}_t = [\overrightarrow{\mathbf {h}}_t, \overleftarrow{\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence.
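A minimal PyTorch sketch of the Bi-LSTM context layer; the hidden size is an arbitrary choice here, not a value taken from the paper.

```python
import torch
import torch.nn as nn

# Bi-LSTM context layer: word embeddings in, context-aware vectors h_t out,
# where each h_t concatenates the forward and backward hidden states.

d, hidden = 300, 256
context_layer = nn.LSTM(input_size=d, hidden_size=hidden,
                        batch_first=True, bidirectional=True)

x = torch.randn(1, 12, d)     # one 12-word sentence of word embeddings
h, _ = context_layer(x)
print(h.shape)                # torch.Size([1, 12, 512])
```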
Neural Network RE Models ::: Context Layer ::: CNN Context Layer
The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which apply a convolution-like operation on successive windows of size $k$ around each word in the sentence. Let $\mathbf {z}_t = [\mathbf {x}_{t-(k-1)/2},...,\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector
$$\mathbf {h}_t = \tanh \left(\mathbf {W}\mathbf {z}_t + \mathbf {b}\right)$$
for each word $w_t$, where $\mathbf {W}$ is a weight matrix and $\mathbf {b}$ is a bias vector, and $\tanh (\cdot )$ is the element-wise hyperbolic tangent function.
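The affine part of $\tanh (\mathbf {W}\mathbf {z}_t + \mathbf {b})$ is exactly what a width-$k$ one-dimensional convolution computes; a PyTorch sketch for $k=3$, with an arbitrary hidden size.

```python
import torch
import torch.nn as nn

# CNN context layer with window size k=3: a width-3 1-D convolution over the word
# embeddings computes W z_t + b at every position (padding keeps the length); tanh
# is then applied element-wise.

d, d_h = 300, 256
conv = nn.Conv1d(in_channels=d, out_channels=d_h, kernel_size=3, padding=1)

x = torch.randn(1, 12, d)                   # (batch, words, d)
h = torch.tanh(conv(x.transpose(1, 2)))     # Conv1d expects (batch, channels, words)
h = h.transpose(1, 2)                       # back to (batch, words, d_h)
print(h.shape)                              # torch.Size([1, 12, 256])
```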
Neural Network RE Models ::: Summarization Layer
After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\mathbf {h}_1,....,\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.
We divide the hidden state vectors $\mathbf {h}_t$'s into 5 groups:
$G_1=\lbrace \mathbf {h}_{1},..,\mathbf {h}_{b_1-1}\rbrace $ includes vectors that are left to the first entity $m_1$.
$G_2=\lbrace \mathbf {h}_{b_1},..,\mathbf {h}_{e_1}\rbrace $ includes vectors that are in the first entity $m_1$.
$G_3=\lbrace \mathbf {h}_{e_1+1},..,\mathbf {h}_{b_2-1}\rbrace $ includes vectors that are between the two entities.
$G_4=\lbrace \mathbf {h}_{b_2},..,\mathbf {h}_{e_2}\rbrace $ includes vectors that are in the second entity $m_2$.
$G_5=\lbrace \mathbf {h}_{e_2+1},..,\mathbf {h}_{n}\rbrace $ includes vectors that are right to the second entity $m_2$.
We perform element-wise max pooling among the vectors in each group:
$$\left(\mathbf {h}_{G_i}\right)_k = \max _{\mathbf {h} \in G_i} \left(\mathbf {h}\right)_k, \quad 1 \le k \le d_h,$$
where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\mathbf {h}_{G_i}$'s we get a fixed-length vector $\mathbf {h}_s=[\mathbf {h}_{G_1},...,\mathbf {h}_{G_5}]$.
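A sketch of the group-wise max pooling given the two entity spans; how an empty group (e.g., no words before the first entity) is handled is a design choice, and simply skipping it here is only one reasonable convention.

```python
import torch

def summarize(h, span1, span2):
    """Max-pool the context vectors in the five groups and concatenate the results.

    h: (n, d_h) context vectors; spans are (begin, end) with inclusive 0-based indices,
    and span1 is assumed to end before span2 begins.
    """
    (b1, e1), (b2, e2) = span1, span2
    groups = [h[:b1], h[b1:e1 + 1], h[e1 + 1:b2], h[b2:e2 + 1], h[e2 + 1:]]
    pooled = [g.max(dim=0).values for g in groups if g.shape[0] > 0]
    return torch.cat(pooled)

h = torch.randn(12, 512)
h_s = summarize(h, span1=(2, 3), span2=(7, 7))
print(h_s.shape)  # torch.Size([2560]) when all five groups are non-empty
```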
Neural Network RE Models ::: Output Layer
The output layer receives inputs from the previous layers (the summarization vector $\mathbf {h}_s$, the entity label embeddings $\mathbf {l}_{m_1}$ and $\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels.
Neural Network RE Models ::: Cross-Lingual RE Model Transfer
Given the word embeddings of a sequence of words in a target language $t$, $(\mathbf {y}_1,...,\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\mathbf {M}_{t\rightarrow s}$ learned in Section SECREF13: $(\mathbf {M}_{t\rightarrow s}\mathbf {y}_1, \mathbf {M}_{t\rightarrow s}\mathbf {y}_2,...,\mathbf {M}_{t\rightarrow s}\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.
Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.
Experiments
In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
Experiments ::: Datasets
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).
The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).
For both datasets, we create a class label “O" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.
Experiments ::: Source (English) RE Model Performance
We build 3 neural network English RE models under the architecture described in Section SECREF4:
The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.
The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.
The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.
First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.
We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.
In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.
While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\%$ of the data as the training set, $10\%$ as the development set, and keep the remaining $10\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.
We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.
Experiments ::: Cross-Lingual RE Performance
We apply the English RE models to the 7 target languages across a variety of language families.
Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size
The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.
We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.
Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings
We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:
Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;
Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);
Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;
Unsupervised: the mapping learned by the unsupervised method in BIBREF26.
The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).
We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.
Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data
The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.
Among the 2 neural network models, the Bi-LSTM model achieves a better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves over $40.0$ $F_1$ scores for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer, it still achieves $55\%$ and $52\%$ of the accuracy of the supervised Japanese and Arabic RE model, respectively, without using any manually annotated RE data in Japanese/Arabic.
We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic.
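The combination rule described above (pick the relation label with the highest probability among the five models) can be sketched as follows, with random probabilities standing in for model outputs.

```python
import numpy as np

n_models, n_examples, n_labels = 5, 4, 7
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(n_labels), size=(n_models, n_examples))  # (5, 4, 7)

best_per_model = probs.max(axis=-1)             # most confident probability per model
winning_model = best_per_model.argmax(axis=0)   # which model is most confident per example
labels = probs.argmax(axis=-1)                  # each model's predicted label
ensemble_pred = labels[winning_model, np.arange(n_examples)]
print(ensemble_pred)
```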
Experiments ::: Cross-Lingual RE Performance ::: Discussion
Since our approach projects the target-language word embeddings into the source-language embedding space while preserving the word order, it is expected to work better for a target language whose word order is more similar to that of the source language. Our experiments bear this out. The source language, English, is an SVO (Subject, Verb, Object) language, in which the subject comes first in a sentence, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese are also SVO languages, and our approach achieves over $70\%$ relative accuracy for these languages. On the other hand, Japanese is an SOV (Subject, Object, Verb) language and Arabic is a VSO (Verb, Subject, Object) language, and our approach achieves lower relative accuracy for these two languages.
Related Work
There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.
Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39.
Conclusion
In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.
Acknowledgments
We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments. | Yes |
5c90e1ed208911dbcae7e760a553e912f8c237a5 | 5c90e1ed208911dbcae7e760a553e912f8c237a5_0 | Q: How big are the datasets?
Text: Introduction
Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?"
Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.
All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by humans is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model from a resource-rich language to a resource-poor language.
There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.
In this paper, we make the following contributions to cross-lingual RE:
We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.
We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves state-of-the-art performance without using language-specific resources.
We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.
We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.
Overview of the Approach
We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.
Build word embeddings for the source language and the target language separately using monolingual data.
Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.
Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.
For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.
We will describe each component of our approach in the subsequent sections.
Cross-Lingual Word Embeddings
In recent years, vector representations of words, known as word embeddings, have become ubiquitous in many NLP applications BIBREF12, BIBREF13, BIBREF14.
A monolingual word embedding model maps words in the vocabulary $\mathcal {V}$ of a language to real-valued vectors in $\mathbb {R}^{d\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.
Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.
In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.
Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings
To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.
The standard CBOW model has two matrices, the input word matrix $\tilde{\mathbf {X}} \in \mathbb {R}^{d\times V}$ and the output word matrix $\mathbf {X} \in \mathbb {R}^{d\times V}$. For the $i$th word $w_i$ in $\mathcal {V}$, let $\mathbf {e}(w_i) \in \mathbb {R}^{V \times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\tilde{\mathbf {x}}_i = \tilde{\mathbf {X}}\mathbf {e}(w_i)$ (the $i$th column of $\tilde{\mathbf {X}}$) is the input vector representation of word $w_i$, and $\mathbf {x}_i = \mathbf {X}\mathbf {e}(w_i)$ (the $i$th column of $\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.
Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:
The conditional probability is calculated using a softmax function:
where $\mathbf {x}_t=\mathbf {X}\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and
is the sum of the input vector representations of the context words.
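For reference, the standard CBOW objective and its softmax take the following form; the context-sum symbol $\bar{\mathbf {x}}_t$ is introduced here only for illustration:

$$\frac{1}{N}\sum _{t=1}^{N} \log p(w_t \mid w_{t-c},\ldots ,w_{t-1},w_{t+1},\ldots ,w_{t+c}), \qquad p(w_t \mid w_{t-c},\ldots ,w_{t+c}) = \frac{\exp (\mathbf {x}_t^{\mathrm {T}}\,\bar{\mathbf {x}}_t)}{\sum _{i=1}^{V}\exp (\mathbf {x}_i^{\mathrm {T}}\,\bar{\mathbf {x}}_t)},$$
$$\bar{\mathbf {x}}_t = \sum _{-c\le j\le c,\, j\ne 0} \tilde{\mathbf {x}}_{t+j}.$$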
In our variant of the CBOW model, we use a separate input word matrix $\tilde{\mathbf {X}}_j$ for a context word at position $j, -c \le j \le c, j\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have
We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21.
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping
BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested learning a linear mapping between the vector spaces.
Let $\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\mathbf {x}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the source-language word $w_i$, $\mathbf {y}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the target-language word $v_i$.
We find a linear mapping (matrix) $\mathbf {M}_{t\rightarrow s}$ such that $\mathbf {M}_{t\rightarrow s}\mathbf {y}_i$ approximates $\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:
Using $\mathbf {M}_{t\rightarrow s}$, for any target-language word $v$ with word embedding $\mathbf {y}$, we can project it into the source-language embedding space as $\mathbf {M}_{t\rightarrow s}\mathbf {y}$.
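A minimal numpy sketch of this least-squares step, with the dictionary's source-side and target-side embeddings stacked row-wise; all variable names and the random placeholder data are illustrative:

import numpy as np

# X_src: (D, d) English embeddings of the dictionary words w_1..w_D
# Y_tgt: (D, d) target-language embeddings of their translations v_1..v_D
rng = np.random.default_rng(0)
D, d = 1000, 300
X_src = rng.standard_normal((D, d))    # placeholder for real source embeddings
Y_tgt = rng.standard_normal((D, d))    # placeholder for real target embeddings

# Solve min_M sum_i ||M y_i - x_i||^2, written row-wise as Y_tgt M^T ~ X_src.
M_T, *_ = np.linalg.lstsq(Y_tgt, X_src, rcond=None)   # (d, d); this is M transposed
M = M_T.T

# Project any target-language embedding y into the English space as M y.
y = rng.standard_normal(d)
x_projected = M @ y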
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation
To ensure that all the training instances in the dictionary $\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.
First, we normalize the source-language and target-language word embeddings to be unit vectors: $\mathbf {x}^{\prime }=\frac{\mathbf {x}}{||\mathbf {x}||}$ for each source-language word embedding $\mathbf {x}$, and $\mathbf {y}^{\prime }= \frac{\mathbf {y}}{||\mathbf {y}||}$ for each target-language word embedding $\mathbf {y}$.
Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\mathbf {M}$ is an orthogonal matrix, i.e., $\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}$ where $\mathbf {I}$ denotes the identity matrix:
$\mathbf {M}^{O} _{t\rightarrow s}$ can be computed using singular-value decomposition (SVD).
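A sketch of the orthogonal variant with length normalization, using the closed-form SVD (Procrustes) solution; matrix names are illustrative and the random data is a stand-in for real embeddings:

import numpy as np

def normalize_rows(E):
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def orthogonal_mapping(Y_tgt, X_src):
    # Orthogonal M with M y_i ~ x_i: take the SVD of X^T Y and set M = U V^T.
    Xn, Yn = normalize_rows(X_src), normalize_rows(Y_tgt)
    U, _, Vt = np.linalg.svd(Xn.T @ Yn)
    return U @ Vt

rng = np.random.default_rng(0)
X_src = rng.standard_normal((1000, 300))
Y_tgt = rng.standard_normal((1000, 300))
M_orth = orthogonal_mapping(Y_tgt, X_src)
assert np.allclose(M_orth @ M_orth.T, np.eye(300), atol=1e-6)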
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings
The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.
BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.
We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45.
Neural Network RE Models
For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.
Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.
Neural Network RE Models ::: Embedding Layer
For an English sentence with $n$ words $\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\mathbf {x}_t\in \mathbb {R}^{d \times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\mathbf {l}_m \in \mathbb {R}^{d_m \times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$.
Neural Network RE Models ::: Context Layer
Given the word embeddings $\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.
Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer
The first type of context layer is based on Long Short-Term Memory (LSTM) recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNN designed to better capture long-range dependencies in sequential data.
We pass the word embeddings $\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\overrightarrow{\mathbf {c}}_t$ and three gates: an input gate $\overrightarrow{\mathbf {i}}_t$, a forget gate $\overrightarrow{\mathbf {f}}_t$ and an output gate $\overrightarrow{\mathbf {o}}_t$ ($\overrightarrow{\cdot }$ indicates the forward direction), which are updated as follows:
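These are the standard LSTM gate and cell updates; one common formulation (the weight matrices $\mathbf {W}_\ast $, $\mathbf {U}_\ast $ and biases $\mathbf {b}_\ast $ are illustrative names) is:

$$\overrightarrow{\mathbf {i}}_t = \sigma (\mathbf {W}_i \mathbf {x}_t + \mathbf {U}_i \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_i), \quad \overrightarrow{\mathbf {f}}_t = \sigma (\mathbf {W}_f \mathbf {x}_t + \mathbf {U}_f \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_f), \quad \overrightarrow{\mathbf {o}}_t = \sigma (\mathbf {W}_o \mathbf {x}_t + \mathbf {U}_o \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_o),$$
$$\overrightarrow{\mathbf {c}}_t = \overrightarrow{\mathbf {f}}_t \odot \overrightarrow{\mathbf {c}}_{t-1} + \overrightarrow{\mathbf {i}}_t \odot \tanh (\mathbf {W}_c \mathbf {x}_t + \mathbf {U}_c \overrightarrow{\mathbf {h}}_{t-1} + \mathbf {b}_c), \qquad \overrightarrow{\mathbf {h}}_t = \overrightarrow{\mathbf {o}}_t \odot \tanh (\overrightarrow{\mathbf {c}}_t),$$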
where $\sigma $ is the element-wise sigmoid function and $\odot $ is the element-wise multiplication.
The hidden state vector $\overrightarrow{\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\overleftarrow{\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\mathbf {h}_t = [\overrightarrow{\mathbf {h}}_t, \overleftarrow{\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence.
Neural Network RE Models ::: Context Layer ::: CNN Context Layer
The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which apply a convolution-like operation to successive windows of size $k$ around each word in the sentence. Let $\mathbf {z}_t = [\mathbf {x}_{t-(k-1)/2},...,\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector
for each word $w_t$, where $\mathbf {W}$ is a weight matrix and $\mathbf {b}$ is a bias vector, and $\tanh (\cdot )$ is the element-wise hyperbolic tangent function.
Neural Network RE Models ::: Summarization Layer
After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\mathbf {h}_1,....,\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.
We divide the hidden state vectors $\mathbf {h}_t$'s into 5 groups:
$G_1=\lbrace \mathbf {h}_{1},..,\mathbf {h}_{b_1-1}\rbrace $ includes vectors to the left of the first entity $m_1$.
$G_2=\lbrace \mathbf {h}_{b_1},..,\mathbf {h}_{e_1}\rbrace $ includes vectors that are in the first entity $m_1$.
$G_3=\lbrace \mathbf {h}_{e_1+1},..,\mathbf {h}_{b_2-1}\rbrace $ includes vectors that are between the two entities.
$G_4=\lbrace \mathbf {h}_{b_2},..,\mathbf {h}_{e_2}\rbrace $ includes vectors that are in the second entity $m_2$.
$G_5=\lbrace \mathbf {h}_{e_2+1},..,\mathbf {h}_{n}\rbrace $ includes vectors to the right of the second entity $m_2$.
We perform element-wise max pooling among the vectors in each group:
where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\mathbf {h}_{G_i}$'s we get a fixed-length vector $\mathbf {h}_s=[\mathbf {h}_{G_1},...,\mathbf {h}_{G_5}]$.
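A numpy sketch of this grouping-and-pooling step, using 0-based inclusive entity spans; the zero-vector convention for an empty group is an assumption of this sketch, since that corner case is not spelled out above:

import numpy as np

def summarize(H, b1, e1, b2, e2):
    # H: (n, d_h) hidden states; (b1, e1) and (b2, e2): 0-based inclusive spans of the two entities.
    n, d_h = H.shape
    groups = [H[:b1], H[b1:e1 + 1], H[e1 + 1:b2], H[b2:e2 + 1], H[e2 + 1:]]
    pooled = [G.max(axis=0) if len(G) > 0 else np.zeros(d_h) for G in groups]
    return np.concatenate(pooled)          # fixed-length vector h_s of size 5 * d_h

H = np.random.default_rng(0).standard_normal((12, 8))
print(summarize(H, b1=2, e1=3, b2=7, e2=8).shape)   # (40,)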
Neural Network RE Models ::: Output Layer
The output layer receives inputs from the previous layers (the summarization vector $\mathbf {h}_s$, the entity label embeddings $\mathbf {l}_{m_1}$ and $\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:
Neural Network RE Models ::: Cross-Lingual RE Model Transfer
Given the word embeddings of a sequence of words in a target language $t$, $(\mathbf {y}_1,...,\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\mathbf {M}_{t\rightarrow s}$ learned in Section SECREF13: $(\mathbf {M}_{t\rightarrow s}\mathbf {y}_1, \mathbf {M}_{t\rightarrow s}\mathbf {y}_2,...,\mathbf {M}_{t\rightarrow s}\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.
Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.
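A sketch of the transfer step itself; english_re_model below is a hypothetical callable standing in for the trained English model, not an actual released API:

import numpy as np

def classify_target_sentence(target_vectors, entity_label_ids, M, english_re_model):
    # target_vectors: target-language word embeddings of one sentence; M: the learned (d, d) mapping.
    Y = np.stack(target_vectors)      # (n, d)
    projected = Y @ M.T               # apply M y_t to every token, row-wise
    # The projected embeddings and the language-independent entity label embeddings
    # are then passed to the English model exactly as if the sentence were English.
    return english_re_model(projected, entity_label_ids)

# toy usage with a stand-in model that returns a uniform distribution over 7 labels
dummy_model = lambda emb, ents: np.full(7, 1.0 / 7)
sentence = [np.random.default_rng(1).standard_normal(300) for _ in range(5)]
print(classify_target_sentence(sentence, entity_label_ids=[3, 5], M=np.eye(300), english_re_model=dummy_model))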
Experiments
In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
Experiments ::: Datasets
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).
The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).
For both datasets, we create a class label “O" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.
Experiments ::: Source (English) RE Model Performance
We build 3 neural network English RE models under the architecture described in Section SECREF4:
The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.
The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.
The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.
First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided into 6 domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and weblogs (wl). We apply the same data split as in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.
We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.
In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.
While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\%$ of the data as the training set, $10\%$ as the development set, and keep the remaining $10\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.
We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.
Experiments ::: Cross-Lingual RE Performance
We apply the English RE models to the 7 target languages across a variety of language families.
Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size
The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.
We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.
Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings
We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:
Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;
Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);
Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;
Unsupervised: the mapping learned by the unsupervised method in BIBREF26.
The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).
We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.
Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data
The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.
Of the two neural network models, the Bi-LSTM model achieves better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves $F_1$ scores above $40.0$ for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult transfer targets, the Bi-LSTM model still achieves $55\%$ and $52\%$ of the accuracy of the supervised Japanese and Arabic RE models, respectively, without using any manually annotated RE data in Japanese or Arabic.
We apply model ensembling to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initialized with different random seeds, apply the 5 models to the target languages, and combine the outputs by selecting, for each instance, the relation type label with the highest probability among the 5 models. This Ensemble approach improves over the single model by 0.6-1.9 $F_1$ points, except for Arabic.
Experiments ::: Cross-Lingual RE Performance ::: Discussion
Since our approach projects the target-language word embeddings into the source-language embedding space while preserving the word order, it is expected to work better for a target language whose word order is more similar to that of the source language. Our experiments bear this out. The source language, English, is an SVO (Subject, Verb, Object) language, in which the subject comes first in a sentence, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese are also SVO languages, and our approach achieves over $70\%$ relative accuracy for these languages. On the other hand, Japanese is an SOV (Subject, Object, Verb) language and Arabic is a VSO (Verb, Subject, Object) language, and our approach achieves lower relative accuracy for these two languages.
Related Work
There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.
Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39.
Conclusion
In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.
Acknowledgments
We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments. | In-house dataset consists of 3716 documents
ACE05 dataset consists of 1635 documents |
3c3b4797e2b21e2c31cf117ad9e52f327721790f | 3c3b4797e2b21e2c31cf117ad9e52f327721790f_0 | Q: What languages do they experiment on?
Text: Introduction
Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?"
Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.
All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by humans is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model from a resource-rich language to a resource-poor language.
There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.
In this paper, we make the following contributions to cross-lingual RE:
We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.
We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves state-of-the-art performance without using language-specific resources.
We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.
We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.
Overview of the Approach
We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.
Build word embeddings for the source language and the target language separately using monolingual data.
Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.
Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.
For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.
We will describe each component of our approach in the subsequent sections.
Cross-Lingual Word Embeddings
In recent years, vector representations of words, known as word embeddings, have become ubiquitous in many NLP applications BIBREF12, BIBREF13, BIBREF14.
A monolingual word embedding model maps words in the vocabulary $\mathcal {V}$ of a language to real-valued vectors in $\mathbb {R}^{d\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.
Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.
In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.
Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings
To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.
The standard CBOW model has two matrices, the input word matrix $\tilde{\mathbf {X}} \in \mathbb {R}^{d\times V}$ and the output word matrix $\mathbf {X} \in \mathbb {R}^{d\times V}$. For the $i$th word $w_i$ in $\mathcal {V}$, let $\mathbf {e}(w_i) \in \mathbb {R}^{V \times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\tilde{\mathbf {x}}_i = \tilde{\mathbf {X}}\mathbf {e}(w_i)$ (the $i$th column of $\tilde{\mathbf {X}}$) is the input vector representation of word $w_i$, and $\mathbf {x}_i = \mathbf {X}\mathbf {e}(w_i)$ (the $i$th column of $\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.
Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:
The conditional probability is calculated using a softmax function:
where $\mathbf {x}_t=\mathbf {X}\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and
is the sum of the input vector representations of the context words.
In our variant of the CBOW model, we use a separate input word matrix $\tilde{\mathbf {X}}_j$ for a context word at position $j, -c \le j \le c, j\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have
We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21.
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping
BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested learning a linear mapping between the vector spaces.
Let $\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\mathbf {x}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the source-language word $w_i$, $\mathbf {y}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the target-language word $v_i$.
We find a linear mapping (matrix) $\mathbf {M}_{t\rightarrow s}$ such that $\mathbf {M}_{t\rightarrow s}\mathbf {y}_i$ approximates $\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:
Using $\mathbf {M}_{t\rightarrow s}$, for any target-language word $v$ with word embedding $\mathbf {y}$, we can project it into the source-language embedding space as $\mathbf {M}_{t\rightarrow s}\mathbf {y}$.
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation
To ensure that all the training instances in the dictionary $\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.
First, we normalize the source-language and target-language word embeddings to be unit vectors: $\mathbf {x}^{\prime }=\frac{\mathbf {x}}{||\mathbf {x}||}$ for each source-language word embedding $\mathbf {x}$, and $\mathbf {y}^{\prime }= \frac{\mathbf {y}}{||\mathbf {y}||}$ for each target-language word embedding $\mathbf {y}$.
Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\mathbf {M}$ is an orthogonal matrix, i.e., $\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}$ where $\mathbf {I}$ denotes the identity matrix:
$\mathbf {M}^{O} _{t\rightarrow s}$ can be computed using singular-value decomposition (SVD).
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings
The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.
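A schematic numpy sketch of such a self-learning loop, with plain nearest-neighbour dictionary induction; the actual procedure in BIBREF25 adds further refinements (e.g. restricting candidates to frequent words), so this is only an illustration:

import numpy as np

def self_learning(X_src, Y_tgt, seed_pairs, iterations=5):
    # X_src: (V_s, d) source embeddings; Y_tgt: (V_t, d) target embeddings;
    # seed_pairs: list of (source_index, target_index) from the small seed dictionary.
    pairs = list(seed_pairs)
    for _ in range(iterations):
        src_idx, tgt_idx = zip(*pairs)
        # 1) learn a mapping from the current dictionary (least squares, as above)
        M_T, *_ = np.linalg.lstsq(Y_tgt[list(tgt_idx)], X_src[list(src_idx)], rcond=None)
        # 2) induce a new dictionary: pair every target word with its closest source word
        sims = (Y_tgt @ M_T) @ X_src.T          # dot-product similarity in the source space
        pairs = [(int(sims[j].argmax()), j) for j in range(Y_tgt.shape[0])]
    return M_T.T

rng = np.random.default_rng(0)
M = self_learning(rng.standard_normal((50, 10)), rng.standard_normal((60, 10)),
                  seed_pairs=[(i, i) for i in range(5)])
print(M.shape)   # (10, 10)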
BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.
We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45.
Neural Network RE Models
For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.
Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.
Neural Network RE Models ::: Embedding Layer
For an English sentence with $n$ words $\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\mathbf {x}_t\in \mathbb {R}^{d \times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\mathbf {l}_m \in \mathbb {R}^{d_m \times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$.
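A small numpy sketch of the lookups in this layer; the toy vocabulary, the entity type id and the random initialization are placeholders, while the dimensions $d=300$, $d_m=50$ and the 56 entity types follow the text above:

import numpy as np

rng = np.random.default_rng(0)
d, d_m, num_entity_types = 300, 50, 56
word_emb = {w: rng.standard_normal(d) for w in ["the", "capital", "of", "france", "<unk>"]}
entity_label_emb = rng.standard_normal((num_entity_types, d_m))   # randomly initialized, learned during training

sentence = ["the", "capital", "of", "france"]
X = np.stack([word_emb.get(w, word_emb["<unk>"]) for w in sentence])   # (n, 300) word embeddings
l_m = entity_label_emb[7]        # label embedding for an entity with hypothetical type id 7
print(X.shape, l_m.shape)        # (4, 300) (50,)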
Neural Network RE Models ::: Context Layer
Given the word embeddings $\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.
Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer
The first type of context layer is based on Long Short-Term Memory (LSTM) recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNN designed to better capture long-range dependencies in sequential data.
We pass the word embeddings $\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\overrightarrow{\mathbf {c}}_t$ and three gates: an input gate $\overrightarrow{\mathbf {i}}_t$, a forget gate $\overrightarrow{\mathbf {f}}_t$ and an output gate $\overrightarrow{\mathbf {o}}_t$ ($\overrightarrow{\cdot }$ indicates the forward direction), which are updated as follows:
where $\sigma $ is the element-wise sigmoid function and $\odot $ is the element-wise multiplication.
The hidden state vector $\overrightarrow{\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\overleftarrow{\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\mathbf {h}_t = [\overrightarrow{\mathbf {h}}_t, \overleftarrow{\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence.
Neural Network RE Models ::: Context Layer ::: CNN Context Layer
The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which apply a convolution-like operation to successive windows of size $k$ around each word in the sentence. Let $\mathbf {z}_t = [\mathbf {x}_{t-(k-1)/2},...,\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector
for each word $w_t$, where $\mathbf {W}$ is a weight matrix and $\mathbf {b}$ is a bias vector, and $\tanh (\cdot )$ is the element-wise hyperbolic tangent function.
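A numpy sketch of this window-based convolution with $k=3$; the zero padding at the sentence boundaries and the weight shapes are assumptions of this sketch:

import numpy as np

def cnn_context_layer(X, W, b, k=3):
    # X: (n, d) word embeddings; W: (d_h, k*d); b: (d_h,). Returns (n, d_h) hidden states.
    n, d = X.shape
    pad = k // 2
    X_pad = np.vstack([np.zeros((pad, d)), X, np.zeros((pad, d))])
    H = [np.tanh(W @ X_pad[t:t + k].reshape(-1) + b) for t in range(n)]   # h_t = tanh(W z_t + b)
    return np.stack(H)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 300))
H = cnn_context_layer(X, W=rng.standard_normal((200, 3 * 300)), b=np.zeros(200))
print(H.shape)   # (6, 200)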
Neural Network RE Models ::: Summarization Layer
After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\mathbf {h}_1,....,\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.
We divide the hidden state vectors $\mathbf {h}_t$'s into 5 groups:
$G_1=\lbrace \mathbf {h}_{1},..,\mathbf {h}_{b_1-1}\rbrace $ includes vectors to the left of the first entity $m_1$.
$G_2=\lbrace \mathbf {h}_{b_1},..,\mathbf {h}_{e_1}\rbrace $ includes vectors that are in the first entity $m_1$.
$G_3=\lbrace \mathbf {h}_{e_1+1},..,\mathbf {h}_{b_2-1}\rbrace $ includes vectors that are between the two entities.
$G_4=\lbrace \mathbf {h}_{b_2},..,\mathbf {h}_{e_2}\rbrace $ includes vectors that are in the second entity $m_2$.
$G_5=\lbrace \mathbf {h}_{e_2+1},..,\mathbf {h}_{n}\rbrace $ includes vectors to the right of the second entity $m_2$.
We perform element-wise max pooling among the vectors in each group:
where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\mathbf {h}_{G_i}$'s we get a fixed-length vector $\mathbf {h}_s=[\mathbf {h}_{G_1},...,\mathbf {h}_{G_5}]$.
Neural Network RE Models ::: Output Layer
The output layer receives inputs from the previous layers (the summarization vector $\mathbf {h}_s$, the entity label embeddings $\mathbf {l}_{m_1}$ and $\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:
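A sketch of the output layer as a softmax over the concatenated inputs; the single-linear-layer parameterization is an assumption of this sketch:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def output_layer(h_s, l_m1, l_m2, W_out, b_out):
    # h_s: (5*d_h,) summary vector; l_m1, l_m2: (d_m,) entity label embeddings.
    features = np.concatenate([h_s, l_m1, l_m2])
    return softmax(W_out @ features + b_out)     # distribution over relation type labels

rng = np.random.default_rng(0)
num_labels = 54                                  # e.g. 53 relation types plus the "O" label
probs = output_layer(rng.standard_normal(40), rng.standard_normal(50), rng.standard_normal(50),
                     W_out=rng.standard_normal((num_labels, 140)), b_out=np.zeros(num_labels))
print(round(probs.sum(), 6))   # 1.0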
Neural Network RE Models ::: Cross-Lingual RE Model Transfer
Given the word embeddings of a sequence of words in a target language $t$, $(\mathbf {y}_1,...,\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\mathbf {M}_{t\rightarrow s}$ learned in Section SECREF13: $(\mathbf {M}_{t\rightarrow s}\mathbf {y}_1, \mathbf {M}_{t\rightarrow s}\mathbf {y}_2,...,\mathbf {M}_{t\rightarrow s}\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.
Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.
Experiments
In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
Experiments ::: Datasets
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).
The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).
For both datasets, we create a class label “O" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.
Experiments ::: Source (English) RE Model Performance
We build 3 neural network English RE models under the architecture described in Section SECREF4:
The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.
The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.
The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.
First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided into 6 domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and weblogs (wl). We apply the same data split as in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.
We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.
In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.
While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\%$ of the data as the training set, $10\%$ as the development set, and keep the remaining $10\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.
We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.
Experiments ::: Cross-Lingual RE Performance
We apply the English RE models to the 7 target languages across a variety of language families.
Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size
The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.
We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.
Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings
We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:
Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;
Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);
Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;
Unsupervised: the mapping learned by the unsupervised method in BIBREF26.
The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).
We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.
Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data
The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.
Of the two neural network models, the Bi-LSTM model achieves better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves $F_1$ scores above $40.0$ for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult transfer targets, the Bi-LSTM model still achieves $55\%$ and $52\%$ of the accuracy of the supervised Japanese and Arabic RE models, respectively, without using any manually annotated RE data in Japanese or Arabic.
We apply model ensembling to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initialized with different random seeds, apply the 5 models to the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This ensemble approach improves over the single model by 0.6-1.9 $F_1$ points for all target languages except Arabic.
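As a minimal illustration of the combination step, here is a sketch in Python/NumPy (our own illustration, not the authors' code; it follows one reading of "selecting the relation type labels with the highest probabilities among the 5 models", namely taking the per-label maximum across models and then the best label per example):

import numpy as np

def ensemble_predict(model_probs):
    # model_probs: list of 5 arrays, each of shape (num_examples, num_labels),
    # holding per-model relation-type probabilities for the same examples.
    stacked = np.stack(model_probs)        # (5, num_examples, num_labels)
    best_per_label = stacked.max(axis=0)   # highest probability per label across models
    return best_per_label.argmax(axis=1)   # predicted relation type per example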
Experiments ::: Cross-Lingual RE Performance ::: Discussion
Since our approach projects the target-language word embeddings to the source-language embedding space while preserving the word order, it is expected to work better for a target language whose word order is more similar to that of the source language. This is verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family, in which the subject of a sentence comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over $70\%$ relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages.
Related Work
There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.
Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39.
Conclusion
In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.
Acknowledgments
We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments. | English, German, Spanish, Italian, Japanese and Portuguese, English, Arabic and Chinese |
a7d72f308444616a0befc8db7ad388b3216e2143 | a7d72f308444616a0befc8db7ad388b3216e2143_0 | Q: What datasets are used?
Text: Introduction
Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?”
Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.
All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, manually annotating RE data is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model from a resource-rich language to a resource-poor language.
There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.
In this paper, we make the following contributions to cross-lingual RE:
We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.
We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves state-of-the-art performance without using language-specific resources.
We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both the in-house and ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.
We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.
Overview of the Approach
We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.
Build word embeddings for the source language and the target language separately using monolingual data.
Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.
Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.
For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.
We will describe each component of our approach in the subsequent sections.
Cross-Lingual Word Embeddings
In recent years, vector representations of words, known as word embeddings, have become ubiquitous for many NLP applications BIBREF12, BIBREF13, BIBREF14.
A monolingual word embedding model maps words in the vocabulary $\mathcal {V}$ of a language to real-valued vectors in $\mathbb {R}^{d\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.
Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.
In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.
Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings
To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.
The standard CBOW model has two matrices, the input word matrix $\tilde{\mathbf {X}} \in \mathbb {R}^{d\times V}$ and the output word matrix $\mathbf {X} \in \mathbb {R}^{d\times V}$. For the $i$th word $w_i$ in $\mathcal {V}$, let $\mathbf {e}(w_i) \in \mathbb {R}^{V \times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\tilde{\mathbf {x}}_i = \tilde{\mathbf {X}}\mathbf {e}(w_i)$ (the $i$th column of $\tilde{\mathbf {X}}$) is the input vector representation of word $w_i$, and $\mathbf {x}_i = \mathbf {X}\mathbf {e}(w_i)$ (the $i$th column of $\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.
Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:
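(The display equation itself is missing here; for the standard CBOW model this objective is the usual log-likelihood of each target word given its context window:)

$\frac{1}{N}\sum _{t=1}^{N} \log P(w_t \mid w_{t-c},...,w_{t-1},w_{t+1},...,w_{t+c})$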
The conditional probability is calculated using a softmax function:
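(This display equation is also missing; using the notation defined just below, the standard CBOW softmax over the output vocabulary is:)

$P(w_t \mid w_{t-c},...,w_{t+c}) = \frac{\exp (\mathbf {x}_t^\mathrm {T} \tilde{\mathbf {x}}_t)}{\sum _{i=1}^{V} \exp (\mathbf {x}_i^\mathrm {T} \tilde{\mathbf {x}}_t)}$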
where $\mathbf {x}_t=\mathbf {X}\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and $\tilde{\mathbf {x}}_t = \sum _{-c \le j \le c,\, j \ne 0} \tilde{\mathbf {X}}\mathbf {e}(w_{t+j})$ is the sum of the input vector representations of the context words.
In our variant of the CBOW model, we use a separate input word matrix $\tilde{\mathbf {X}}_j$ for a context word at position $j, -c \le j \le c, j\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have
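(The display equation for the variant is likewise missing. Based on the description, the context vector presumably takes a form like the following, with position-specific input matrices $\tilde{\mathbf {X}}_j$ and weights $a_{|j|}$ that decay with the distance $|j|$ from the target word; the exact decay schedule is not recoverable from this text:)

$\tilde{\mathbf {x}}_t = \sum _{-c \le j \le c,\, j \ne 0} a_{|j|}\, \tilde{\mathbf {X}}_j \mathbf {e}(w_{t+j})$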
We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21.
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping
BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.
Let $\mathcal {D}$ be a bilingual dictionary with aligned word pairs $(w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\mathbf {x}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the source-language word $w_i$, and $\mathbf {y}_i \in \mathbb {R}^{d \times 1}$ be the word embedding of the target-language word $v_i$.
We find a linear mapping (matrix) $\mathbf {M}_{t\rightarrow s}$ such that $\mathbf {M}_{t\rightarrow s}\mathbf {y}_i$ approximates $\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:
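(The equation referred to elsewhere as (DISPLAY_FORM14) is not reproduced in this text; from the description, it is the least-squares objective over the dictionary pairs:)

$\mathbf {M}_{t\rightarrow s} = \operatorname{arg\,min}_{\mathbf {M}} \sum _{i=1}^{D} \Vert \mathbf {M}\mathbf {y}_i - \mathbf {x}_i \Vert ^2$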
Using $\mathbf {M}_{t\rightarrow s}$, for any target-language word $v$ with word embedding $\mathbf {y}$, we can project it into the source-language embedding space as $\mathbf {M}_{t\rightarrow s}\mathbf {y}$.
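A minimal NumPy sketch of learning and applying this mapping (our own illustration, not the authors' code; it assumes the dictionary embeddings are stacked column-wise into $d \times D$ matrices, X for English and Y for the target language):

import numpy as np

def learn_mapping(X, Y):
    # Solve min_M ||M Y - X||_F^2; equivalently Y^T M^T ~ X^T in least-squares form.
    M_T, *_ = np.linalg.lstsq(Y.T, X.T, rcond=None)
    return M_T.T                       # (d, d) matrix mapping target vectors to the English space

def project(M, y):
    # Project a target-language word embedding into the English embedding space.
    return M @ y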
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation
To ensure that all the training instances in the dictionary $\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.
First, we normalize the source-language and target-language word embeddings to be unit vectors: $\mathbf {x}^{\prime }=\frac{\mathbf {x}}{||\mathbf {x}||}$ for each source-language word embedding $\mathbf {x}$, and $\mathbf {y}^{\prime }= \frac{\mathbf {y}}{||\mathbf {y}||}$ for each target-language word embedding $\mathbf {y}$.
Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\mathbf {M}$ is an orthogonal matrix, i.e., $\mathbf {M}^\mathrm {T}\mathbf {M} = \mathbf {I}$ where $\mathbf {I}$ denotes the identity matrix:
$\mathbf {M}^{O} _{t\rightarrow s}$ can be computed using singular-value decomposition (SVD).
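A sketch of that SVD computation, i.e., the orthogonal Procrustes solution, under the same column-wise matrix convention as above (again our own illustration):

import numpy as np

def learn_orthogonal_mapping(X, Y):
    # argmin_M ||M Y - X||_F subject to M^T M = I.
    U, _, Vt = np.linalg.svd(X @ Y.T)
    return U @ Vt                      # orthogonal mapping from the target space to the English space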
Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings
The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.
BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.
We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45.
Neural Network RE Models
For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.
Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.
Neural Network RE Models ::: Embedding Layer
For an English sentence with $n$ words $\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\mathbf {x}_t\in \mathbb {R}^{d \times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\mathbf {l}_m \in \mathbb {R}^{d_m \times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$.
Neural Network RE Models ::: Context Layer
Given the word embeddings $\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.
Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer
The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNNs that have been invented to better capture long-range dependencies in sequential data.
We pass the word embeddings $\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\overrightarrow{\mathbf {c}}_t$ and three gates: an input gate $\overrightarrow{\mathbf {i}}_t$, a forget gate $\overrightarrow{\mathbf {f}}_t$ and an output gate $\overrightarrow{\mathbf {o}}_t$ ($\overrightarrow{\cdot }$ indicates the forward direction), which are updated as follows:
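(The gate update equations are not reproduced in this text; the standard LSTM formulation in the forward direction, with the exact parameterization being our assumption, is:)

$\overrightarrow{\mathbf {i}}_t = \sigma (\mathbf {W}_i [\mathbf {x}_t; \overrightarrow{\mathbf {h}}_{t-1}] + \mathbf {b}_i), \quad \overrightarrow{\mathbf {f}}_t = \sigma (\mathbf {W}_f [\mathbf {x}_t; \overrightarrow{\mathbf {h}}_{t-1}] + \mathbf {b}_f), \quad \overrightarrow{\mathbf {o}}_t = \sigma (\mathbf {W}_o [\mathbf {x}_t; \overrightarrow{\mathbf {h}}_{t-1}] + \mathbf {b}_o)$

$\overrightarrow{\mathbf {c}}_t = \overrightarrow{\mathbf {f}}_t \odot \overrightarrow{\mathbf {c}}_{t-1} + \overrightarrow{\mathbf {i}}_t \odot \tanh (\mathbf {W}_c [\mathbf {x}_t; \overrightarrow{\mathbf {h}}_{t-1}] + \mathbf {b}_c), \quad \overrightarrow{\mathbf {h}}_t = \overrightarrow{\mathbf {o}}_t \odot \tanh (\overrightarrow{\mathbf {c}}_t)$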
where $\sigma $ is the element-wise sigmoid function and $\odot $ is the element-wise multiplication.
The hidden state vector $\overrightarrow{\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\overleftarrow{\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\mathbf {h}_t = [\overrightarrow{\mathbf {h}}_t, \overleftarrow{\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence.
Neural Network RE Models ::: Context Layer ::: CNN Context Layer
The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which apply a convolution-like operation on successive windows of size $k$ around each word in the sentence. Let $\mathbf {z}_t = [\mathbf {x}_{t-(k-1)/2},...,\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector $\mathbf {h}_t = \tanh (\mathbf {W}\mathbf {z}_t + \mathbf {b})$ for each word $w_t$, where $\mathbf {W}$ is a weight matrix and $\mathbf {b}$ is a bias vector, and $\tanh (\cdot )$ is the element-wise hyperbolic tangent function.
Neural Network RE Models ::: Summarization Layer
After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\mathbf {h}_1,....,\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.
We divide the hidden state vectors $\mathbf {h}_t$'s into 5 groups:
$G_1=\lbrace \mathbf {h}_{1},..,\mathbf {h}_{b_1-1}\rbrace $ includes vectors that are left to the first entity $m_1$.
$G_2=\lbrace \mathbf {h}_{b_1},..,\mathbf {h}_{e_1}\rbrace $ includes vectors that are in the first entity $m_1$.
$G_3=\lbrace \mathbf {h}_{e_1+1},..,\mathbf {h}_{b_2-1}\rbrace $ includes vectors that are between the two entities.
$G_4=\lbrace \mathbf {h}_{b_2},..,\mathbf {h}_{e_2}\rbrace $ includes vectors that are in the second entity $m_2$.
$G_5=\lbrace \mathbf {h}_{e_2+1},..,\mathbf {h}_{n}\rbrace $ includes vectors that are right to the second entity $m_2$.
We perform element-wise max pooling among the vectors in each group: $\mathbf {h}_{G_i}[j] = \max _{\mathbf {h} \in G_i} \mathbf {h}[j]$ for $1 \le j \le d_h$, where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\mathbf {h}_{G_i}$'s we get a fixed-length vector $\mathbf {h}_s=[\mathbf {h}_{G_1},...,\mathbf {h}_{G_5}]$.
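A small NumPy sketch of this summarization step (our own illustration; it uses 0-based inclusive entity spans and, as an assumption, substitutes a zero vector for any group that happens to be empty):

import numpy as np

def summarize(H, b1, e1, b2, e2):
    # H: (n, d_h) context-layer vectors; (b1, e1) and (b2, e2): entity spans with e1 < b2.
    d_h = H.shape[1]
    groups = [H[:b1], H[b1:e1 + 1], H[e1 + 1:b2], H[b2:e2 + 1], H[e2 + 1:]]
    pooled = [g.max(axis=0) if g.shape[0] > 0 else np.zeros(d_h) for g in groups]
    return np.concatenate(pooled)      # fixed-length summary vector h_s of size 5 * d_h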
Neural Network RE Models ::: Output Layer
The output layer receives inputs from the previous layers (the summarization vector $\mathbf {h}_s$, the entity label embeddings $\mathbf {l}_{m_1}$ and $\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:
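(The display equation is not preserved; a plausible form consistent with this description is a softmax over a linear layer applied to the concatenated inputs, though the exact parameterization is our assumption:)

$P(\cdot \mid \mathbf {h}_s, \mathbf {l}_{m_1}, \mathbf {l}_{m_2}) = \mathrm {softmax}(\mathbf {W}_y [\mathbf {h}_s; \mathbf {l}_{m_1}; \mathbf {l}_{m_2}] + \mathbf {b}_y)$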
Neural Network RE Models ::: Cross-Lingual RE Model Transfer
Given the word embeddings of a sequence of words in a target language $t$, $(\mathbf {y}_1,...,\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\mathbf {M}_{t\rightarrow s}$ learned in Section SECREF13: $(\mathbf {M}_{t\rightarrow s}\mathbf {y}_1, \mathbf {M}_{t\rightarrow s}\mathbf {y}_2,...,\mathbf {M}_{t\rightarrow s}\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.
Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.
Experiments
In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11.
Experiments ::: Datasets
Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).
The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).
For both datasets, we create a class label “O" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.
Experiments ::: Source (English) RE Model Performance
We build 3 neural network English RE models under the architecture described in Section SECREF4:
The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.
The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.
The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.
First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.
We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.
In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.
While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\%$ of the data as the training set, $10\%$ as the development set, and keep the remaining $10\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.
We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.
Experiments ::: Cross-Lingual RE Performance
We apply the English RE models to the 7 target languages across a variety of language families.
Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size
The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.
We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.
Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings
We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:
Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;
Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);
Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;
Unsupervised: the mapping learned by the unsupervised method in BIBREF26.
The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).
We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.
Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data
The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.
Among the two neural network models, the Bi-LSTM model achieves better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves $F_1$ scores above $40.0$ for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer to, the Bi-LSTM model still achieves $55\%$ and $52\%$ of the accuracy of the supervised Japanese and Arabic RE models, respectively, without using any manually annotated RE data in Japanese or Arabic.
We apply model ensembling to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initialized with different random seeds, apply the 5 models to the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This ensemble approach improves over the single model by 0.6-1.9 $F_1$ points for all target languages except Arabic.
Experiments ::: Cross-Lingual RE Performance ::: Discussion
Since our approach projects the target-language word embeddings to the source-language embedding space while preserving the word order, it is expected to work better for a target language whose word order is more similar to that of the source language. This is verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family, in which the subject of a sentence comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over $70\%$ relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages.
Related Work
There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.
Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39.
Conclusion
In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.
Acknowledgments
We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments. | in-house dataset, ACE05 dataset |
dfb0351e8fa62ceb51ce77b0f607885523d1b8e8 | dfb0351e8fa62ceb51ce77b0f607885523d1b8e8_0 | Q: How better does auto-completion perform when using both language and vision than only language?
Text: Introduction
This work focuses on the problem of finding objects in an image based on natural language descriptions. Existing solutions take into account both the image and the query BIBREF0, BIBREF1, BIBREF2. In our problem formulation, rather than being given the entire text, we receive only a prefix of the text, which requires completing the text based on a language model and the image, and then finding a relevant object in the image. We decompose the problem into three components: (i) completing the query from a text prefix and an image; (ii) estimating probabilities of objects based on the completed text; and (iii) segmenting and classifying all instances in the image. We combine, extend, and modify state-of-the-art components: (i) we extend a FactorCell LSTM BIBREF3, BIBREF4, which conditionally completes text, to complete a query from both a text prefix and an image; (ii) we fine tune a BERT embedding to compute instance probabilities from a complete sentence; and (iii) we use Mask-RCNN BIBREF5 for instance segmentation.
Recent natural language embeddings BIBREF6 have been trained with the objectives of predicting masked words and determining whether sentences follow each other, and are used effectively across a dozen natural language processing tasks. Sequence models have been conditioned to complete text from a prefix and an index BIBREF3, but have not been extended to take an image into account. Deep neural networks have been trained to segment all instances in an image at very high quality BIBREF5, BIBREF7. We propose a novel method of natural language query auto-completion for estimating instance probabilities conditioned on the image and a user query prefix. Our system combines and modifies state-of-the-art components used in query completion, language embedding, and masked instance segmentation. Estimating a broad set of instance probabilities enables selection that is agnostic to the segmentation procedure.
Methods
Figure FIGREF2 shows the architecture of our approach. First, we extract image features with a pre-trained CNN. We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query. The completed query is then fed into a fine-tuned BERT embedding to estimate instance probabilities, which in turn are used for instance selection.
We denote a set of objects $o_k \in O$ where O is the entire set of recognizable object classes. The user inputs a prefix, $p$, an incomplete query on an image, $I$. Given $p$, we auto-complete the intended query $q$. We define the auto-completion query problem in equation DISPLAY_FORM3 as the maximization of the probability of a query conditioned on an image where $w_i \in A$ is the word in position $i$.
We pose our instance probability estimation problem given an auto-completed query $\mathbf {q^*}$ as a multilabel problem where each class can independently exist. Let $O_{q*}$ be the set of instances referred to in $\mathbf {q^*}$. Given $\hat{p}_k$ is our estimate of $P(o_k \in O_{q*})$ and $y_k = \mathbb {1}[o_k \in O_{q*}]$, the instance selection model minimizes the sigmoid cross-entropy loss function:
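(The display equation is not preserved in this text; with the notation above, the standard sigmoid cross-entropy over independent classes is:)

$\mathcal {L} = -\sum _{k} \left[ y_k \log \hat{p}_k + (1-y_k) \log (1-\hat{p}_k) \right]$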
Methods ::: Modifying FactorCell LSTM for Image Query Auto-Completion
We utilize the FactorCell (FC) adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries. The FactorCell is an LSTM with a context-dependent weight matrix $\mathbf {W^{\prime }} = \mathbf {W} + \mathbf {A}$ in place of $\mathbf {W}$. Given a character embedding $w_t \in \mathbb {R}^e$ and a previous hidden state $h_{t-1} \in \mathbb {R}^h$, the adaptation matrix $\mathbf {A}$ is formed by taking the product of the context $c$ with two basis tensors $\mathbf {Z_L} \in \mathbb {R}^{m\times (e+h)\times r}$ and $\mathbf {Z_R} \in \mathbb {R}^{r\times h \times m}$.
To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to personalize each query completion to a specific image. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to choose the optimal completion for the given prefix.
Methods ::: Fine Tuning BERT for Instance Probability Estimation
We fine tune a pre-trained BERT embedding to perform transfer learning for our instance selection task. We use a 12-layer implementation which has been shown to generalize and perform well when fine-tuned for new tasks such as question answering, text classification, and named entity recognition. To apply the model to our task, we add an additional dense layer to the BERT architecture with 10% dropout, mapping the last pooled layer to the object classes in our data.
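A rough sketch of this classification head in PyTorch (our own illustration, not the authors' code; the 768-dimensional pooled vector is the usual size for a 12-layer BERT-base encoder, and the 2,909 classes correspond to the instance-class count reported in the results):

import torch
import torch.nn as nn

class InstanceSelectionHead(nn.Module):
    # Dense layer with 10% dropout mapping BERT's pooled output to per-class logits.
    def __init__(self, hidden_size=768, num_classes=2909):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, pooled_output):
        return self.classifier(self.dropout(pooled_output))

head = InstanceSelectionHead()
loss_fn = nn.BCEWithLogitsLoss()           # sigmoid cross-entropy over independent classes
pooled = torch.randn(8, 768)               # stand-in for BERT pooled outputs
targets = torch.zeros(8, 2909)             # multi-hot labels for the referred instances
loss = loss_fn(head(pooled), targets)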
Methods ::: Data and Training Details
We use the Visual Genome (VG) BIBREF8 and ReferIt BIBREF9 datasets which are suitable for our purposes. The VG data contains images, region descriptions, relationships, question-answers, attributes, and object instances. The region descriptions provide a replacement for queries since they mention various objects in different regions of each image. However, while some region descriptions are referring phrases, some are more similar to descriptions (see examples in Table TABREF10). The large number of examples makes the Visual Genome dataset particularly useful for our task. The smaller ReferIt dataset consists of referring expressions attached to images which more closely resemble potential user queries of images. We train separate models using both datasets.
For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The ReferIt training data consists of 9k images and 54k referring expressions.
The query completion models are trained using a 128-dimensional image representation, a rank $r=64$ personalized matrix, 24-dimensional character embeddings, 512-dimensional LSTM hidden units, and a max length of 50 characters per query, with Adam at a 5e-4 learning rate and a batch size of 32 for 80K iterations. The instance selection model is trained using (region description, object set) pairs from the VG dataset, resulting in a training set of approximately 1.73M samples. The remaining 300K samples are split into validation and testing. Our training procedure for the instance selection model fine tunes all 12 layers of BERT with a batch size of 32 for 250K iterations, using Adam and performing learning rate warm-up for the first 10% of iterations with a target 5e-5 learning rate. The entire training process takes around a day on an NVIDIA Tesla P100 GPU.
Results
Figure 3 shows example results. We evaluate query completion by language perplexity and mean reciprocal rank (MRR) and evaluate instance selection by F1-score. We compare the perplexity on both sets of test queries using corresponding images vs. random noise as context. Table TABREF11 shows perplexity on the VG and ReferIt test queries with both corresponding images and random noise. The VG and ReferIt datasets have character vocabulary sizes of 89 and 77 respectively.
Given the matching index $t_n$ of the true query within the top-10 completions for the $n$-th test query, we compute the MRR as $\frac{1}{N}\sum _{n=1}^{N}{\frac{1}{t_n}}$, where we replace the reciprocal rank with 0 if the true query does not appear in the top ten completions. We evaluate the VG and ReferIt test queries with varying prefix sizes and compare performance with the corresponding image and random noise as context. MRR is influenced by the length of the query, as longer queries are more difficult to match. Therefore, as expected, we observe better performance on the ReferIt dataset for all prefix lengths. Finally, our instance selection achieves an F1-score of 0.7618 over all 2,909 instance classes.
Results ::: Conclusions
Our results demonstrate that auto-completion based on both language and vision performs better than using language alone, and that fine tuning a BERT embedding allows us to efficiently rank instances in the image. In future work we would like to extract referring expressions using simple grammatical rules to differentiate between referring and non-referring region descriptions. We would also like to combine the VG and ReferIt datasets to train a single model and scale up our datasets to improve query completions. | Unanswerable
a130aa735de3b65c71f27018f20d3c068bafb826 | a130aa735de3b65c71f27018f20d3c068bafb826_0 | Q: How big is data provided by this research?
Text: Introduction
This work focuses on the problem of finding objects in an image based on natural language descriptions. Existing solutions take into account both the image and the query BIBREF0, BIBREF1, BIBREF2. In our problem formulation, rather than being given the entire text, we receive only a prefix of the text, which requires completing the text based on a language model and the image, and then finding a relevant object in the image. We decompose the problem into three components: (i) completing the query from a text prefix and an image; (ii) estimating probabilities of objects based on the completed text; and (iii) segmenting and classifying all instances in the image. We combine, extend, and modify state-of-the-art components: (i) we extend a FactorCell LSTM BIBREF3, BIBREF4, which conditionally completes text, to complete a query from both a text prefix and an image; (ii) we fine tune a BERT embedding to compute instance probabilities from a complete sentence; and (iii) we use Mask-RCNN BIBREF5 for instance segmentation.
Recent natural language embeddings BIBREF6 have been trained with the objectives of predicting masked words and determining whether sentences follow each other, and are used effectively across a dozen natural language processing tasks. Sequence models have been conditioned to complete text from a prefix and an index BIBREF3, but have not been extended to take an image into account. Deep neural networks have been trained to segment all instances in an image at very high quality BIBREF5, BIBREF7. We propose a novel method of natural language query auto-completion for estimating instance probabilities conditioned on the image and a user query prefix. Our system combines and modifies state-of-the-art components used in query completion, language embedding, and masked instance segmentation. Estimating a broad set of instance probabilities enables selection that is agnostic to the segmentation procedure.
Methods
Figure FIGREF2 shows the architecture of our approach. First, we extract image features with a pre-trained CNN. We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query. The completed query is then fed into a fine-tuned BERT embedding to estimate instance probabilities, which in turn are used for instance selection.
We denote a set of objects $o_k \in O$ where O is the entire set of recognizable object classes. The user inputs a prefix, $p$, an incomplete query on an image, $I$. Given $p$, we auto-complete the intended query $q$. We define the auto-completion query problem in equation DISPLAY_FORM3 as the maximization of the probability of a query conditioned on an image where $w_i \in A$ is the word in position $i$.
We pose our instance probability estimation problem given an auto-completed query $\mathbf {q^*}$ as a multilabel problem where each class can independently exist. Let $O_{q*}$ be the set of instances referred to in $\mathbf {q^*}$. Given $\hat{p}_k$ is our estimate of $P(o_k \in O_{q*})$ and $y_k = \mathbb {1}[o_k \in O_{q*}]$, the instance selection model minimizes the sigmoid cross-entropy loss function:
Methods ::: Modifying FactorCell LSTM for Image Query Auto-Completion
We utilize the FactorCell (FC) adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries. The FactorCell is an LSTM with a context-dependent weight matrix $\mathbf {W^{\prime }} = \mathbf {W} + \mathbf {A}$ in place of $\mathbf {W}$. Given a character embedding $w_t \in \mathbb {R}^e$ and a previous hidden state $h_{t-1} \in \mathbb {R}^h$, the adaptation matrix $\mathbf {A}$ is formed by taking the product of the context $c$ with two basis tensors $\mathbf {Z_L} \in \mathbb {R}^{m\times (e+h)\times r}$ and $\mathbf {Z_R} \in \mathbb {R}^{r\times h \times m}$.
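To make the tensor contraction concrete, here is a small NumPy sketch of forming the adaptation matrix (our own illustration; it assumes the context is a vector $c \in \mathbb {R}^m$ and that $\mathbf {W}$ acts on the concatenated character embedding and hidden state, which is consistent with the stated tensor shapes but not spelled out in this text):

import numpy as np

e, h, m, r = 24, 512, 128, 64          # dims as reported in the training details below
c = np.random.randn(m)                 # image-derived context vector
Z_L = np.random.randn(m, e + h, r)     # left basis tensor
Z_R = np.random.randn(r, h, m)         # right basis tensor
W = np.random.randn(e + h, h)          # base LSTM weight matrix (assumed shape)

A = np.einsum('m,mkr->kr', c, Z_L) @ np.einsum('rhm,m->rh', Z_R, c)   # (e+h, h)
W_prime = W + A                        # context-adapted weight matrix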
To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to personalize each query completion to a specific image. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to choose the optimal completion for the given prefix.
Methods ::: Fine Tuning BERT for Instance Probability Estimation
We fine tune a pre-trained BERT embedding to perform transfer learning for our instance selection task. We use a 12-layer implementation which has been shown to generalize and perform well when fine-tuned for new tasks such as question answering, text classification, and named entity recognition. To apply the model to our task, we add an additional dense layer to the BERT architecture with 10% dropout, mapping the last pooled layer to the object classes in our data.
Methods ::: Data and Training Details
We use the Visual Genome (VG) BIBREF8 and ReferIt BIBREF9 datasets which are suitable for our purposes. The VG data contains images, region descriptions, relationships, question-answers, attributes, and object instances. The region descriptions provide a replacement for queries since they mention various objects in different regions of each image. However, while some region descriptions are referring phrases, some are more similar to descriptions (see examples in Table TABREF10). The large number of examples makes the Visual Genome dataset particularly useful for our task. The smaller ReferIt dataset consists of referring expressions attached to images which more closely resemble potential user queries of images. We train separate models using both datasets.
For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The ReferIt training data consists of 9k images and 54k referring expressions.
The query completion models are trained using a 128-dimensional image representation, a rank $r=64$ personalized matrix, 24-dimensional character embeddings, 512-dimensional LSTM hidden units, and a max length of 50 characters per query, with Adam at a 5e-4 learning rate and a batch size of 32 for 80K iterations. The instance selection model is trained using (region description, object set) pairs from the VG dataset, resulting in a training set of approximately 1.73M samples. The remaining 300K samples are split into validation and testing. Our training procedure for the instance selection model fine tunes all 12 layers of BERT with a batch size of 32 for 250K iterations, using Adam and performing learning rate warm-up for the first 10% of iterations with a target 5e-5 learning rate. The entire training process takes around a day on an NVIDIA Tesla P100 GPU.
Results
Figure 3 shows example results. We evaluate query completion by language perplexity and mean reciprocal rank (MRR) and evaluate instance selection by F1-score. We compare the perplexity on both sets of test queries using corresponding images vs. random noise as context. Table TABREF11 shows perplexity on the VG and ReferIt test queries with both corresponding images and random noise. The VG and ReferIt datasets have character vocabulary sizes of 89 and 77 respectively.
Given the matching index $t_n$ of the true query within the top-10 completions for the $n$-th test query, we compute the MRR as $\frac{1}{N}\sum _{n=1}^{N}{\frac{1}{t_n}}$, where we replace the reciprocal rank with 0 if the true query does not appear in the top ten completions. We evaluate the VG and ReferIt test queries with varying prefix sizes and compare performance with the corresponding image and random noise as context. MRR is influenced by the length of the query, as longer queries are more difficult to match. Therefore, as expected, we observe better performance on the ReferIt dataset for all prefix lengths. Finally, our instance selection achieves an F1-score of 0.7618 over all 2,909 instance classes.
Results ::: Conclusions
Our results demonstrate that auto-completion based on both language and vision performs better than using language alone, and that fine tuning a BERT embedding allows us to efficiently rank instances in the image. In future work we would like to extract referring expressions using simple grammatical rules to differentiate between referring and non-referring region descriptions. We would also like to combine the VG and ReferIt datasets to train a single model and scale up our datasets to improve query completions. | 16k images and 740k corresponding region descriptions
0c1663a7f7750b399f40ef7b4bf19d5c598890ff | 0c1663a7f7750b399f40ef7b4bf19d5c598890ff_0 | Q: How they complete a user query prefix conditioned upon an image?
Text: Introduction
This work focuses on the problem of finding objects in an image based on natural language descriptions. Existing solutions take into account both the image and the query BIBREF0, BIBREF1, BIBREF2. In our problem formulation, rather than being given the entire text, we receive only a prefix of the text, which requires completing the text based on a language model and the image, and then finding a relevant object in the image. We decompose the problem into three components: (i) completing the query from a text prefix and an image; (ii) estimating probabilities of objects based on the completed text; and (iii) segmenting and classifying all instances in the image. We combine, extend, and modify state-of-the-art components: (i) we extend a FactorCell LSTM BIBREF3, BIBREF4, which conditionally completes text, to complete a query from both a text prefix and an image; (ii) we fine tune a BERT embedding to compute instance probabilities from a complete sentence; and (iii) we use Mask-RCNN BIBREF5 for instance segmentation.
Recent natural language embeddings BIBREF6 have been trained with the objectives of predicting masked words and determining whether sentences follow each other, and are used effectively across a dozen natural language processing tasks. Sequence models have been conditioned to complete text from a prefix and an index BIBREF3, but have not been extended to take an image into account. Deep neural networks have been trained to segment all instances in an image at very high quality BIBREF5, BIBREF7. We propose a novel method of natural language query auto-completion for estimating instance probabilities conditioned on the image and a user query prefix. Our system combines and modifies state-of-the-art components used in query completion, language embedding, and masked instance segmentation. Estimating a broad set of instance probabilities enables selection that is agnostic to the segmentation procedure.
Methods
Figure FIGREF2 shows the architecture of our approach. First, we extract image features with a pre-trained CNN. We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query. The completed query is then fed into a fine-tuned BERT embedding to estimate instance probabilities, which in turn are used for instance selection.
We denote a set of objects $o_k \in O$ where O is the entire set of recognizable object classes. The user inputs a prefix, $p$, an incomplete query on an image, $I$. Given $p$, we auto-complete the intended query $q$. We define the auto-completion query problem in equation DISPLAY_FORM3 as the maximization of the probability of a query conditioned on an image where $w_i \in A$ is the word in position $i$.
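(The equation referred to as DISPLAY_FORM3 is not reproduced in this text; based on the description, the objective is presumably of the form below, i.e., the most probable full query given the image, with the completion constrained to begin with the user prefix $p$:)

$\mathbf {q}^* = \operatorname{arg\,max}_{q = (w_1,...,w_n)} P(w_1,...,w_n \mid I)$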
We pose our instance probability estimation problem given an auto-completed query $\mathbf {q^*}$ as a multilabel problem where each class can independently exist. Let $O_{q*}$ be the set of instances referred to in $\mathbf {q^*}$. Given $\hat{p}_k$ is our estimate of $P(o_k \in O_{q*})$ and $y_k = \mathbb {1}[o_k \in O_{q*}]$, the instance selection model minimizes the sigmoid cross-entropy loss function:
Methods ::: Modifying FactorCell LSTM for Image Query Auto-Completion
We utilize the FactorCell (FC) adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries. The FactorCell is an LSTM with a context-dependent weight matrix $\mathbf {W^{\prime }} = \mathbf {W} + \mathbf {A}$ in place of $\mathbf {W}$. Given a character embedding $w_t \in \mathbb {R}^e$ and a previous hidden state $h_{t-1} \in \mathbb {R}^h$, the adaptation matrix $\mathbf {A}$ is formed by taking the product of the context $c$ with two basis tensors $\mathbf {Z_L} \in \mathbb {R}^{m\times (e+h)\times r}$ and $\mathbf {Z_R} \in \mathbb {R}^{r\times h \times m}$.
To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to personalize each query completion to a specific image. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to choose the optimal completion for the given prefix.
Methods ::: Fine Tuning BERT for Instance Probability Estimation
We fine tune a pre-trained BERT embedding to perform transfer learning for our instance selection task. We use a 12-layer implementation which has been shown to generalize and perform well when fine-tuned for new tasks such as question answering, text classification, and named entity recognition. To apply the model to our task, we add an additional dense layer to the BERT architecture with 10% dropout, mapping the last pooled layer to the object classes in our data.
Methods ::: Data and Training Details
We use the Visual Genome (VG) BIBREF8 and ReferIt BIBREF9 datasets which are suitable for our purposes. The VG data contains images, region descriptions, relationships, question-answers, attributes, and object instances. The region descriptions provide a replacement for queries since they mention various objects in different regions of each image. However, while some region descriptions are referring phrases, some are more similar to descriptions (see examples in Table TABREF10). The large number of examples makes the Visual Genome dataset particularly useful for our task. The smaller ReferIt dataset consists of referring expressions attached to images which more closely resemble potential user queries of images. We train separate models using both datasets.
For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The ReferIt training data consists of 9k images and 54k referring expressions.
The query completion models are trained using a 128-dimensional image representation, a rank $r=64$ personalized matrix, 24-dimensional character embeddings, 512-dimensional LSTM hidden units, and a max length of 50 characters per query, with Adam at a 5e-4 learning rate and a batch size of 32 for 80K iterations. The instance selection model is trained using (region description, object set) pairs from the VG dataset, resulting in a training set of approximately 1.73M samples. The remaining 300K samples are split into validation and testing. Our training procedure for the instance selection model fine tunes all 12 layers of BERT with a batch size of 32 for 250K iterations, using Adam and performing learning rate warm-up for the first 10% of iterations with a target 5e-5 learning rate. The entire training process takes around a day on an NVIDIA Tesla P100 GPU.
Results
Figure 3 shows example results. We evaluate query completion by language perplexity and mean reciprocal rank (MRR), and instance selection by F1-score. Table TABREF11 compares perplexity on the VG and ReferIt test queries when the corresponding image is used as context vs. random noise. The VG and ReferIt datasets have character vocabulary sizes of 89 and 77, respectively.
Given the rank $t_n$ of the true query among the top 10 completions for the $n$-th test prefix, we compute the MRR as $\frac{1}{N}\sum_{n=1}^{N}\frac{1}{t_n}$, where the reciprocal rank is replaced with 0 if the true query does not appear in the top ten completions. We evaluate the VG and ReferIt test queries with varying prefix sizes and compare performance with the corresponding image and random noise as context. MRR is influenced by the length of the query, as longer queries are more difficult to match. Therefore, as expected, we observe better performance on the ReferIt dataset for all prefix lengths. Finally, our instance selection achieves an F1-score of 0.7618 over all 2,909 instance classes.
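For reference, a small helper that computes MRR as described above (ranks are 1-based; prefixes whose true query is missing from the top-k contribute 0); the function and argument names are illustrative.

def mean_reciprocal_rank(true_queries, completion_lists, k=10):
    # true_queries: list of gold query strings.
    # completion_lists: one ranked list of candidate completions per prefix.
    total = 0.0
    for truth, completions in zip(true_queries, completion_lists):
        topk = completions[:k]
        total += 1.0 / (topk.index(truth) + 1) if truth in topk else 0.0
    return total / len(true_queries)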
Results ::: Conclusions
Our results demonstrate that auto-completion based on both language and vision performs better than by using only language, and that fine tuning a BERT embedding allows to efficiently rank instances in the image. In future work we would like to extract referring expressions using simple grammatical rules to differentiate between referring and non-referring region descriptions. We would also like to combine the VG and ReferIt datasets to train a single model and scale up our datasets to improve query completions. | we replace user embeddings with a low-dimensional image representation |
aa800b424db77e634e82680f804894bfa37f2a34 | aa800b424db77e634e82680f804894bfa37f2a34_0 | Q: Did the collection process use a WoZ method?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot is equipped with a number of primitive navigation behaviors, such as “enter the room on the left (or on right)”, “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, approaches of this first type are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12, BIBREF13.
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20, BIBREF21, BIBREF22. Similar to BIBREF21 and BIBREF22, we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5, BIBREF6.
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
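As an illustration, one possible in-memory encoding of such a behavioral graph is a list of (node, behavior, node) triplets plus an adjacency index; this is our own sketch, and the node and behavior symbols are copied from the sample plan used later in the evaluation ("R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3").

from collections import defaultdict

triplets = [
    ("R-1", "oor", "C-1"),
    ("C-1", "cf",  "C-1"),
    ("C-1", "lt",  "C-0"),
    ("C-0", "cf",  "C-0"),
    ("C-0", "iol", "O-3"),
]

# Adjacency index: which behaviors can be executed from each node, and where they lead.
adjacency = defaultdict(list)
for src, behavior, dst in triplets:
    adjacency[src].append((behavior, dst))

def valid_behaviors(node):
    return [behavior for behavior, _ in adjacency[node]]

print(valid_behaviors("C-1"))  # ['cf', 'lt']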
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
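A minimal sketch of this one-way attention step (our own simplification): X encodes the instruction words, Y encodes the graph triplets, and a combining matrix W scores their affinity. The sizes below assume bidirectional GRU encoders with 128 hidden units per direction (d = 256) and the truncation limits of 150 words and 300 triplets given in the implementation details.

import torch
import torch.nn.functional as F

def one_way_attention(X, Y, W):
    # X: (N_words, d), Y: (L_triplets, d), W: (d, d).
    scores = Y @ W @ X.T                     # (L, N): affinity of triplet i and word j
    P = F.softmax(scores, dim=1)             # attention distribution per triplet
    summaries = P @ X                        # (L, d): weighted sums of word encodings
    return torch.cat([summaries, Y], dim=1)  # (L, 2d): keep the encoded triplet as well

X = torch.randn(150, 256)   # encoded instruction
Y = torch.randn(300, 256)   # encoded graph triplets
W = torch.randn(256, 256)
Z = one_way_attention(X, Y, W)   # shape (300, 512)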
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
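A sketch of this masking step using the adjacency representation from the earlier triplet example: behaviors that are valid edges out of the robot's current node, plus the stop symbol, keep their logits, while everything else is pushed to negative infinity. The index maps and names are illustrative, not the authors' implementation.

import numpy as np

def mask_logits(logits, adjacency, current_node, behavior_index, stop_index):
    # logits: float array over all pre-defined behaviors plus the stop symbol.
    mask = np.full_like(logits, -np.inf)
    for behavior, _next_node in adjacency[current_node]:
        mask[behavior_index[behavior]] = 0.0   # valid behavior at this node
    mask[stop_index] = 0.0                     # stopping is always allowed
    return logits + mask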
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1 score (F1): The harmonic average of the precision and recall over the whole test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
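For concreteness, minimal reference implementations of EM, ED and GM over behavior-token sequences are sketched below (F1 is omitted); treating a "swap" as a substitution in the edit distance, and the adjacency dictionary from the earlier graph sketch, are our assumptions.

def exact_match(pred, truth):
    return int(pred == truth)

def edit_distance(pred, truth):
    # Minimum insertions, deletions or substitutions turning pred into truth.
    m, n = len(pred), len(truth)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1): d[i][0] = i
    for j in range(n + 1): d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def goal_match(pred, truth, adjacency, start_node):
    # GM: does executing pred from start_node end at the same node as truth?
    def final_node(plan):
        node = start_node
        for behavior in plan:
            node = dict(adjacency[node]).get(behavior)
            if node is None:
                return None
        return node
    end = final_node(pred)
    return int(end is not None and end == final_node(truth))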
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor uses the masking function in the output layer.
This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
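A sketch of this preprocessing with NLTK (spell-checking is omitted here; the 'punkt' and 'wordnet' resources are standard NLTK data packages that must be downloaded first).

from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

lemmatizer = WordNetLemmatizer()

def preprocess_instruction(text, max_words=150):
    # Lowercase, tokenize and lemmatize an instruction, then truncate.
    tokens = word_tokenize(text.lower())
    lemmas = [lemmatizer.lemmatize(t) for t in tokens]
    return lemmas[:max_words]

print(preprocess_instruction("Exit the room and turn right at the corners"))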
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such a rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
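One possible reading of this "Ordered Triplets" rearrangement is sketched below, assuming that "surround" means triplets directly incident to the start node; this interpretation and the 300-triplet cap follow the description above but are not the authors' code.

def order_triplets(triplets, start_node, max_triplets=300):
    # Triplets touching the start node come first; the rest stay alphabetical.
    near  = sorted(t for t in triplets if start_node in (t[0], t[2]))
    other = sorted(t for t in triplets if start_node not in (t[0], t[2]))
    return (near + other)[:max_triplets]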
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of the prediction of sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and a novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30, BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | No |
fbd47705262bfa0a2ba1440a2589152def64cbbd | fbd47705262bfa0a2ba1440a2589152def64cbbd_0 | Q: By how much did their model outperform the baseline?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot is equipped with a number of primitive navigation behaviors, such as “enter the room on the left (or on right)”, “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, approaches of this first type are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12, BIBREF13.
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20, BIBREF21, BIBREF22. Similar to BIBREF21 and BIBREF22, we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5, BIBREF6.
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
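One way to realize this embed layer is sketched below (our own reading): 100-dimensional GloVe vectors for the instruction words, and one-hot vectors for the three components of each triplet. Concatenating the three one-hot components and using a zero vector for out-of-vocabulary words are assumptions, not details from the paper.

import numpy as np

def one_hot(index, size):
    v = np.zeros(size, dtype=np.float32)
    v[index] = 1.0
    return v

def embed_inputs(words, triplets, glove, node_index, edge_index):
    # glove: dict mapping a word to its 100-d vector.
    num_nodes, num_edges = len(node_index), len(edge_index)
    word_vecs = [glove.get(w, np.zeros(100, dtype=np.float32)) for w in words]
    triplet_vecs = [
        np.concatenate([
            one_hot(node_index[src], num_nodes),
            one_hot(edge_index[behavior], num_edges),
            one_hot(node_index[dst], num_nodes),
        ])
        for src, behavior, dst in triplets
    ]
    return word_vecs, triplet_vecs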
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
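A simplified PyTorch sketch of one decoder step consistent with the description above: a GRU cell updates the hidden state from the previous behavior, the new state attends over the columns of the context matrix, and the dynamic context vector is combined with the state to produce logits. The exact parameterization of the attention and output projections is our assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderStep(nn.Module):
    def __init__(self, num_behaviors, hidden=128, context_dim=128):
        super().__init__()
        self.cell = nn.GRUCell(num_behaviors + 1, hidden)   # +1 for the stop symbol
        self.attn = nn.Linear(hidden, context_dim, bias=False)
        self.out  = nn.Linear(hidden + context_dim, num_behaviors + 1)

    def forward(self, prev_behavior_onehot, h_prev, C):
        # prev_behavior_onehot: (B, num_behaviors+1), h_prev: (B, hidden), C: (L, context_dim)
        h = self.cell(prev_behavior_onehot, h_prev)      # new hidden state
        scores = self.attn(h) @ C.T                      # (B, L) affinity with context columns
        alpha = F.softmax(scores, dim=-1)                # attention over the context
        context = alpha @ C                              # (B, context_dim) dynamic context vector
        logits = self.out(torch.cat([h, context], dim=-1))
        return logits, h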
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1 score (F1): The harmonic average of the precision and recall over the whole test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor uses the masking function in the output layer.
This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such a rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
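For reference, an attention map of this kind can be rendered directly from the decoder's attention weights; the snippet below is an illustrative sketch in which random weights stand in for the model's actual coefficients.

```python
import numpy as np
import matplotlib.pyplot as plt

# attention[l, t] = weight on graph triplet l when predicting the behavior at step t.
rng = np.random.default_rng(0)
attention = rng.random((72, 6))                    # 72 triplets, 6 decoding steps
attention /= attention.sum(axis=0, keepdims=True)  # normalize each column

fig, ax = plt.subplots(figsize=(4, 8))
im = ax.imshow(attention, aspect="auto", cmap="viridis")
ax.set_xlabel("decoding step")
ax.set_ylabel("graph triplet")
fig.colorbar(im, ax=ax, label="attention weight")
plt.show()
```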
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively, over INLINEFORM0 increase in EM and GM between our model and the next best two models |
Q: What baselines did they compare their model with?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, the first type of approaches are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
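A behavioral graph of this kind can be stored directly as a list of triplets and queried like a small knowledge base, as in the illustrative sketch below; the node names, behavior strings, and attributes are made up for the example.

```python
# Each rule is (source node, behavior, target node); behaviors may carry
# landmark attributes such as nearby paintings or tables.
triplets = [
    ("room-1", "out of room on right", "corridor-1"),
    ("corridor-1", "follow corridor", "corridor-2"),
    ("corridor-2", "enter room on left", "kitchen"),
]
attributes = {("corridor-1", "follow corridor", "corridor-2"): ["painting", "bookshelf"]}

def valid_behaviors(node):
    # Knowledge-base lookup: which behaviors can be executed from `node`?
    return [(b, dst) for (src, b, dst) in triplets if src == node]

print(valid_behaviors("corridor-1"))
# [('follow corridor', 'corridor-2')]
```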
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
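The sketch below shows one way the embed and encoder layers could be realized in PyTorch. The dimensions follow the description above (100-dimensional word embeddings, hidden size 128), but the random stand-in for the GloVe lookup and the single one-hot vector per triplet are simplifying assumptions of this illustration.

```python
import torch
import torch.nn as nn

V, L = 24, 10                          # words in the instruction, triplets in the graph
word_dim, onehot_dim, hidden = 100, 40, 128

# Embed layer: pre-trained GloVe vectors would be loaded into `word_embed`;
# triplet components are one-hot encoded (random stand-ins here).
word_embed = torch.randn(V, word_dim)
triplet_onehot = torch.eye(onehot_dim)[torch.randint(0, onehot_dim, (L,))]

# Encoder layer: two independent bidirectional GRUs.
instr_gru = nn.GRU(word_dim, hidden, bidirectional=True, batch_first=True)
graph_gru = nn.GRU(onehot_dim, hidden, bidirectional=True, batch_first=True)

I, _ = instr_gru(word_embed.unsqueeze(0))      # (1, V, 2*hidden) instruction encodings
K, _ = graph_gru(triplet_onehot.unsqueeze(0))  # (1, L, 2*hidden) triplet encodings
print(I.shape, K.shape)
```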
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
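One plausible reading of this attention step is sketched below with freshly initialized tensors named after the matrices described above; the bilinear scoring matrix and the exact combination of encodings are assumptions, since the displayed equation is not reproduced in this text.

```python
import torch
import torch.nn.functional as F

V, L, d = 24, 10, 256                  # words, triplets, encoder output size (2*hidden)
I = torch.randn(V, d)                  # encoded instruction words
K = torch.randn(L, d)                  # encoded graph triplets
W = torch.randn(d, d)                  # trainable matrix combining the two sources

# For every triplet k_l, attend over all instruction words i_v.
scores = K @ W @ I.t()                 # (L, V) affinities
alpha = F.softmax(scores, dim=1)       # one distribution over words per triplet
summaries = alpha @ I                  # (L, d) weighted sums of word encodings

# Concatenate each summary with its triplet encoding, as described above.
G = torch.cat([summaries, K], dim=1)   # (L, 2*d)
print(G.shape)
```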
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
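A sketch of such a masking function is given below; the behavior vocabulary, the graph encoding, and the use of negative infinity for invalid entries are illustrative assumptions consistent with the description above.

```python
import numpy as np

BEHAVIORS = ["oor", "cf", "lt", "rt", "iol", "<stop>"]
IDX = {b: i for i, b in enumerate(BEHAVIORS)}

def mask(graph, node):
    # 0 for behaviors that are valid from `node` (and for the stop symbol),
    # -inf for everything else, so argmax can never pick an invalid edge.
    m = np.full(len(BEHAVIORS), -np.inf)
    for behavior in graph.get(node, {}):
        m[IDX[behavior]] = 0.0
    m[IDX["<stop>"]] = 0.0
    return m

graph = {"C-1": {"cf": "C-0", "lt": "C-2"}}
logits = np.array([0.1, 0.7, 0.3, 0.9, 0.2, 0.0])  # decoder scores per behavior
masked = logits + mask(graph, "C-1")
print(BEHAVIORS[int(np.argmax(masked))])           # "cf": "rt" is blocked by the mask
```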
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated Set, with new instructions for environments that were seen during training, and a Test-New Set, with instructions for environments that were held out entirely.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1 score (F1): The harmonic average of the precision and recall over all the test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
Baseline: The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
Ablation: To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor does it use the masking function in the output layer.
Ablation with Mask: This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
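The decoder-input side of scheduled sampling can be summarized as in the sketch below; this is a generic illustration of the idea in BIBREF29 rather than the authors' training code, and the decay schedule is an assumption.

```python
import math
import random

def choose_decoder_input(gold_prev, model_prev, epoch, k=10.0):
    # Inverse-sigmoid decay of the teacher-forcing probability (one common choice).
    eps = k / (k + math.exp(epoch / k))
    # With probability eps feed the ground-truth previous behavior (teacher forcing);
    # otherwise feed the model's own previous prediction.
    return gold_prev if random.random() < eps else model_prev

# Early in training the ground truth dominates; later the model's own
# predictions are used more often, reducing exposure bias.
print([choose_decoder_input("cf", "lt", epoch=0) for _ in range(5)])
```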
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search |
Q: What was the performance of their model?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, the first type of approaches are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
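In practice, an estimate of this kind is typically obtained by maximizing the likelihood of the target behavior sequence, i.e., minimizing a per-step cross-entropy loss. The sketch below illustrates that generic objective with placeholder tensors; it is not taken from the paper.

```python
import torch
import torch.nn as nn

num_behaviors, T = 12, 7                        # behavior vocabulary size, plan length
logits = torch.randn(T, num_behaviors)          # decoder scores for each step (from the model)
target = torch.randint(0, num_behaviors, (T,))  # ground-truth behavior indices

# Negative log-likelihood of the ground-truth plan, summed over the steps.
loss = nn.CrossEntropyLoss(reduction="sum")(logits, target)
print(float(loss))
```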
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
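The masking step can be sketched as follows. The behavior vocabulary shown is an illustrative subset and the graph is a toy node-to-behavior dictionary, but the logic matches the description above: invalid behaviors receive -inf, so the argmax can only select an edge that is executable from the current node or the stop symbol.

```python
import numpy as np

# Illustrative subset of the behavior vocabulary plus the special stop symbol
# (the paper uses 11 behaviors; only a few appear here).
BEHAVIORS = ["oor", "cf", "lt", "iol", "<stop>"]
IDX = {b: i for i, b in enumerate(BEHAVIORS)}

def mask(graph, node):
    """0 for behaviors that are valid edges out of `node` (and for stop), -inf otherwise."""
    m = np.full(len(BEHAVIORS), -np.inf)
    for behavior in graph.get(node, {}):
        m[IDX[behavior]] = 0.0
    m[IDX["<stop>"]] = 0.0
    return m

def decode_step(logits, graph, node):
    """Pick the highest-scoring behavior that is actually executable from `node`."""
    idx = int(np.argmax(logits + mask(graph, node)))
    behavior = BEHAVIORS[idx]
    next_node = node if behavior == "<stop>" else graph[node][behavior]
    return behavior, next_node

# tiny example graph: node -> {behavior: next node}
graph = {"R-1": {"oor": "C-1"}, "C-1": {"cf": "C-0", "lt": "C-0"}}
logits = np.random.randn(len(BEHAVIORS))
print(decode_step(logits, graph, "R-1"))  # either ("oor", "C-1") or ("<stop>", "R-1")
```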
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated set, with new instructions for environments already seen during training, and a Test-New set, with instructions for environments that do not appear in the training data.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors non-trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches the ground truth exactly; otherwise it is 0.
F1 score: The harmonic average of the precision and recall over all the test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
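For reference, EM and ED are straightforward to compute once plans are tokenized into behaviors. The sketch below treats the “swap” operations as substitutions in a standard Levenshtein distance, which is our reading of the description above.

```python
def exact_match(pred, gold):
    """EM: 1 if the predicted behavior sequence matches the ground truth exactly."""
    return int(pred == gold)

def edit_distance(pred, gold):
    """ED: minimum number of insertions, deletions, or substitutions that turn
    `pred` into `gold`, treating each behavior as a single token."""
    m, n = len(pred), len(gold)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

gold = ["oor", "cf", "lt", "cf", "iol"]
pred = ["oor", "cf", "cf", "iol"]
print(exact_match(pred, gold), edit_distance(pred, gold))  # 0 1
```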
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
Baseline: The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
Ablation: To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor does it use the masking function in the output layer.
Ablation with Mask: This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
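A possible preprocessing pipeline along these lines, using NLTK's WordNet lemmatizer, is sketched below; the exact tokenizer and spell-checker the authors used are not specified, so this is only an approximation of their pipeline.

```python
import nltk
from nltk.stem import WordNetLemmatizer

# one-time downloads (uncomment on first run)
# nltk.download("punkt"); nltk.download("wordnet")

lemmatizer = WordNetLemmatizer()

def preprocess(instruction, max_words=150):
    """Lowercase, tokenize, and lemmatize an instruction, then truncate to the
    maximum instruction length. Spell-checking is omitted here for brevity."""
    tokens = nltk.word_tokenize(instruction.lower())
    tokens = [lemmatizer.lemmatize(t) for t in tokens]
    return tokens[:max_words]

print(preprocess("Exit the room and turn right at the corners"))
```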
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
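Scheduled sampling gradually replaces the ground-truth previous behavior fed to the decoder with the model's own prediction as training progresses. The paper cites scheduled sampling (BIBREF29) but does not state which decay schedule it used, so the inverse-sigmoid decay and the constant k below are assumptions.

```python
import math
import random

def use_ground_truth(epoch, k=10.0):
    """Inverse-sigmoid decay: early in training the decoder is mostly fed the
    ground-truth previous behavior, later mostly its own prediction.
    The schedule and k are assumptions, not values from the paper."""
    prob = k / (k + math.exp(epoch / k))
    return random.random() < prob

def pick_previous_behavior(gold_prev, model_prev, epoch):
    return gold_prev if use_ground_truth(epoch) else model_prev

print([round(10.0 / (10.0 + math.exp(e / 10.0)), 2) for e in (0, 20, 50)])  # decaying probabilities
```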
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
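A simple way to implement this rearrangement, assuming that “surrounding” triplets are those incident to the start node, is sketched below.

```python
def order_triplets(triplets, start_node):
    """Place triplets adjacent to the robot's start node first, keeping the
    remaining triplets in alphabetical order."""
    near = sorted(t for t in triplets if start_node in (t[0], t[2]))
    rest = sorted(t for t in triplets if t not in near)
    return near + rest

triplets = [("C-1", "cf", "C-0"), ("R-1", "oor", "C-1"), ("C-0", "iol", "O-3")]
print(order_triplets(triplets, "R-1"))
# [('R-1', 'oor', 'C-1'), ('C-0', 'iol', 'O-3'), ('C-1', 'cf', 'C-0')]
```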
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect in the attention coefficients with high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81 |
f42d470384ca63a8e106c7caf1cb59c7b92dbc27 | f42d470384ca63a8e106c7caf1cb59c7b92dbc27_0 | Q: What evaluation metrics are used?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on the right)”, “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors, depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches for following navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, this first type of approach is foundational: such methods showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
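Training such a model typically amounts to maximizing the likelihood of the ground-truth behavior sequence, for example with a token-level cross-entropy loss as sketched below; the paper does not spell out its exact objective, so this is an assumption about a standard choice.

```python
import torch
import torch.nn.functional as F

def sequence_loss(logits, target_behaviors):
    """Token-level cross-entropy between the decoder's behavior logits and the
    ground-truth behavior ids (the last id here stands for the stop symbol)."""
    # logits: (T, num_behaviors + 1); target_behaviors: (T,) integer ids
    return F.cross_entropy(logits, target_behaviors)

logits = torch.randn(5, 12)              # 5 decoding steps, 11 behaviors + stop
target = torch.tensor([0, 2, 3, 2, 11])  # toy ground-truth sequence ending in stop
print(sequence_loss(logits, target).item())
```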
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
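A minimal sketch of the graph as a triplet knowledge base is shown below; the node tags follow the naming convention above, while the spelled-out behavior names are purely illustrative (the paper abbreviates them), and the landmark attributes attached to each behavior are omitted.

```python
from collections import defaultdict

# Toy triplets (node, behavior, node). Behavior names are illustrative.
TRIPLETS = [
    ("room-1", "out-of-room-right", "corridor-1"),
    ("corridor-1", "follow-corridor", "corridor-0"),
    ("corridor-0", "into-room-left", "office-3"),
]

def build_graph(triplets):
    """Index the triplets so the behaviors executable from a node can be looked up directly."""
    graph = defaultdict(dict)
    for src, behavior, dst in triplets:
        graph[src][behavior] = dst
    return graph

graph = build_graph(TRIPLETS)
print(graph["corridor-1"])  # {'follow-corridor': 'corridor-0'}
```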
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
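A hedged PyTorch sketch of the instruction branch of these two layers follows; the 100-dimensional embedding and 128-unit GRU match the sizes reported elsewhere in the paper, but the class and variable names are our own, and the triplet branch (one-hot component embeddings fed to a second bidirectional GRU) would mirror this structure.

```python
import torch
import torch.nn as nn

class InstructionEncoder(nn.Module):
    """Embed instruction tokens and run a bidirectional GRU over them.
    Class and variable names are illustrative, not the authors' code."""
    def __init__(self, vocab_size, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # pre-trained GloVe weights would be loaded here
        self.gru = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, token_ids):                 # token_ids: (batch, M)
        out, _ = self.gru(self.embed(token_ids))
        return out                                # (batch, M, 2 * hidden): the instruction matrix

encoder = InstructionEncoder(vocab_size=1000)
X = encoder(torch.randint(0, 1000, (1, 12)))      # a 12-word toy instruction
print(X.shape)                                    # torch.Size([1, 12, 256])
```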
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated set, with new instructions for environments already seen during training, and a Test-New set, with instructions for environments that do not appear in the training data.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors non-trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches the ground truth exactly; otherwise it is 0.
F1 score: The harmonic average of the precision and recall over all the test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
Baseline: The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
Ablation: To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor does it use the masking function in the output layer.
Ablation with Mask: This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect in the attention coefficients with high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | exact match, f1 score, edit distance and goal match |
29bdd1fb20d013b23b3962a065de3a564b14f0fb | 29bdd1fb20d013b23b3962a065de3a564b14f0fb_0 | Q: Did the authors use a crowdsourcing platform?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on the right)”, “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors, depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches for following navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, this first type of approach is foundational: such methods showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
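As a rough illustration of the embed and encoder layers, the PyTorch sketch below embeds instruction tokens with a randomly initialized stand-in for the 100-d GloVe table, represents each triplet as the concatenation of three one-hot vectors, and runs two bidirectional GRUs with the 128-d hidden state reported in the implementation details. Whether the two GRU directions are summed or concatenated is not specified above, so the concatenation here, like the class name and the toy sizes, is an assumption.

```python
import torch
import torch.nn as nn

class InstructionGraphEncoder(nn.Module):
    """Minimal sketch of the embed + encoder layers (sizes are illustrative)."""

    def __init__(self, vocab_size, num_nodes, num_edges, word_dim=100, hidden=128):
        super().__init__()
        # Stand-in for the pre-trained 100-d GloVe table (randomly initialized here).
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Assume a triplet is the concatenation of three one-hot vectors.
        self.triplet_dim = 2 * num_nodes + num_edges
        self.instr_gru = nn.GRU(word_dim, hidden, bidirectional=True, batch_first=True)
        self.graph_gru = nn.GRU(self.triplet_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, word_ids, triplet_onehots):
        # word_ids: (1, M) token ids; triplet_onehots: (1, L, triplet_dim)
        X, _ = self.instr_gru(self.word_emb(word_ids))  # (1, M, 2 * hidden)
        Y, _ = self.graph_gru(triplet_onehots)          # (1, L, 2 * hidden)
        return X, Y

enc = InstructionGraphEncoder(vocab_size=1000, num_nodes=20, num_edges=11)
words = torch.randint(0, 1000, (1, 9))
triplets = torch.zeros(1, 30, enc.triplet_dim)
X, Y = enc(words, triplets)
print(X.shape, Y.shape)  # torch.Size([1, 9, 256]) torch.Size([1, 30, 256])
```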
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
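The one-way attention step can be written compactly as a score between each encoded triplet and each encoded instruction word, a softmax over the words, and a weighted sum that is then concatenated with the raw triplet encoding. The exact parameterization of the combination matrix in the equation above is not reproduced here, so the bilinear score in this sketch should be read as one plausible choice.

```python
import torch

def one_way_attention(Y, X, A):
    """One-way attention from graph triplets to instruction words (a sketch).

    Y: (L, d) encoded triplets    X: (M, d) encoded instruction words
    A: (d, d) trainable combination matrix
    Returns [weighted word summary ; triplet encoding] of shape (L, 2d).
    """
    scores = Y @ A @ X.T                      # (L, M): affinity of triplet i to word j
    alpha = torch.softmax(scores, dim=-1)     # one attention distribution per triplet
    summaries = alpha @ X                     # (L, d): per-triplet summary of the words
    return torch.cat([summaries, Y], dim=-1)  # keep the raw triplet, as described above

L, M, d = 30, 9, 256
out = one_way_attention(torch.randn(L, d), torch.randn(M, d), torch.randn(d, d))
print(out.shape)  # torch.Size([30, 512])
```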
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
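A single decoding step, with attention over the context matrix, might look as follows. The sketch stores context vectors as rows rather than columns and folds the trainable parameters of the equations above into two linear layers; both choices, along with the class name and sizes, are simplifications rather than claims about the exact parameterization.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One GRU decoding step with attention over the context matrix (a sketch)."""

    def __init__(self, num_behaviors, hidden, ctx_dim):
        super().__init__()
        self.cell = nn.GRUCell(num_behaviors, hidden)         # input: one-hot previous behavior
        self.attn = nn.Linear(hidden, ctx_dim, bias=False)    # scores against context rows
        self.out = nn.Linear(hidden + ctx_dim, num_behaviors + 1)  # +1 for the "stop" symbol

    def forward(self, prev_behavior_onehot, prev_hidden, C):
        # C: (L, ctx_dim) context matrix from the FC layer, one row per triplet here.
        h = self.cell(prev_behavior_onehot, prev_hidden)      # (1, hidden)
        alpha = torch.softmax(self.attn(h) @ C.T, dim=-1)     # (1, L) attention over context
        ctx = alpha @ C                                       # (1, ctx_dim) dynamic context
        logits = self.out(torch.cat([h, ctx], dim=-1))        # (1, num_behaviors + 1)
        return logits, h

step = DecoderStep(num_behaviors=11, hidden=128, ctx_dim=64)
prev = torch.zeros(1, 11)
prev[0, 3] = 1.0                                              # previous behavior, one-hot
logits, h = step(prev, torch.zeros(1, 128), torch.randn(30, 64))
print(logits.shape)  # torch.Size([1, 12])
```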
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
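A minimal version of the masking function is sketched below. It keeps the scores of behaviors on outgoing edges of the robot's current node, plus the stop symbol, and suppresses everything else; using negative infinity as the penalty is a stand-in for the placeholder value in the text, and all names are illustrative.

```python
import math

def mask_logits(logits, graph_triplets, current_node, stop_symbol="<stop>"):
    """Suppress behaviors that are not executable from the robot's current node.

    `logits` maps each behavior name (and the stop symbol) to a score. Valid
    options keep their score; invalid ones are pushed to -inf so they can
    never be selected.
    """
    valid = {b for (s, b, _t) in graph_triplets if s == current_node}
    valid.add(stop_symbol)
    return {name: (score if name in valid else -math.inf)
            for name, score in logits.items()}

triplets = [("corridor-0", "follow-corridor", "corridor-1"),
            ("corridor-0", "into-room", "room-2")]
raw = {"follow-corridor": 1.3, "into-room": 0.2, "left-turn": 2.5, "<stop>": -0.4}
print(mask_logits(raw, triplets, "corridor-0"))
# {'follow-corridor': 1.3, 'into-room': 0.2, 'left-turn': -inf, '<stop>': -0.4}
```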
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated Set, with new instructions for the environments seen during training, and a Test-New Set, with instructions for environments that do not appear in the training data.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1: The harmonic average of the precision and recall over all the test set BIBREF26 .
ED (edit distance): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 . A computation sketch for the four metrics follows this list.
GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
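The sketch below implements per-pair versions of the four metrics over behavior tokens such as those in the sample plan above. It treats the "swap" operation of the edit distance as a substitution and computes F1 for a single prediction; the micro-averaging over the whole test set is left out, so this is a simplification of the exact protocol.

```python
from collections import Counter

def exact_match(pred, gold):
    return int(pred == gold)

def goal_match(pred_final_node, gold_final_node):
    return int(pred_final_node == gold_final_node)

def edit_distance(pred, gold):
    """Levenshtein distance between two behavior sequences."""
    dp = list(range(len(gold) + 1))
    for i, p in enumerate(pred, 1):
        prev, dp[0] = dp[0], i
        for j, g in enumerate(gold, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (p != g))
    return dp[-1]

def f1(pred, gold):
    """Per-pair F1 over behavior tokens (test-set-level averaging omitted)."""
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

pred = ["oor", "cf", "lt", "cf", "iol"]
gold = ["oor", "cf", "lt", "cf", "iol"]
print(exact_match(pred, gold), edit_distance(pred, gold), f1(pred, gold))  # 1 0 1.0
```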
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor does it use the masking function in the output layer.
This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
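A preprocessing pipeline along these lines can be sketched with NLTK, as below. Spell-checking is omitted and the lemmatizer is called without part-of-speech tags, so this is a simplification of the exact pipeline; the constants mirror the 150-word and 300-triplet caps mentioned above.

```python
# Requires: pip install nltk, plus the usual NLTK data downloads
# (e.g., nltk.download("punkt") and nltk.download("wordnet")).
import nltk
from nltk.stem import WordNetLemmatizer

MAX_WORDS, MAX_TRIPLETS = 150, 300
lemmatizer = WordNetLemmatizer()

def preprocess_instruction(text):
    """Lowercase, tokenize, lemmatize, and truncate a navigation instruction."""
    tokens = nltk.word_tokenize(text.lower())
    lemmas = [lemmatizer.lemmatize(tok) for tok in tokens]
    return lemmas[:MAX_WORDS]

def truncate_graph(triplets):
    """Keep at most MAX_TRIPLETS triplets of the behavioral graph."""
    return triplets[:MAX_TRIPLETS]

print(preprocess_instruction("Exit the room and go to the end of the corridor."))
```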
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such a rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
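One possible reading of the "Ordered Triplets" rearrangement is to sort triplets by the graph distance of their source node from the robot's start location, breaking ties alphabetically. The exact ordering rule is not spelled out above, so the sketch below, including its breadth-first formulation and the made-up node names, is an assumption.

```python
from collections import deque

def order_triplets(triplets, start_node):
    """Place triplets near the robot's start location first (one possible rule)."""
    # Breadth-first distances from the start node over the directed graph.
    adj = {}
    for s, _, t in triplets:
        adj.setdefault(s, []).append(t)
    dist, frontier = {start_node: 0}, deque([start_node])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    # Sort by distance of the source node, falling back to alphabetical order.
    return sorted(triplets, key=lambda tr: (dist.get(tr[0], float("inf")), tr))

triplets = [("office-3", "out-of-office", "corridor-2"),
            ("corridor-0", "follow-corridor", "corridor-1"),
            ("room-1", "out-of-room", "corridor-0"),
            ("corridor-1", "left-turn", "corridor-2")]
print(order_triplets(triplets, "room-1"))  # the "room-1" triplet comes first
```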
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in the performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig. FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of the prediction of sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | Yes |
25b2ae2d86b74ea69b09c140a41593c00c47a82b | 25b2ae2d86b74ea69b09c140a41593c00c47a82b_0 | Q: How were the navigation instructions collected?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, the first type of approach is foundational: such methods showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated Set, with new instructions for the environments seen during training, and a Test-New Set, with instructions for environments that do not appear in the training data.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
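Since plans alternate location and behavior symbols, recovering the behavior tokens used by the metrics amounts to slicing every other token, as in the short sketch below (the helper name is illustrative).

```python
def split_plan(plan):
    """Split a navigation plan into its location and behavior tokens.

    Plans alternate location and behavior symbols, e.g.
    "R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3".
    """
    tokens = plan.split()
    return tokens[0::2], tokens[1::2]  # locations, behaviors

locations, behaviors = split_plan("R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3")
print(behaviors)       # ['oor', 'cf', 'lt', 'cf', 'iol']
print(len(behaviors))  # 5 tokens, as in the example above
```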
We compare the performance of translation approaches based on four metrics:
As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1: The harmonic average of the precision and recall over all the test set BIBREF26 .
ED (edit distance): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor does it use the masking function in the output layer.
This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
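Scheduled sampling replaces the ground-truth previous behavior with the model's own prediction with some probability during training. The decay schedule used in these experiments is not stated, so the linear decay and the function name in the sketch below are illustrative only.

```python
import random

def choose_decoder_input(gold_prev, model_prev, teacher_forcing_prob):
    """Pick the input for one decoding step under scheduled sampling.

    With probability `teacher_forcing_prob` feed the ground-truth previous
    behavior; otherwise feed the behavior the model itself just predicted.
    """
    return gold_prev if random.random() < teacher_forcing_prob else model_prev

# Toy run: the probability of teacher forcing decays linearly over epochs.
for epoch in range(3):
    p = max(0.0, 1.0 - 0.1 * epoch)
    print(epoch, p, choose_decoder_input("follow-corridor", "left-turn", p))
```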
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such a rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in the performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig. FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of the prediction of sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | using Amazon Mechanical Turk using simulated environments with topological maps |
fd7f13b63f6ba674f5d5447b6114a201fe3137cb | fd7f13b63f6ba674f5d5447b6114a201fe3137cb_0 | Q: What language is the experiment done in?
Text: Introduction
Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .
Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):
Each fragment of a sentence within these instructions can be mapped to one or more navigation behaviors. For instance, assume that a robot has a number of primitive navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.
In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.
We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.
We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.
This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.
We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings.
Related work
This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .
Typical approaches to following navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, this first type of approach is foundational: it showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .
Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .
Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .
Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .
We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion.
Problem Formulation
Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.
Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0
based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data.
The Behavioral Graph: A Knowledge Base For Navigation
We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behavior includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.
We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as "room-1" or "lab-2", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.
Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, bookshelves, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details.
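To make the triplet representation concrete, the following minimal Python sketch stores a toy behavioral graph as ⟨node; behavior; node⟩ triplets and looks up which behaviors are executable from a given node. It illustrates the data structure only and is not the authors' implementation; the behavior codes are borrowed from the sample plan in the evaluation section, while the connectivity is invented.

```python
# Minimal sketch (not the authors' code) of a behavioral graph stored as
# <node; behavior; node> triplets and used as a small knowledge base. The
# behavior codes ("oor", "cf", "lt", "iol") come from the sample plan shown
# later in the paper; the nodes and connectivity here are illustrative.
from collections import defaultdict

triplets = [
    ("R-1", "oor", "C-1"),
    ("C-1", "cf", "C-0"),
    ("C-0", "lt", "C-2"),
    ("C-2", "iol", "O-3"),
]

outgoing = defaultdict(list)   # index edges by their source node
for src, behavior, dst in triplets:
    outgoing[src].append((behavior, dst))

def valid_behaviors(node):
    """Behaviors executable from `node` according to the graph."""
    return [b for b, _ in outgoing[node]]

print(valid_behaviors("C-1"))  # ['cf']
```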
Approach
We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).
As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.
Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:
Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.
Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .
Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0
where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .
The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .
The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.
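To illustrate this computation, the NumPy sketch below scores each encoded triplet against every encoded instruction word with a bilinear form, applies a softmax to obtain one attention distribution per triplet, and concatenates the attended summary with the triplet encoding. The bilinear scoring, the variable names, and the toy dimensions are our reading of the description above rather than the authors' exact formulation.

```python
# Hedged sketch of the one-way attention over instruction words.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

N, L, h = 12, 72, 128                # words, triplets, GRU hidden size
X = np.random.randn(N, 2 * h)        # encoded instruction words (bidirectional)
Y = np.random.randn(L, 2 * h)        # encoded graph triplets
W = np.random.randn(2 * h, 2 * h)    # trainable matrix combining both sources

scores = Y @ W @ X.T                 # (L, N) affinities between triplets and words
alpha = softmax(scores)              # one attention distribution per triplet
A = alpha @ X                        # weighted sums of word encodings
Z = np.concatenate([A, Y], axis=-1)  # keep the raw triplet encoding, as in the text
print(Z.shape)                       # (72, 512)
```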
FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .
Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0
with INLINEFORM0 trainable parameters. Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.
Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0
with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph.
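A small sketch of how such a mask can be applied is shown below: logits of behaviors that do not leave the robot's current node are set to negative infinity, so the argmax can only select a feasible behavior or the stop symbol. The behavior vocabulary and connectivity are toy placeholders, and the exact mask values and implementation used by the authors may differ.

```python
# Illustrative output-layer masking over decoder logits.
import numpy as np

behaviors = ["cf", "lt", "rt", "oor", "iol", "<stop>"]
outgoing = {"C-1": {"cf", "lt"}, "C-0": {"rt", "iol"}}  # node -> valid behaviors

def mask(logits, current_node):
    masked = np.full_like(logits, -np.inf)
    for i, b in enumerate(behaviors):
        if b == "<stop>" or b in outgoing.get(current_node, set()):
            masked[i] = logits[i]      # keep logits only for feasible choices
    return masked

logits = np.random.randn(len(behaviors))
best = behaviors[int(np.argmax(mask(logits, "C-1")))]
print(best)                            # one of 'cf', 'lt', '<stop>'
```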
Dataset
We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.
As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest have one). The dataset contains two test set variants: a Test-Repeated Set with new instructions for the environments seen during training, and a Test-New Set with instructions for the remaining simulated environments, which are never seen during training.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
Experiments
This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results.
Evaluation Metrics
While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor", “cf", “lt", “cf", “iol"). In this plan, “R-1",“C-1", “C-0", and “O-3" are symbols for locations (nodes) in the graph.
We compare the performance of translation approaches based on four metrics:
Exact Match (EM): As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.
F1 score: The harmonic average of the precision and recall over all the test set BIBREF26 .
Edit Distance (ED): The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .
Goal Match (GM): GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0.
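For concreteness, the sketch below implements plan-level versions of EM, F1, and ED over lists of behavior tokens. It is our approximation rather than the official evaluation script: GM additionally requires executing the predicted behaviors on the graph and is omitted, and the F1 shown is a per-plan token-overlap variant rather than the corpus-level score reported in the paper.

```python
# Rough plan-level metrics over behavior-token sequences (our reading).
def exact_match(pred, gold):
    return int(pred == gold)

def edit_distance(pred, gold):
    # Levenshtein distance over behavior tokens
    dp = [[0] * (len(gold) + 1) for _ in range(len(pred) + 1)]
    for i in range(len(pred) + 1):
        dp[i][0] = i
    for j in range(len(gold) + 1):
        dp[0][j] = j
    for i in range(1, len(pred) + 1):
        for j in range(1, len(gold) + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[-1][-1]

def f1(pred, gold):
    common = sum(min(pred.count(b), gold.count(b)) for b in set(pred))
    if common == 0:
        return 0.0
    p, r = common / len(pred), common / len(gold)
    return 2 * p * r / (p + r)

gold = ["oor", "cf", "lt", "cf", "iol"]
pred = ["oor", "cf", "cf", "iol"]
print(exact_match(pred, gold), round(f1(pred, gold), 3), edit_distance(pred, gold))
```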
Models Used in the Evaluation
We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:
The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.
To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor uses the masking function in the output layer.
This model is the same as the previous Ablation model, but with the masking function in the output layer.
Implementation Details
We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.
The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.
We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such a rearrangement would help identify the starting location (node) of the robot in the graph. In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”.
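One possible implementation of this “Ordered Triplets” rearrangement is sketched below: triplets are sorted by the breadth-first-search distance of their source node from the robot's start location, so edges around the start location appear first. This is our reconstruction of the idea, not released code; ties keep the original (alphabetical) order because Python's sort is stable.

```python
# Reorder graph triplets so those near the start node come first (a sketch).
from collections import defaultdict, deque

def order_triplets(triplets, start):
    adj = defaultdict(list)
    for src, _, dst in triplets:
        adj[src].append(dst)
    dist, frontier = {start: 0}, deque([start])
    while frontier:                     # BFS over nodes reachable from start
        node = frontier.popleft()
        for nxt in adj[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                frontier.append(nxt)
    return sorted(triplets, key=lambda t: dist.get(t[0], float("inf")))
```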
Quantitative Evaluation
Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.
First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.
We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers if a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.
The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.
Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.
The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.
The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.
Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. Our model's predictions are just INLINEFORM1 edits off on average from the correct navigation plans.
Qualitative Evaluation
This section discusses qualitative results to better understand how the proposed model uses the navigation graph.
We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess if the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).
We observe a locality effect in the attention coefficients with high values (bright areas) in each column of Fig. FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment at each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.
All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check if our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:
“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”
“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”
For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix.
Conclusion
This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.
We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.
As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30 , BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors.
Acknowledgments
The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project. | english language |
c82e945b43b2e61c8ea567727e239662309e9508 | c82e945b43b2e61c8ea567727e239662309e9508_0 | Q: What additional features are proposed for future work?
Text: Introduction
Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.
In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content.
There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression.
Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations," or “the patient has been hearing voices for several months," amongst many other possibilities.
Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.", “Prevent recurrence of psychosis.") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present.
Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time.
Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques.
To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms.
In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance.
To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data.
Related Work
McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.
Rumshisky et al. rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75-topic LDA topic model that they then used in a readmission prediction classifier pipeline. Like with McCoy et al. mccoy2015clinical, the data used to train the LDA model was not ideal as the generalizability of the data was narrow, focusing on only one disorder. Their model achieved readmission prediction performance with an area under the curve of .784 compared to a baseline of .618. To perform clinical validation of the topics derived from the LDA model, they manually evaluated and annotated the topics, identifying the most informative vocabulary for the top ten topics. With their training data, they found the strongest coherence occurred in topics involving substance use, suicidality, and anxiety disorders. But given the unsupervised nature of the LDA clustering algorithm, the topic coherence they observed is not guaranteed across data sets.
Data
[2] The vast majority of patients in our target cohort are dependents on a parental private health insurance plan.
Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.
These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.
We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction.
After using the RPDR query tool to extract EHR paragraphs from the RPDR database, we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project. Certain domains – particularly those involving thoughts and other abstract concepts – are often identifiable by MWEs rather than single words. The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TF-IDF scores for each domain in the categorized paragraphs, identifying those which are conceptually related to the given domain. We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as (non-stemmed) unigrams when generating the matrix. By converting MWEs such as `shortened attention span', `unusual motor activity', `wide-ranging affect', or `linear thinking' to non-stemmed unigrams, the TF-IDF score (and therefore the predictive value) of these terms is magnified. In total, we constructed a corpus of roughly 100,000 paragraphs consisting of 7,000,000 tokens for training our model.
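The snippet below illustrates the MWE handling described above: clinician-identified keyphrases are collapsed into single underscore-joined tokens before vectorization, so that an expression such as “shortened attention span” is weighted as one unit rather than as three common words. The four phrases shown are examples quoted in the text; the full lexicon of 775 keyphrases is released as supplementary data.

```python
# Collapse clinician-identified multiword expressions into single unigrams.
mwes = ["shortened attention span", "unusual motor activity",
        "wide-ranging affect", "linear thinking"]

def collapse_mwes(text, phrases=mwes):
    text = text.lower()
    for phrase in sorted(phrases, key=len, reverse=True):  # longest match first
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return text

print(collapse_mwes("Patient shows a shortened attention span and linear thinking."))
# -> patient shows a shortened_attention_span and linear_thinking.
```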
Annotation Task
In order to evaluate our models, we annotated 1,654 paragraphs selected from the 240,000 paragraphs extracted from Meditech with the clinically relevant domains described in Table TABREF3 . The annotation task was completed by three licensed clinicians. All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information. Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models.
The annotators were instructed to label each paragraph with one or more of the seven risk factor domains. In instances where more than one domain was applicable, annotators assigned the domains in order of prevalence within the paragraph. An eighth label, `Other', was included if a paragraph was ambiguous, uninterpretable, or about a domain not included in the seven risk factor domains (e.g. non-psychiatric medical concerns and lab results). The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard. The gold standard and the clinician-identified keywords and MWEs have received IRB approval for release to the community. They are available as supplementary data to this paper.
Inter-Annotator Agreement
Inter-annotator agreement (IAA) was assessed using a combination of Fleiss's Kappa (a variant of Scott's Pi that measures pairwise agreement for annotation tasks involving more than two annotators) BIBREF16 and Cohen's Multi-Kappa as proposed by Davies and Fleiss davies1982measuring. Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first (most important) domain only. Following adjudication, accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard.
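As a minimal sketch of the agreement computation, assuming the statsmodels implementation of Fleiss's kappa, the code below builds a paragraph-by-category count table from toy first-domain labels given by three annotators and scores it. The five example paragraphs and their labels are invented for illustration; the real gold standard covers 1,654 paragraphs.

```python
# Toy Fleiss's kappa over first-domain labels from three annotators.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

labels = [
    ["Mood", "Mood", "Other"],
    ["Substance", "Substance", "Substance"],
    ["Appearance", "Mood", "Appearance"],
    ["Thought Content", "Thought Content", "Thought Content"],
    ["Interpersonal", "Interpersonal", "Mood"],
]
categories = sorted({c for row in labels for c in row})
# rows: paragraphs; columns: how many annotators chose each category
table = np.array([[row.count(c) for c in categories] for row in labels])
print(round(fleiss_kappa(table), 3))
```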
Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only. Out of the 1,654 annotated paragraphs, 671 (41%) had total agreement across all three annotators. We defined total agreement for the task as a set-theoretic complete intersection of domains for a paragraph identified by all annotators. 98% of paragraphs in total agreement involved one domain. Only 35 paragraphs had total disagreement, which we defined as a set-theoretic null intersection between the three annotators. An analysis of the 35 paragraphs with total disagreement showed that nearly 30% included the term “blunted/restricted". In clinical terminology, these terms can be used to refer to appearance, affect, mood, or emotion. Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context, it was difficult for the annotators to determine the most appropriate domain. This lack of contextual information resulted in each annotator using a different `default' label: Appearance, Mood, and Other. During adjudication, Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains, as it avoids making unnecessary assumptions. [3]Suicidal ideation [4]Homicidal ideation [5]Ethyl alcohol and ethanol
A Fleiss's Kappa of 0.575 lies on the boundary between `Moderate' and `Substantial' agreement as proposed by Landis and Koch landis1977measurement. This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions.
The fourth column in Table TABREF6 , Mean Accuracy, was calculated by averaging the three annotator accuracies as evaluated against the gold standard. This provides us with an informative baseline of human parity on the domain classification task.
[6]Rectified Linear Units, INLINEFORM0 BIBREF17 [7]Adaptive Moment Estimation BIBREF18
Topic Extraction
Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora, and applying them to our classification models.
We use the TfidfVectorizer tool included in the scikit-learn machine learning toolkit BIBREF19 to generate our TF-IDF vector space models, stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 , and calculating TF-IDF scores for unigrams, bigrams, and trigrams. Applying Singular Value Decomposition (SVD) to the TF-IDF matrix, we reduce the vector space to 100 dimensions, which Zhang et al. zhang2011comparative found to improve classifier performance.
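A compact sketch of this vectorization pipeline, using the components named above, is shown below. Only the choices stated in the text (Porter stemming, unigram-to-trigram features, a 100-dimensional SVD) come from the paper; the toy paragraphs are invented, and the SVD rank is capped so the example also runs on this tiny corpus.

```python
# Illustrative TF-IDF + SVD pipeline; the real corpus has ~100,000 paragraphs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    return [stemmer.stem(tok) for tok in text.split()]

corpus = [
    "patient reports auditory hallucinations and disorganized thinking",
    "denies suicidal ideation alcohol or other substance use",
    "groomed appearance cooperative with linear_thinking noted on exam",
]

vectorizer = TfidfVectorizer(tokenizer=stem_tokens, ngram_range=(1, 3))
tfidf = vectorizer.fit_transform(corpus)
n_components = min(100, tfidf.shape[1] - 1, len(corpus) - 1)  # 100 on real data
X = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
print(X.shape)
```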
Starting with the approach taken by McCoy et al. mccoy2015clinical, who used aggregate cosine similarity scores to compute domain similarity directly from their TF-IDF vector space model, we extend this method by training a suite of three-layer multilayer perceptron (MLP) and radial basis function (RBF) neural networks using a variety of parameters to compare performance. We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task. The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 . Prototype vectors for the nodes in the hidden layer of our RBF model are selected via k-means clustering BIBREF23 on each domain paragraph megadocument individually. The RBF transfer function for each hidden layer node is assigned the same width, which is based off the maximum Euclidean distance between the centroids that were computed using k-means.
To prevent overfitting to the training data, we utilize a dropout rate BIBREF24 of 0.2 on the input layer of all models and 0.5 on the MLP hidden layer.
Since our classification problem is multiclass, multilabel, and open-world, we employ seven nodes with sigmoid activations in the output layer, one for each risk factor domain. This allows us to identify paragraphs that fall into more than one of the seven domains, as well as determine paragraphs that should be classified as Other. Unlike the traditionally used softmax activation function, which is ideal for single-label, closed-world classification tasks, sigmoid nodes output class likelihoods for each node independently without the normalization across all classes that occurs in softmax.
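A minimal Keras sketch of such an MLP is given below. The 100-dimensional input (from the SVD step), the dropout rates, the ReLU hidden units, the Adam optimizer, and the seven sigmoid outputs follow the text and its footnotes; the hidden-layer width and the binary cross-entropy loss are our assumptions, since Table TABREF7 is not reproduced here.

```python
# Sketch of the three-layer MLP with independent sigmoid outputs per domain.
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Input(shape=(100,)),             # 100-dim SVD document vectors
    Dropout(0.2),                    # dropout on the input layer
    Dense(64, activation="relu"),    # hidden layer (width assumed)
    Dropout(0.5),                    # dropout on the MLP hidden layer
    Dense(7, activation="sigmoid"),  # one likelihood per risk factor domain
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```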
We find that the risk factor domains vary in the degree of homogeneity of language used, and as such certain domains produce higher similarity scores, on average, than others. To account for this, we calculate threshold similarity scores for each domain using the formula min = avg(sim) + c * std(sim), where std(sim) is the standard deviation of the similarity scores and c is a constant, which we set to 0.78 for our MLP model and 1.2 for our RBF model through trial-and-error. Employing a generalized formula, as opposed to manually identifying threshold similarity scores for each domain, has the advantage of flexibility with regard to the target data, which may vary in average similarity scores depending on its similarity to the training data. If a paragraph does not meet the threshold on any domain, it is classified as Other.
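The thresholding rule can be sketched as follows, where `scores` is a paragraphs-by-domains array of model outputs and c = 0.78 matches the MLP setting; whether the mean and standard deviation are taken over the scored paragraphs, as done here, or over held-out data is our reading, since the text does not pin this down.

```python
# Threshold-based multilabel assignment with an "Other" fallback (a sketch).
import numpy as np

DOMAINS = ["Appearance", "Mood", "Interpersonal", "Occupation",
           "Thought Content", "Thought Process", "Substance"]

def assign_domains(scores, c=0.78):
    scores = np.asarray(scores)                       # shape: (n_paragraphs, 7)
    thresholds = scores.mean(axis=0) + c * scores.std(axis=0)
    assigned = []
    for row in scores:
        hits = [d for d, s, t in zip(DOMAINS, row, thresholds) if s >= t]
        assigned.append(hits if hits else ["Other"])
    return assigned

print(assign_domains(np.random.rand(5, 7)))
```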
Results and Discussion
Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard. To assess relative performance of feature representations, we also include performance metrics of our models without MWEs. Because this is a multilabel classification task we use macro-averaging to compute precision, recall, and F1 scores for each paragraph in the testing set. In identifying domains individually, our models achieved the highest per-domain scores on Substance (F1 INLINEFORM0 0.8) and the lowest scores on Interpersonal and Mood (F1 INLINEFORM1 0.5). We observe a consistency in per-domain performance rankings between our MLP and RBF models.
The wide variance in per-domain performance is due to a number of factors. Most notably, the training examples we extracted from RPDR – while very comparable to our target OnTrackTM data – may not have an adequate variety of content and range of vocabulary. Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus, it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise. Additionally, all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains, making the clustering behavior of the paragraphs more difficult to define. Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2-component Linear Discriminant Analysis (LDA) BIBREF26 .
Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27 , BIBREF28 , BIBREF29 , we find that a MLP network performs marginally better with significantly less preprocessing (i.e. k-means and width calculations) involved. We can see in Figure FIGREF10 that Thought Process, Appearance, Substance, and – to a certain extent – Occupation clearly occupy specific regions, whereas Interpersonal, Mood, and Thought Content occupy the same noisy region where multiple domains overlap. Given that similarity is computed using Euclidean distance in an RBF network, it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters since prototype centroids from the risk factor domains will overlap and be less differentiable. This is confirmed by the results in Table TABREF9 , where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains (0.496 vs 0.448 for Interpersonal, 0.530 vs 0.496 for Mood, and 0.721 vs 0.678 for Thought Content) compared to the non-overlapping domains (0.564 vs 0.566 for Appearance, 0.592 vs 0.598 for Occupation, 0.797 vs 0.792 for Substance, and 0.635 vs 0.624 for Thought Process). We also observe a similarity in the words and phrases with the highest TF-IDF scores across the overlapping domains: many of the Thought Content words and phrases with the highest TF-IDF scores involve interpersonal relations (e.g. `fear surrounding daughter', `father', `family history', `familial conflict') and there is a high degree of similarity between high-scoring words for Mood (e.g. `meets anxiety criteria', `cope with mania', `ocd'[8]) and Thought Content (e.g. `mania', `feels anxious', `feels exhausted').
[8]Obsessive-compulsive disorder
MWEs play a large role in correctly identifying risk factor domains. Factoring them into our models increased classification performance by 15%, a marked improvement over our baseline model. This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently.
Threshold similarity scores also play a large role in determining the precision and recall of our models: higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain. Conversely, more paragraphs are incorrectly classified as Other when thresholds are set higher. Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk, misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages. Paragraphs misclassified as Other, however, will be discarded from the data pipeline. Therefore, we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain.
Future Work and Conclusion
To achieve our goal of creating a framework for a readmission risk classifier, the present study performed necessary evaluation steps by updating and adding to our model iteratively. In the first stage of the project, we focused on collecting the data necessary for training and testing, and on the domain classification annotation task. At the same time, we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling. In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information.
Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.
We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological (EEG) biomarkers, structural brain anatomy from MRI scans (gray matter volume, cortical thickness, cortical surface-area), social and role functioning assessments, personality assessment (NEO-FFI[9]), and various symptom scales (PANSS[10], MADRS[11], YMRS[12]). For each feature we consider adding, we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission.
Acknowledgments
This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.
[9]NEO Five-Factor Inventory BIBREF30 [10]Positive and Negative Syndrome Scale BIBREF31 [11]Montgomery–Åsberg Depression Rating Scale BIBREF32 [12]Young Mania Rating Scale BIBREF33 | distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort |
fbee81a9d90ff23603ee4f5986f9e8c0eb035b52 | fbee81a9d90ff23603ee4f5986f9e8c0eb035b52_0 | Q: What are their initial results on this task?
Text: Introduction
Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.
In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content.
There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression.
Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations," or “the patient has been hearing voices for several months," amongst many other possibilities.
Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.", “Prevent recurrence of psychosis.") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present.
Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time.
Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques.
To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms.
In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance.
To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data.
Related Work
McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.
Rumshisky et al. rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75-topic LDA topic model that they then used in a readmission prediction classifier pipeline. Like with McCoy et al. mccoy2015clinical, the data used to train the LDA model was not ideal as the generalizability of the data was narrow, focusing on only one disorder. Their model achieved readmission prediction performance with an area under the curve of .784 compared to a baseline of .618. To perform clinical validation of the topics derived from the LDA model, they manually evaluated and annotated the topics, identifying the most informative vocabulary for the top ten topics. With their training data, they found the strongest coherence occurred in topics involving substance use, suicidality, and anxiety disorders. But given the unsupervised nature of the LDA clustering algorithm, the topic coherence they observed is not guaranteed across data sets.
Data
[2]The vast majority of patients in our target cohort are dependents on a parental private health insurance plan.
Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.
These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.
We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction.
After using the RPDR query tool to extract EHR paragraphs from the RPDR database, we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project. Certain domains – particularly those involving thoughts and other abstract concepts – are often identifiable by MWEs rather than single words. The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TF-IDF scores for each domain in the categorized paragraphs, identifying those which are conceptually related to the given domain. We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as (non-stemmed) unigrams when generating the matrix. By converting MWEs such as `shortened attention span', `unusual motor activity', `wide-ranging affect', or `linear thinking' to non-stemmed unigrams, the TF-IDF score (and therefore the predictive value) of these terms is magnified. In total, we constructed a corpus of roughly 100,000 paragraphs consisting of 7,000,000 tokens for training our model.
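As an illustration of this step, multiword expressions can be collapsed into single tokens before vectorization so that TF-IDF treats each keyphrase as one (non-stemmed) unigram. The sketch below is a simplified version of the idea; the lexicon entries, function name, and matching strategy are illustrative assumptions, not the project's actual 775-keyphrase resource.

```python
import re

# Hypothetical keyphrase lexicon; the real clinician-curated 775-entry list is not shown here.
MWE_LEXICON = ["short attention span", "unusual motor activity", "linear thinking"]

def mwes_to_unigrams(text, lexicon=MWE_LEXICON):
    """Rewrite each multiword expression as a single underscore-joined token
    so that downstream TF-IDF treats it as one (non-stemmed) unigram."""
    for phrase in sorted(lexicon, key=len, reverse=True):  # longest match first
        joined = phrase.replace(" ", "_")
        text = re.sub(re.escape(phrase), joined, text, flags=re.IGNORECASE)
    return text

print(mwes_to_unigrams("Patient shows a short attention span and linear thinking."))
# -> "Patient shows a short_attention_span and linear_thinking."
```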
Annotation Task
In order to evaluate our models, we annotated 1,654 paragraphs selected from the 240,000 paragraphs extracted from Meditech with the clinically relevant domains described in Table TABREF3 . The annotation task was completed by three licensed clinicians. All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information. Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models.
The annotators were instructed to label each paragraph with one or more of the seven risk factor domains. In instances where more than one domain was applicable, annotators assigned the domains in order of prevalence within the paragraph. An eighth label, `Other', was included if a paragraph was ambiguous, uninterpretable, or about a domain not included in the seven risk factor domains (e.g. non-psychiatric medical concerns and lab results). The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard. The gold standard and the clinician-identified keywords and MWEs have received IRB approval for release to the community. They are available as supplementary data to this paper.
Inter-Annotator Agreement
Inter-annotator agreement (IAA) was assessed using a combination of Fleiss's Kappa (a variant of Scott's Pi that measures pairwise agreement for annotation tasks involving more than two annotators) BIBREF16 and Cohen's Multi-Kappa as proposed by Davies and Fleiss davies1982measuring. Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first (most important) domain only. Following adjudication, accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard.
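For reference, Fleiss's Kappa can be computed directly from a subjects-by-categories matrix of rating counts. The sketch below assumes a single-label view of the task (the combination with Cohen's Multi-Kappa used for the multilabel case is not reproduced), and the toy ratings are invented for illustration.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss's kappa for a subjects x categories matrix of rating counts,
    where each row sums to the (constant) number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    # Per-subject agreement P_i and chance agreement P_e
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 5 paragraphs, 3 annotators, 3 domain labels
ratings = [[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1], [0, 0, 3]]
print(round(fleiss_kappa(ratings), 3))  # ~0.493 for this toy example
```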
Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only. Out of the 1,654 annotated paragraphs, 671 (41%) had total agreement across all three annotators. We defined total agreement for the task as a set-theoretic complete intersection of domains for a paragraph identified by all annotators. 98% of paragraphs in total agreement involved one domain. Only 35 paragraphs had total disagreement, which we defined as a set-theoretic null intersection between the three annotators. An analysis of the 35 paragraphs with total disagreement showed that nearly 30% included the term “blunted/restricted". In clinical terminology, these terms can be used to refer to appearance, affect, mood, or emotion. Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context, it was difficult for the annotators to determine the most appropriate domain. This lack of contextual information resulted in each annotator using a different `default' label: Appearance, Mood, and Other. During adjudication, Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains, as it avoids making unnecessary assumptions. [3]Suicidal ideation [4]Homicidal ideation [5]Ethyl alcohol and ethanol
A Fleiss's Kappa of 0.575 lies on the boundary between `Moderate' and `Substantial' agreement as proposed by Landis and Koch landis1977measurement. This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions.
The fourth column in Table TABREF6 , Mean Accuracy, was calculated by averaging the three annotator accuracies as evaluated against the gold standard. This provides us with an informative baseline of human parity on the domain classification task.
[6]Rectified Linear Units, f(x) = max(0, x) BIBREF17 [7]Adaptive Moment Estimation BIBREF18
Topic Extraction
Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora, and applying them to our classification models.
We use the TfidfVectorizer tool included in the scikit-learn machine learning toolkit BIBREF19 to generate our TF-IDF vector space models, stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 , and calculating TF-IDF scores for unigrams, bigrams, and trigrams. Applying Singular Value Decomposition (SVD) to the TF-IDF matrix, we reduce the vector space to 100 dimensions, which Zhang et al. zhang2011comparative found to improve classifier performance.
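A minimal sketch of this vectorization pipeline is shown below; settings not stated in the text (e.g. lowercasing and token filtering) are assumptions, and the training paragraphs are only referenced, not included.

```python
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize  # requires the NLTK 'punkt' data
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_tokenize(text):
    # Porter-stem every token before TF-IDF weighting
    return [stemmer.stem(tok) for tok in word_tokenize(text)]

# TF-IDF over unigrams, bigrams, and trigrams, reduced to 100 dimensions with SVD
vectorizer = TfidfVectorizer(tokenizer=stem_tokenize, ngram_range=(1, 3))
svd = TruncatedSVD(n_components=100, random_state=0)
pipeline = make_pipeline(vectorizer, svd)

# `train_paragraphs` would be the ~100,000 RPDR training paragraphs (not shown here)
# X = pipeline.fit_transform(train_paragraphs)
```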
Starting with the approach taken by McCoy et al. mccoy2015clinical, who used aggregate cosine similarity scores to compute domain similarity directly from their TF-IDF vector space model, we extend this method by training a suite of three-layer multilayer perceptron (MLP) and radial basis function (RBF) neural networks using a variety of parameters to compare performance. We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task. The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 . Prototype vectors for the nodes in the hidden layer of our RBF model are selected via k-means clustering BIBREF23 on each domain paragraph megadocument individually. The RBF transfer function for each hidden layer node is assigned the same width, which is based off the maximum Euclidean distance between the centroids that were computed using k-means.
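The prototype selection described here can be sketched as follows; the number of prototypes per domain and the exact width formula are not given in the text, so the scaling heuristic below is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist

def rbf_prototypes(domain_vectors, n_prototypes):
    """Select RBF hidden-layer prototype vectors for one risk factor domain
    via k-means, and derive a shared width from the maximum centroid distance."""
    km = KMeans(n_clusters=n_prototypes, random_state=0).fit(domain_vectors)
    centroids = km.cluster_centers_
    d_max = pdist(centroids).max()             # largest Euclidean distance between centroids
    width = d_max / np.sqrt(2 * n_prototypes)  # common heuristic; the paper's exact formula is not stated
    return centroids, width
```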
To prevent overfitting to the training data, we utilize a dropout rate BIBREF24 of 0.2 on the input layer of all models and 0.5 on the MLP hidden layer.
Since our classification problem is multiclass, multilabel, and open-world, we employ seven nodes with sigmoid activations in the output layer, one for each risk factor domain. This allows us to identify paragraphs that fall into more than one of the seven domains, as well as determine paragraphs that should be classified as Other. Unlike the traditionally used softmax activation function, which is ideal for single-label, closed-world classification tasks, sigmoid nodes output class likelihoods for each node independently without the normalization across all classes that occurs in softmax.
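A minimal tf.keras sketch consistent with this description is given below. The hidden-layer size and the training loss are assumptions (the actual architectures appear in Table TABREF7), and the original setup used standalone Keras with a TensorFlow backend rather than tf.keras.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_DOMAINS = 7       # one sigmoid output per risk factor domain
INPUT_DIM = 100     # SVD-reduced TF-IDF features
HIDDEN_UNITS = 256  # assumption: the exact hidden-layer size is reported in Table TABREF7

model = keras.Sequential([
    layers.Dropout(0.2, input_shape=(INPUT_DIM,)),   # dropout on the input layer
    layers.Dense(HIDDEN_UNITS, activation="relu"),   # ReLU hidden layer
    layers.Dropout(0.5),                             # dropout on the MLP hidden layer
    layers.Dense(N_DOMAINS, activation="sigmoid"),   # independent per-domain likelihoods
])
model.compile(optimizer="adam", loss="binary_crossentropy")  # loss is an assumption
model.summary()
```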
We find that the risk factor domains vary in the degree of homogeneity of language used, and as such certain domains produce higher similarity scores, on average, than others. To account for this, we calculate threshold similarity scores for each domain using the formula min = avg(sim) + a * σ(sim), where σ(sim) is the standard deviation of the similarity scores and a is a constant, which we set to 0.78 for our MLP model and 1.2 for our RBF model through trial-and-error. Employing a generalized formula as opposed to manually identifying threshold similarity scores for each domain has the advantage of flexibility in regards to the target data, which may vary in average similarity scores depending on its similarity to the training data. If a paragraph does not meet threshold on any domain, it is classified as Other.
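The thresholding rule can be implemented per domain as in the following sketch; the variable names and the provenance of the similarity scores are illustrative.

```python
import numpy as np

DOMAINS = ["Appearance", "Mood", "Interpersonal", "Occupation",
           "Thought Content", "Thought Process", "Substance"]

def domain_thresholds(train_scores, alpha):
    """Per-domain threshold: mean similarity plus alpha standard deviations.
    `train_scores` is an (n_paragraphs x 7) array of model similarity scores."""
    return train_scores.mean(axis=0) + alpha * train_scores.std(axis=0)

def assign_domains(scores, thresholds):
    """Return all domains whose score meets threshold, or 'Other' if none do."""
    labels = [d for d, s, t in zip(DOMAINS, scores, thresholds) if s >= t]
    return labels if labels else ["Other"]

# alpha = 0.78 for the MLP model, 1.2 for the RBF model (set by trial-and-error)
```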
Results and Discussion
Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard. To assess relative performance of feature representations, we also include performance metrics of our models without MWEs. Because this is a multilabel classification task we use macro-averaging to compute precision, recall, and F1 scores for each paragraph in the testing set. In identifying domains individually, our models achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5). We observe a consistency in per-domain performance rankings between our MLP and RBF models.
The wide variance in per-domain performance is due to a number of factors. Most notably, the training examples we extracted from RPDR – while very comparable to our target OnTrackTM data – may not have an adequate variety of content and range of vocabulary. Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus, it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise. Additionally, all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains, making the clustering behavior of the paragraphs more difficult to define. Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2-component Linear Discriminant Analysis (LDA) BIBREF26 .
Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27 , BIBREF28 , BIBREF29 , we find that a MLP network performs marginally better with significantly less preprocessing (i.e. k-means and width calculations) involved. We can see in Figure FIGREF10 that Thought Process, Appearance, Substance, and – to a certain extent – Occupation clearly occupy specific regions, whereas Interpersonal, Mood, and Thought Content occupy the same noisy region where multiple domains overlap. Given that similarity is computed using Euclidean distance in an RBF network, it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters since prototype centroids from the risk factor domains will overlap and be less differentiable. This is confirmed by the results in Table TABREF9 , where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains (0.496 vs 0.448 for Interpersonal, 0.530 vs 0.496 for Mood, and 0.721 vs 0.678 for Thought Content) compared to the non-overlapping domains (0.564 vs 0.566 for Appearance, 0.592 vs 0.598 for Occupation, 0.797 vs 0.792 for Substance, and 0.635 vs 0.624 for Thought Process). We also observe a similarity in the words and phrases with the highest TF-IDF scores across the overlapping domains: many of the Thought Content words and phrases with the highest TF-IDF scores involve interpersonal relations (e.g. `fear surrounding daughter', `father', `family history', `familial conflict') and there is a high degree of similarity between high-scoring words for Mood (e.g. `meets anxiety criteria', `cope with mania', `ocd'[8]) and Thought Content (e.g. `mania', `feels anxious', `feels exhausted').
[8]Obsessive-compulsive disorder
MWEs play a large role in correctly identifying risk factor domains. Factoring them into our models increased classification performance by 15%, a marked improvement over our baseline model. This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently.
Threshold similarity scores also play a large role in determining the precision and recall of our models: higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain. Conversely, more paragraphs are incorrectly classified as Other when thresholds are set higher. Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk, misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages. Paragraphs misclassified as Other, however, will be discarded from the data pipeline. Therefore, we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain.
Future Work and Conclusion
To achieve our goal of creating a framework for a readmission risk classifier, the present study performed necessary evaluation steps by updating and adding to our model iteratively. In the first stage of the project, we focused on collecting the data necessary for training and testing, and on the domain classification annotation task. At the same time, we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling. In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information.
Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.
We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological (EEG) biomarkers, structural brain anatomy from MRI scans (gray matter volume, cortical thickness, cortical surface-area), social and role functioning assessments, personality assessment (NEO-FFI[9]), and various symptom scales (PANSS[10], MADRS[11], YMRS[12]). For each feature we consider adding, we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission.
Acknowledgments
This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.
[9]NEO Five-Factor Inventory BIBREF30 [10]Positive and Negative Syndrome Scale BIBREF31 [11]Montgomery-Åsberg Depression Rating Scale BIBREF32 [12]Young Mania Rating Scale BIBREF33 | Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models. |
39cf0b3974e8a19f3745ad0bcd1e916bf20eeab8 | 39cf0b3974e8a19f3745ad0bcd1e916bf20eeab8_0 | Q: What datasets did the authors use?
Text: Introduction
Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.
In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content.
There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression.
Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations," or “the patient has been hearing voices for several months," amongst many other possibilities.
Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.", “Prevent recurrence of psychosis.") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present.
Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time.
Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques.
To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms.
In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance.
To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data.
Related Work
McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.
Rumshisky et al. rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75-topic LDA topic model that they then used in a readmission prediction classifier pipeline. Like with McCoy et al. mccoy2015clinical, the data used to train the LDA model was not ideal as the generalizability of the data was narrow, focusing on only one disorder. Their model achieved readmission prediction performance with an area under the curve of .784 compared to a baseline of .618. To perform clinical validation of the topics derived from the LDA model, they manually evaluated and annotated the topics, identifying the most informative vocabulary for the top ten topics. With their training data, they found the strongest coherence occurred in topics involving substance use, suicidality, and anxiety disorders. But given the unsupervised nature of the LDA clustering algorithm, the topic coherence they observed is not guaranteed across data sets.
Data
[2]The vast majority of patients in our target cohort are dependents on a parental private health insurance plan.
Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.
These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.
We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction.
After using the RPDR query tool to extract EHR paragraphs from the RPDR database, we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project. Certain domains – particularly those involving thoughts and other abstract concepts – are often identifiable by MWEs rather than single words. The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TF-IDF scores for each domain in the categorized paragraphs, identifying those which are conceptually related to the given domain. We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as (non-stemmed) unigrams when generating the matrix. By converting MWEs such as `shortened attention span', `unusual motor activity', `wide-ranging affect', or `linear thinking' to non-stemmed unigrams, the TF-IDF score (and therefore the predictive value) of these terms is magnified. In total, we constructed a corpus of roughly 100,000 paragraphs consisting of 7,000,000 tokens for training our model.
Annotation Task
In order to evaluate our models, we annotated 1,654 paragraphs selected from the 240,000 paragraphs extracted from Meditech with the clinically relevant domains described in Table TABREF3 . The annotation task was completed by three licensed clinicians. All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information. Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models.
The annotators were instructed to label each paragraph with one or more of the seven risk factor domains. In instances where more than one domain was applicable, annotators assigned the domains in order of prevalence within the paragraph. An eighth label, `Other', was included if a paragraph was ambiguous, uninterpretable, or about a domain not included in the seven risk factor domains (e.g. non-psychiatric medical concerns and lab results). The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard. The gold standard and the clinician-identified keywords and MWEs have received IRB approval for release to the community. They are available as supplementary data to this paper.
Inter-Annotator Agreement
Inter-annotator agreement (IAA) was assessed using a combination of Fleiss's Kappa (a variant of Scott's Pi that measures pairwise agreement for annotation tasks involving more than two annotators) BIBREF16 and Cohen's Multi-Kappa as proposed by Davies and Fleiss davies1982measuring. Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first (most important) domain only. Following adjudication, accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard.
Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only. Out of the 1,654 annotated paragraphs, 671 (41%) had total agreement across all three annotators. We defined total agreement for the task as a set-theoretic complete intersection of domains for a paragraph identified by all annotators. 98% of paragraphs in total agreement involved one domain. Only 35 paragraphs had total disagreement, which we defined as a set-theoretic null intersection between the three annotators. An analysis of the 35 paragraphs with total disagreement showed that nearly 30% included the term “blunted/restricted". In clinical terminology, these terms can be used to refer to appearance, affect, mood, or emotion. Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context, it was difficult for the annotators to determine the most appropriate domain. This lack of contextual information resulted in each annotator using a different `default' label: Appearance, Mood, and Other. During adjudication, Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains, as it avoids making unnecessary assumptions. [3]Suicidal ideation [4]Homicidal ideation [5]Ethyl alcohol and ethanol
A Fleiss's Kappa of 0.575 lies on the boundary between `Moderate' and `Substantial' agreement as proposed by Landis and Koch landis1977measurement. This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions.
The fourth column in Table TABREF6 , Mean Accuracy, was calculated by averaging the three annotator accuracies as evaluated against the gold standard. This provides us with an informative baseline of human parity on the domain classification task.
[6]Rectified Linear Units, f(x) = max(0, x) BIBREF17 [7]Adaptive Moment Estimation BIBREF18
Topic Extraction
Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora, and applying them to our classification models.
We use the TfidfVectorizer tool included in the scikit-learn machine learning toolkit BIBREF19 to generate our TF-IDF vector space models, stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 , and calculating TF-IDF scores for unigrams, bigrams, and trigrams. Applying Singular Value Decomposition (SVD) to the TF-IDF matrix, we reduce the vector space to 100 dimensions, which Zhang et al. zhang2011comparative found to improve classifier performance.
Starting with the approach taken by McCoy et al. mccoy2015clinical, who used aggregate cosine similarity scores to compute domain similarity directly from their TF-IDF vector space model, we extend this method by training a suite of three-layer multilayer perceptron (MLP) and radial basis function (RBF) neural networks using a variety of parameters to compare performance. We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task. The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 . Prototype vectors for the nodes in the hidden layer of our RBF model are selected via k-means clustering BIBREF23 on each domain paragraph megadocument individually. The RBF transfer function for each hidden layer node is assigned the same width, which is based off the maximum Euclidean distance between the centroids that were computed using k-means.
To prevent overfitting to the training data, we utilize a dropout rate BIBREF24 of 0.2 on the input layer of all models and 0.5 on the MLP hidden layer.
Since our classification problem is multiclass, multilabel, and open-world, we employ seven nodes with sigmoid activations in the output layer, one for each risk factor domain. This allows us to identify paragraphs that fall into more than one of the seven domains, as well as determine paragraphs that should be classified as Other. Unlike the traditionally used softmax activation function, which is ideal for single-label, closed-world classification tasks, sigmoid nodes output class likelihoods for each node independently without the normalization across all classes that occurs in softmax.
We find that the risk factor domains vary in the degree of homogeneity of language used, and as such certain domains produce higher similarity scores, on average, than others. To account for this, we calculate threshold similarity scores for each domain using the formula min = avg(sim) + a * σ(sim), where σ(sim) is the standard deviation of the similarity scores and a is a constant, which we set to 0.78 for our MLP model and 1.2 for our RBF model through trial-and-error. Employing a generalized formula as opposed to manually identifying threshold similarity scores for each domain has the advantage of flexibility in regards to the target data, which may vary in average similarity scores depending on its similarity to the training data. If a paragraph does not meet threshold on any domain, it is classified as Other.
Results and Discussion
Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard. To assess relative performance of feature representations, we also include performance metrics of our models without MWEs. Because this is a multilabel classification task we use macro-averaging to compute precision, recall, and F1 scores for each paragraph in the testing set. In identifying domains individually, our models achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5). We observe a consistency in per-domain performance rankings between our MLP and RBF models.
The wide variance in per-domain performance is due to a number of factors. Most notably, the training examples we extracted from RPDR – while very comparable to our target OnTrackTM data – may not have an adequate variety of content and range of vocabulary. Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus, it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise. Additionally, all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains, making the clustering behavior of the paragraphs more difficult to define. Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2-component Linear Discriminant Analysis (LDA) BIBREF26 .
Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27 , BIBREF28 , BIBREF29 , we find that a MLP network performs marginally better with significantly less preprocessing (i.e. k-means and width calculations) involved. We can see in Figure FIGREF10 that Thought Process, Appearance, Substance, and – to a certain extent – Occupation clearly occupy specific regions, whereas Interpersonal, Mood, and Thought Content occupy the same noisy region where multiple domains overlap. Given that similarity is computed using Euclidean distance in an RBF network, it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters since prototype centroids from the risk factor domains will overlap and be less differentiable. This is confirmed by the results in Table TABREF9 , where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains (0.496 vs 0.448 for Interpersonal, 0.530 vs 0.496 for Mood, and 0.721 vs 0.678 for Thought Content) compared to the non-overlapping domains (0.564 vs 0.566 for Appearance, 0.592 vs 0.598 for Occupation, 0.797 vs 0.792 for Substance, and 0.635 vs 0.624 for Thought Process). We also observe a similarity in the words and phrases with the highest TF-IDF scores across the overlapping domains: many of the Thought Content words and phrases with the highest TF-IDF scores involve interpersonal relations (e.g. `fear surrounding daughter', `father', `family history', `familial conflict') and there is a high degree of similarity between high-scoring words for Mood (e.g. `meets anxiety criteria', `cope with mania', `ocd'[8]) and Thought Content (e.g. `mania', `feels anxious', `feels exhausted').
[8]Obsessive-compulsive disorder
MWEs play a large role in correctly identifying risk factor domains. Factoring them into our models increased classification performance by 15%, a marked improvement over our baseline model. This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently.
Threshold similarity scores also play a large role in determining the precision and recall of our models: higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain. Conversely, more paragraphs are incorrectly classified as Other when thresholds are set higher. Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk, misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages. Paragraphs misclassified as Other, however, will be discarded from the data pipeline. Therefore, we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain.
Future Work and Conclusion
To achieve our goal of creating a framework for a readmission risk classifier, the present study performed necessary evaluation steps by updating and adding to our model iteratively. In the first stage of the project, we focused on collecting the data necessary for training and testing, and on the domain classification annotation task. At the same time, we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling. In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information.
Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.
We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological (EEG) biomarkers, structural brain anatomy from MRI scans (gray matter volume, cortical thickness, cortical surface-area), social and role functioning assessments, personality assessment (NEO-FFI[9]), and various symptom scales (PANSS[10], MADRS[11], YMRS[12]). For each feature we consider adding, we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission.
Acknowledgments
This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.
[9]NEO Five-Factor Inventory BIBREF30 [10]Positive and Negative Syndrome Scale BIBREF31 [11]Montgomery-Åsberg Depression Rating Scale BIBREF32 [12]Young Mania Rating Scale BIBREF33 | a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital, an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR) |
1f6180bba0bc657c773bd3e4269f87540a520ead | 1f6180bba0bc657c773bd3e4269f87540a520ead_0 | Q: How many linguistic and semantic features are learned?
Text: Introduction
Text: Introduction
Neural machine translation (NMT) has achieved impressive performance on machine translation tasks in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, to limit training time and memory consumption, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem and, in turn, to inaccurate translations. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture linguistic and semantic information when the training corpus contains too many rare and unknown words.
Both Turkish and Uyghur are agglutinative and highly-inflected languages in which words are formed by suffixes attaching to a stem BIBREF4. A word consists of smaller morpheme units without any splitter between them, and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is followed by zero or more suffixes that have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce a very large vocabulary because of the derivational morphology, so when translating from agglutinative languages, many words are unseen at training time. Moreover, depending on the semantic context, the same word generally has different segmentation forms in the training corpus.
To incorporate morphological knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method for the source side of the Turkish-English and Uyghur-Chinese machine translation tasks, which segments complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:
Stem with combined suffix
Stem with singular suffix
Byte Pair Encoding (BPE)
BPE on stem with combined suffix
BPE on stem with singular suffix
The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve a significant improvement of up to 1.2 and 2.5 BLEU points on the Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of the pure BPE method, respectively, indicating that it can provide better translation performance for the NMT model.
Approach
We describe two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to assist the NMT model in identifying morpheme boundaries and capturing semantic information effectively. Sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.
Approach ::: Morpheme Segmentation
Words in Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffixes are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation segments a complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy in highly-inflected languages.
Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix
In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.
Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix
In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”.
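The two morpheme segmentation strategies can be sketched as below; in practice the stem/suffix analysis comes from a morphological analyzer such as Zemberek, and the example word is only illustrative.

```python
def segment_scs(stem, suffixes):
    """Stem with Combined Suffix: 'stem##' plus one joined suffix unit ending in '$$'."""
    if not suffixes:
        return [stem + "##"]  # suffix-less words are treated as their stem unit (marker placement is our reading)
    return [stem + "##", "".join(suffixes) + "$$"]

def segment_sss(stem, suffixes):
    """Stem with Singular Suffix: 'stem##' followed by each suffix as 'suffix$$'."""
    if not suffixes:
        return [stem + "##"]
    return [stem + "##"] + [s + "$$" for s in suffixes]

# Illustrative Turkish analysis: 'evlerde' = ev (house) + -ler (plural) + -de (locative)
print(segment_scs("ev", ["ler", "de"]))  # ['ev##', 'lerde$$']
print(segment_sss("ev", ["ler", "de"]))  # ['ev##', 'ler$$', 'de$$']
```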
Approach ::: Byte Pair Encoding (BPE)
BPE BIBREF7 is originally a data compression technique that was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each non-final subword unit of the segmented word.
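For reference, the core BPE merge-learning loop of BIBREF5 can be sketched as follows; in practice the subword-nmt reference implementation is typically used, and the toy vocabulary and merge count here are illustrative.

```python
import re
from collections import Counter

def get_stats(vocab):
    """Count symbol-pair frequencies over a {'w o r d </w>': freq} vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Merge the most frequent symbol pair everywhere in the vocabulary."""
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(10):  # number of merge operations (e.g. 35K on Turkish words in the pure BPE setting)
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
print(vocab)
```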
Approach ::: Morphologically Motivated Segmentation
The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, based on the analysis of the above popular word segmentation methods, we propose morphologically motivated segmentation strategies that combine morpheme segmentation and BPE to further improve the translation performance of NMT.
Compared with the sentence of word surface forms, the corresponding sentence of stem units contains only the structural information without the morphological information, which generalizes better over inflectional variants of the same word and reduces data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than on the words, and then apply it to the stem unit of each word after morpheme segmentation.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix
In this segmentation strategy, firstly we segment each word into a stem unit and a combined suffix unit as SCS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind itself. Otherwise, we add “@@” behind each no-final subword of the segmented stem unit. We denote this method as BPE-SCS.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix
In this segmentation strategy, firstly we segment each word into a stem unit and a sequence of suffix units as SSS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind itself. Otherwise, we add “@@” behind each no-final subword of the segmented stem unit. We denote this method as BPE-SSS.
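A sketch of the BPE-SSS rendering is given below; BPE-SCS is analogous, with the suffixes first joined into a single '$$'-marked unit. The `apply_bpe` callable and the example split are hypothetical placeholders for a BPE model trained only on stem units, and leaving the final stem subword unmarked reflects our reading of the scheme above.

```python
def bpe_sss(stem, suffixes, apply_bpe):
    """BPE on Stem with Singular Suffix (BPE-SSS).
    `apply_bpe` is any callable mapping a stem string to its BPE pieces,
    e.g. a wrapper around a subword model trained only on stem units."""
    pieces = apply_bpe(stem)
    if len(pieces) == 1:
        stem_units = [stem + "##"]  # unsegmented stem keeps the '##' marker
    else:
        # non-final subwords of the segmented stem get '@@'; the final one is left bare
        stem_units = [p + "@@" for p in pieces[:-1]] + [pieces[-1]]
    return stem_units + [s + "$$" for s in suffixes]

# Toy illustration with a hypothetical BPE segmenter and a hypothetical split of 'kitap'
print(bpe_sss("kitap", ["lar", "im"], lambda s: ["ki", "tap"]))
# -> ['ki@@', 'tap', 'lar$$', 'im$$']
```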
Experiments ::: Experimental Setup ::: Turkish-English Data :
Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017.
Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :
We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test.
Experiments ::: Experimental Setup ::: Data Preprocessing :
We utilize Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE on the target-side words, setting the number of merge operations to 35K for Chinese and 30K for English, and we set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively.
Experiments ::: Experimental Setup ::: Number of Merge Operations :
We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We elaborate on the number settings for our proposed word segmentation strategies in this section.
In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.
According to Table 4 and Table 5, we find that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. We therefore propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE on the stem units after morpheme segmentation and thus not only consider the morphological properties but also eliminate the rare and unknown words.
Experiments ::: NMT Configuration
We employ the Transformer model BIBREF13 with the self-attention mechanism as implemented in the Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on validation perplexity. Decoding is performed by beam search with a beam size of 5. To effectively evaluate machine translation quality, we report the case-sensitive BLEU score with standard tokenization and the character n-gram ChrF3 score.
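The scoring step can be reproduced with, for example, sacrebleu; the snippet below is a sketch with toy sentences. The text does not specify which scorer was used, and the chrF parameters should be checked against the ChrF3 definition for the installed sacrebleu version.

```python
import sacrebleu

# Toy detokenized outputs and references; real evaluation uses the newstest/CWMT test sets.
hyps = ["the parliament adopted the new law .", "he visited ankara yesterday ."]
refs = [["the parliament adopted the new law .", "he visited ankara yesterday ."]]

bleu = sacrebleu.corpus_bleu(hyps, refs)   # case-sensitive BLEU with sacrebleu's default tokenization
chrf = sacrebleu.corpus_chrf(hyps, refs)   # character n-gram F-score; verify beta=3 for ChrF3
print(bleu.score, chrf.score)
```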
Results
In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively.
Discussion
According to Table 6 and Table 7, both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method. In particular, the BPE-SSS strategy is better, achieving a significant improvement of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we also find that the improvement from our proposed segmentation strategy is less pronounced on the Turkish-English task than on the Uyghur-Chinese task. The probable reasons are that the Turkish-English training corpus consists of talk and news data, and most of the talk data are short informal sentences that provide less language information for the NMT model than the news data; moreover, the test corpus consists of news data, so the domain mismatch limits the improvement in machine translation quality.
In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K merge operations for Turkish, and 30K or 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary containing effective morpheme units and moderately sized subword units, which generalizes better over the morphologically-rich words.
Related Work
The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability. Many word segmentation methods that consider the morphological properties of different languages have been proposed to cope with the above problems.
Bradbury and Socher BIBREF16 employed a modified Morfessor to incorporate morphology knowledge into word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation method for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, showing that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, but ours is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, while ours does not need any external resources, which is very convenient for word segmentation on the low-resource and morphologically-rich agglutinative languages.
Conclusion
In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it better preserves the syntactic and semantic information of words with complex morphology while also reducing the vocabulary size for model training. Moreover, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.
In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages.
Acknowledgments
This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region. | Unanswerable |
57388bf2693d71eb966d42fa58ab66d7f595e55f | 57388bf2693d71eb966d42fa58ab66d7f595e55f_0 | Q: How is morphology knowledge implemented in the method?
Text: Introduction
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest-frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem and, with it, inaccurate and poor translation results. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information when there are too many rare and unknown words in the training corpus.
Both Turkish and Uyghur are agglutinative and highly-inflected languages in which a word is formed by suffixes attached to a stem BIBREF4. A word consists of smaller morpheme units without any splitter between them, and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is followed by zero to many suffixes, which have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce a very large vocabulary because of the derivational morphology, so when translating from these agglutinative languages, many words are unseen at training time. Moreover, depending on the semantic context, the same word generally has different segmentation forms in the training corpus.
For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:
Stem with combined suffix
Stem with singular suffix
Byte Pair Encoding (BPE)
BPE on stem with combined suffix
BPE on stem with singular suffix
The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve significant improvement of up to 1.2 and 2.5 BLEU points on Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of pure BPE method respectively, indicating that it can provide better translation performance for the NMT model.
Approach
We elaborate on two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to help the NMT model identify the morpheme boundaries and capture the semantic information effectively. Sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.
Approach ::: Morpheme Segmentation
The words of Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffix are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation segments a complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by the inflection and allomorphy phenomena in highly-inflected languages.
Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix
In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.
Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix
In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”.
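As a concrete illustration of the SCS and SSS marking schemes described above, the following is a minimal sketch; the example morpheme analysis and the helper names are ours, not from the paper:

```python
def mark_scs(stem, suffixes):
    """Stem with Combined Suffix: 'stem##' plus one combined 'suffix...$$' unit."""
    if not suffixes:
        return [stem]          # a word without suffixes is treated as its stem unit
    return [stem + "##", "".join(suffixes) + "$$"]

def mark_sss(stem, suffixes):
    """Stem with Singular Suffix: 'stem##' followed by one 'suffix$$' per suffix."""
    if not suffixes:
        return [stem]
    return [stem + "##"] + [s + "$$" for s in suffixes]

# Hypothetical morpheme analysis of a Turkish word: (stem, [suffixes]).
print(mark_scs("ev", ["ler", "imiz"]))   # ['ev##', 'lerimiz$$']
print(mark_sss("ev", ["ler", "imiz"]))   # ['ev##', 'ler$$', 'imiz$$']
```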
Approach ::: Byte Pair Encoding (BPE)
BPE BIBREF7 is originally a data compression technique, and it was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding the rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each non-final subword unit of the segmented word.
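A minimal sketch of the iterative pair-merging idea behind BPE is shown below; it is simplified for illustration and the toy dictionary is hypothetical:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merges from a {word: frequency} dictionary."""
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq          # weight pairs by word frequency
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1]); i += 2
                else:
                    out.append(symbols[i]); i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

print(learn_bpe({"lower": 5, "low": 7, "newest": 3}, num_merges=10))
```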
Approach ::: Morphologically Motivated Segmentation
The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, based on the analysis of the above popular word segmentation methods, we propose a morphologically motivated segmentation strategy that combines morpheme segmentation and BPE to further improve the translation performance of NMT.
Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structural information without the morphological information, which allows better generalization over inflectional variants of the same word and reduces data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than on the words, and then apply it to the stem unit of each word after morpheme segmentation.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix
In this segmentation strategy, we first segment each word into a stem unit and a combined suffix unit as in SCS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SCS.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix
In this segmentation strategy, we first segment each word into a stem unit and a sequence of suffix units as in SSS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SSS.
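The BPE-SSS marking described above can be sketched as follows; `apply_bpe_to_stem` stands in for a BPE model learned on stem units, and the example analysis is hypothetical:

```python
def bpe_sss(stem, suffixes, apply_bpe_to_stem):
    """Apply stem-level BPE, then attach the boundary markers used by BPE-SSS."""
    subwords = apply_bpe_to_stem(stem)
    if len(subwords) == 1:
        marked = [subwords[0] + "##"]                    # unsegmented stem: add '##'
    else:
        # Non-final stem subwords get '@@'; the paper states no extra marker for
        # the final stem subword, so it is left as-is in this sketch.
        marked = [s + "@@" for s in subwords[:-1]] + [subwords[-1]]
    return marked + [s + "$$" for s in suffixes]

# Toy stand-in for a BPE model learned on stems (hypothetical segmentation).
toy_bpe = lambda stem: [stem[:2], stem[2:]] if len(stem) > 3 else [stem]
print(bpe_sss("kitap", ["lar", "im"], toy_bpe))   # ['ki@@', 'tap', 'lar$$', 'im$$']
```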
Experiments ::: Experimental Setup ::: Turkish-English Data :
Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017.
Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :
We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test.
Experiments ::: Experimental Setup ::: Data Preprocessing :
We utilize Zemberek together with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE on the target-side words, setting the number of merge operations to 35K for Chinese and 30K for English, and we set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively.
Experiments ::: Experimental Setup ::: Number of Merge Operations :
We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We elaborate the number settings for our proposed word segmentation strategies in this section.
In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.
According to Table 4 and Table 5, we can see that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. We therefore propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE on the stem units after morpheme segmentation; this not only takes the morphological properties into account but also eliminates the rare and unknown words.
Experiments ::: NMT Configuration
We employ the Transformer model BIBREF13 with self-attention mechanism architecture implemented in Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with beam size of 5. To effectively evaluate the machine translation quality, we report case-sensitive BLEU score with standard tokenization and character n-gram ChrF3 score .
Results
In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively.
Discussion
According to Table 6 and Table 7, both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method. In particular, the BPE-SSS strategy is better, achieving a significant improvement of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we also find that the improvement from our proposed segmentation strategy is less pronounced on the Turkish-English task than on the Uyghur-Chinese task. The probable reasons are that the Turkish-English training corpus consists of talk and news data, and most of the talk data are short informal sentences that provide less language information for the NMT model than the news data; moreover, the test corpus consists of news data, so the domain mismatch limits the improvement in machine translation quality.
In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K merge operations for Turkish, and 30K or 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary containing effective morpheme units and moderately sized subword units, which generalizes better over the morphologically-rich words.
Related Work
The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability. Many word segmentation methods that consider the morphological properties of different languages have been proposed to cope with the above problems.
Bradbury and Socher BIBREF16 employed a modified Morfessor to incorporate morphology knowledge into word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation method for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, showing that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, but ours is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, while ours does not need any external resources, which is very convenient for word segmentation on the low-resource and morphologically-rich agglutinative languages.
Conclusion
In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it better preserves the syntactic and semantic information of words with complex morphology while also reducing the vocabulary size for model training. Moreover, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.
In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages.
Acknowledgments
This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region. | A BPE model is applied to the stem after morpheme segmentation. |
47796c7f0a7de76ccb97ccbd43dc851bb8a613d5 | 47796c7f0a7de76ccb97ccbd43dc851bb8a613d5_0 | Q: How does the word segmentation method work?
Text: Introduction
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest-frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem and, with it, inaccurate and poor translation results. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information when there are too many rare and unknown words in the training corpus.
Both Turkish and Uyghur are agglutinative and highly-inflected languages in which a word is formed by suffixes attached to a stem BIBREF4. A word consists of smaller morpheme units without any splitter between them, and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is followed by zero to many suffixes, which have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce a very large vocabulary because of the derivational morphology, so when translating from these agglutinative languages, many words are unseen at training time. Moreover, depending on the semantic context, the same word generally has different segmentation forms in the training corpus.
For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:
Stem with combined suffix
Stem with singular suffix
Byte Pair Encoding (BPE)
BPE on stem with combined suffix
BPE on stem with singular suffix
The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve significant improvement of up to 1.2 and 2.5 BLEU points on Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of pure BPE method respectively, indicating that it can provide better translation performance for the NMT model.
Approach
We elaborate on two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to help the NMT model identify the morpheme boundaries and capture the semantic information effectively. Sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.
Approach ::: Morpheme Segmentation
The words of Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffix are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation segments a complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by the inflection and allomorphy phenomena in highly-inflected languages.
Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix
In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.
Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix
In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”.
Approach ::: Byte Pair Encoding (BPE)
BPE BIBREF7 is originally a data compression technique, and it was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding the rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each non-final subword unit of the segmented word.
Approach ::: Morphologically Motivated Segmentation
The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, based on the analysis of the above popular word segmentation methods, we propose a morphologically motivated segmentation strategy that combines morpheme segmentation and BPE to further improve the translation performance of NMT.
Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structural information without the morphological information, which allows better generalization over inflectional variants of the same word and reduces data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than on the words, and then apply it to the stem unit of each word after morpheme segmentation.
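One way to realize this is to build a stem-only corpus and learn the BPE model on it; a minimal sketch under assumed input formats (the `morph_analyze` function, returning a (stem, suffixes) pair, is a hypothetical stand-in for the morphological analyzers used in the paper):

```python
def build_stem_corpus(sentences, morph_analyze):
    """Replace each word by its stem so BPE merges are learned on stem units only."""
    stem_corpus = []
    for sent in sentences:
        stems = [morph_analyze(word)[0] for word in sent.split()]
        stem_corpus.append(" ".join(stems))
    return stem_corpus

# A BPE learner (e.g. the subword-nmt toolkit) is then trained on stem_corpus,
# and at segmentation time the learned model is applied only to the stem unit
# of each analyzed word, never to the suffix units.
```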
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix
In this segmentation strategy, we first segment each word into a stem unit and a combined suffix unit as in SCS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SCS.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix
In this segmentation strategy, we first segment each word into a stem unit and a sequence of suffix units as in SSS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SSS.
Experiments ::: Experimental Setup ::: Turkish-English Data :
Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017.
Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :
We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test.
Experiments ::: Experimental Setup ::: Data Preprocessing :
We utilize Zemberek together with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE on the target-side words, setting the number of merge operations to 35K for Chinese and 30K for English, and we set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively.
Experiments ::: Experimental Setup ::: Number of Merge Operations :
We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We elaborate the number settings for our proposed word segmentation strategies in this section.
In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.
According to Table 4 and Table 5, we can see that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. We therefore propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE on the stem units after morpheme segmentation; this not only takes the morphological properties into account but also eliminates the rare and unknown words.
Experiments ::: NMT Configuration
We employ the Transformer model BIBREF13 with self-attention mechanism architecture implemented in Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with beam size of 5. To effectively evaluate the machine translation quality, we report case-sensitive BLEU score with standard tokenization and character n-gram ChrF3 score .
Results
In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively.
Discussion
According to Table 6 and Table 7, both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method. In particular, the BPE-SSS strategy is better, achieving a significant improvement of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we also find that the improvement from our proposed segmentation strategy is less pronounced on the Turkish-English task than on the Uyghur-Chinese task. The probable reasons are that the Turkish-English training corpus consists of talk and news data, and most of the talk data are short informal sentences that provide less language information for the NMT model than the news data; moreover, the test corpus consists of news data, so the domain mismatch limits the improvement in machine translation quality.
In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K merge operations for Turkish, and 30K or 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary containing effective morpheme units and moderately sized subword units, which generalizes better over the morphologically-rich words.
Related Work
The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability. Many word segmentation methods that consider the morphological properties of different languages have been proposed to cope with the above problems.
Bradbury and Socher BIBREF16 employed a modified Morfessor to incorporate morphology knowledge into word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation method for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, showing that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, but ours is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, while ours does not need any external resources, which is very convenient for word segmentation on the low-resource and morphologically-rich agglutinative languages.
Conclusion
In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it better preserves the syntactic and semantic information of words with complex morphology while also reducing the vocabulary size for model training. Moreover, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.
In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages.
Acknowledgments
This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region. | morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5, Zemberek, BIBREF12 |
9d5153a7553b7113716420a6ddceb59f877eb617 | 9d5153a7553b7113716420a6ddceb59f877eb617_0 | Q: Is the word segmentation method independently evaluated?
Text: Introduction
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest-frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem and, with it, inaccurate and poor translation results. Research has indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious because the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information when there are too many rare and unknown words in the training corpus.
Both Turkish and Uyghur are agglutinative and highly-inflected languages in which a word is formed by suffixes attached to a stem BIBREF4. A word consists of smaller morpheme units without any splitter between them, and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is followed by zero to many suffixes, which have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce a very large vocabulary because of the derivational morphology, so when translating from these agglutinative languages, many words are unseen at training time. Moreover, depending on the semantic context, the same word generally has different segmentation forms in the training corpus.
For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:
Stem with combined suffix
Stem with singular suffix
Byte Pair Encoding (BPE)
BPE on stem with combined suffix
BPE on stem with singular suffix
The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method can achieve significant improvement of up to 1.2 and 2.5 BLEU points on Turkish-English and Uyghur-Chinese machine translation tasks over the strong baseline of pure BPE method respectively, indicating that it can provide better translation performance for the NMT model.
Approach
We elaborate on two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to help the NMT model identify the morpheme boundaries and capture the semantic information effectively. Sentence examples with different segmentation strategies for the Turkish-English machine translation task are shown in Table 1.
Approach ::: Morpheme Segmentation
The words of Turkish and Uyghur are formed by a stem followed by an unlimited number of suffixes. Both the stem and the suffix are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation segments a complex word into morpheme units of stem and suffix. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by the inflection and allomorphy phenomena in highly-inflected languages.
Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix
In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” behind the stem unit and add “$$” behind the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as two parts of “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies will follow this rule.
Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix
In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” behind the stem unit and add “$$” behind each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$” until “suffixN$$”.
Approach ::: Byte Pair Encoding (BPE)
BPE BIBREF7 is originally a data compression technique, and it was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding the rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” behind each non-final subword unit of the segmented word.
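To make the encoding step concrete, here is a simplified sketch of applying an ordered list of learned merges to a single word; the toy merges are hypothetical, and real toolkits such as subword-nmt implement this together with the “@@” marker convention mentioned above:

```python
def apply_bpe(word, merges):
    """Greedily apply learned merge operations, then mark non-final units with '@@'."""
    symbols = list(word)
    for a, b in merges:                       # merges are applied in learned order
        i, out = 0, []
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                out.append(a + b); i += 2
            else:
                out.append(symbols[i]); i += 1
        symbols = out
    return [s + "@@" for s in symbols[:-1]] + [symbols[-1]]

print(apply_bpe("lowest", [("l", "o"), ("lo", "w"), ("e", "s"), ("es", "t")]))
# ['low@@', 'est']
```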
Approach ::: Morphologically Motivated Segmentation
The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, based on the analysis of the above popular word segmentation methods, we propose a morphologically motivated segmentation strategy that combines morpheme segmentation and BPE to further improve the translation performance of NMT.
Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structural information without the morphological information, which allows better generalization over inflectional variants of the same word and reduces data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than on the words, and then apply it to the stem unit of each word after morpheme segmentation.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix
In this segmentation strategy, we first segment each word into a stem unit and a combined suffix unit as in SCS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind the combined suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SCS.
Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix
In this segmentation strategy, we first segment each word into a stem unit and a sequence of suffix units as in SSS. Secondly, we apply BPE on the stem unit. Thirdly, we add “$$” behind each singular suffix unit. If the stem unit is not segmented, we add “##” behind it. Otherwise, we add “@@” behind each non-final subword of the segmented stem unit. We denote this method as BPE-SSS.
Experiments ::: Experimental Setup ::: Turkish-English Data :
Following BIBREF9, we use the WIT corpus BIBREF10 and SETimes corpus BIBREF11 for model training, and use the newsdev2016 from Workshop on Machine Translation in 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017.
Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :
We use the news data from China Workshop on Machine Translation in 2017 (CWMT2017) for model training, validation and test.
Experiments ::: Experimental Setup ::: Data Preprocessing :
We utilize Zemberek together with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE on the target-side words, setting the number of merge operations to 35K for Chinese and 30K for English, and we set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively.
Experiments ::: Experimental Setup ::: Number of Merge Operations :
We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS and BPE-SSS segmentation strategies on the same scale. We elaborate the number settings for our proposed word segmentation strategies in this section.
In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, set the number of merge operations on the stem units for BPE-SCS strategy to 15K, and set the number of merge operations on the stem units for BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, set the number of merge operations on the stem units for BPE-SCS strategy to 10K, and set the number of merge operations on the stem units for BPE-SSS strategy to 35K. The detailed training corpus statistics with different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5 respectively.
According to Table 4 and Table 5, we can see that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. We therefore propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE on the stem units after morpheme segmentation; this not only takes the morphological properties into account but also eliminates the rare and unknown words.
Experiments ::: NMT Configuration
We employ the Transformer model BIBREF13 with self-attention mechanism architecture implemented in Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model by using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with beam size of 5. To effectively evaluate the machine translation quality, we report case-sensitive BLEU score with standard tokenization and character n-gram ChrF3 score .
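For reference, the character n-gram ChrF3 score reported above can be approximated with the following simplified sketch (β = 3, n-grams up to 6, whitespace ignored); reported results should of course come from a standard implementation, and this toy version is only for illustration:

```python
from collections import Counter

def chrf(hyp, ref, max_n=6, beta=3.0):
    """Simplified character n-gram F-score (ChrF); whitespace is removed first."""
    hyp, ref = hyp.replace(" ", ""), ref.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        r = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        overlap = sum((h & r).values())              # clipped n-gram matches
        precisions.append(overlap / max(sum(h.values()), 1))
        recalls.append(overlap / max(sum(r.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    return (1 + beta**2) * p * r / (beta**2 * p + r) if p + r > 0 else 0.0

print(chrf("the cat sat", "the cat sat on the mat"))
```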
Results
In this paper, we investigate and compare morpheme segmentation, BPE and our proposed morphological segmentation strategies on the low resource and morphologically-rich agglutinative languages. Experimental results of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7 respectively.
Discussion
According to Table 6 and Table 7, both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong baseline of the pure BPE method. In particular, the BPE-SSS strategy is better, achieving a significant improvement of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we also find that the improvement from our proposed segmentation strategy is less pronounced on the Turkish-English task than on the Uyghur-Chinese task. The probable reasons are that the Turkish-English training corpus consists of talk and news data, and most of the talk data are short informal sentences that provide less language information for the NMT model than the news data; moreover, the test corpus consists of news data, so the domain mismatch limits the improvement in machine translation quality.
In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K merge operations for Turkish, and 30K or 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary containing effective morpheme units and moderately sized subword units, which generalizes better over the morphologically-rich words.
Related Work
The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability. Many word segmentation methods that consider the morphological properties of different languages have been proposed to cope with the above problems.
Bradbury and Socher BIBREF16 employed a modified Morfessor to incorporate morphology knowledge into word segmentation, but they neglected the morphological varieties between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation method for Finnish, which applies BPE on all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, showing that the cascading of suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, but ours is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, while ours does not need any external resources, which is very convenient for word segmentation on the low-resource and morphologically-rich agglutinative languages.
Conclusion
In this paper, we investigate morphological segmentation strategies on the low-resource and morphologically-rich languages of Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it better preserves the syntactic and semantic information of words with complex morphology while also reducing the vocabulary size for model training. Moreover, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects the translation quality, and we find that an appropriate vocabulary size is more useful for the NMT model.
In future work, we are planning to incorporate more linguistic and morphology knowledge into the training process of NMT to enhance its capacity of capturing syntactic structure and semantic information on the low-resource and morphologically-rich languages.
Acknowledgments
This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region. | No |
55c840a2f1f663ab2bff984ae71501b17429d0c0 | 55c840a2f1f663ab2bff984ae71501b17429d0c0_0 | Q: Do they normalize the calculated intermediate output hypotheses to compensate for the incompleteness?
Text: Introduction
In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.
We present this work in the context of Transformer networks BIBREF5. Transformers have become a popular deep learning architecture for modeling sequential datasets, showing improvements in many tasks such as machine translation BIBREF5, language modeling BIBREF6 and autoregressive image generation BIBREF7. In the speech recognition field, Transformers have been proposed to replace recurrent neural network (RNN) architectures such as LSTMs and GRUs BIBREF8. A recent survey of Transformers in many speech related applications may be found in BIBREF9. Compared to RNNs, Transformers have several advantages, specifically an ability to aggregate information across all the time-steps by using a self-attention mechanism. Unlike RNNs, the hidden representations do not need to be computed sequentially across time, thus enabling significant efficiency improvements via parallelization.
In the context of a transformer network, secondary feature analysis is enabled through an additional mid-network transformer module that has access both to previous-layer activations and to the raw features. To implement this model, we apply the objective function several times at the intermediate layers, to encourage the development of phonetically relevant hypotheses. Interestingly, we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself, as well as enabling the secondary feature analysis.
This paper makes two main contributions:
We present improvements in the basic training process of deep transformer networks, specifically the iterated use of CTC or CE in intermediate layers, and
We show that an intermediate-layer attention model with access to both previous-layer activations and raw feature inputs can significantly improve performance.
We evaluate our proposed model on Librispeech and a large-scale video dataset. From our experimental results, we observe 10-20% relative improvement on Librispeech and 3.2-11% on the video dataset.
Transformer Modules
A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.
Assume we have an input sequence that is of length $S$: $X = [x_1,...,x_S]$. Each $x_i$ is itself a vector of activations. A transformer layer encodes $X$ into a corresponding output representation $Z = [z_1,...,z_S]$ as described below.
Transformers are built around the notion of a self-attention mechanism that is used to extract the relevant information for each time-step $s$ from all time-steps $[1..S]$ in the preceding layer. Self attention is defined in terms of a Query, Key, Value triplet $\lbrace {Q}, {K}, {V}\rbrace \in \mathbb {R}^{S \times d_k}$. In self-attention, the queries, keys and values are the columns of the input itself, $[x_1,...,x_S]$. The output activations are computed as:
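Following the standard formulation of BIBREF5, which the Query/Key/Value definitions above mirror, this is

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top }}{\sqrt{d_k}}\right) V$$

with the softmax taken over the key positions.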
Transformer modules deploy a multi-headed version of self-attention. As described in BIBREF5, this is done by linearly projecting the queries, keys and values $P$ times with different, learned linear projections. Self-attention is then applied to each of these projected versions of Queries, Keys and Values. These are concatenated and once again projected, resulting in the final values. We refer to the input projection matrices as $W_p^{Q}, W_p^{K}, W_p^{V}$, and to the output projection as $W_O$. Multihead attention is implemented as
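In the standard notation of BIBREF5, with the projection matrices as defined below, this is

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots , \mathrm{head}_P)\, W_O, \qquad \mathrm{head}_p = \mathrm{Attention}(Q W_p^{Q}, K W_p^{K}, V W_p^{V})$$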
Here, $ W_p^Q, W_p^K, W_p^V \in \mathbb {R}^{d_{k} \times d_m}$, $d_m = d_{k} / P$, and $W_O \in \mathbb {R}^{Pd_m \times d_k}$.
After self-attention, a transformer module applies a series of linear, ReLU, layer-norm, and dropout operations, as well as residual connections. The full sequence of processing is illustrated in Figure FIGREF3.
Iterated Feature Presentation
In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6. Here, we have used a 24 layer network, with feature re-presentation after the 12th layer.
In the following subsections, we provide detail on the feature re-presentation mechanism, and iterated loss calculation.
Iterated Feature Presentation ::: Feature Re-Presentation
We process the features at the intermediate layer by concatenating a projection of the original features with a projection of the previous hidden layer activations, and then applying self-attention.
First, we project both the input and intermediate-layer features $(Z_0 \in \mathbb {R}^{S \times d_0}, Z_{k} \in \mathbb {R}^{S \times d_{k}} )$, apply layer normalization, and concatenate with the position encoding:
where $d_0$ is the input feature dimension, $d_k$ is the Transformer output dimension, $W_1 \in \mathbb {R}^{d_0 \times d_c}, W_2 \in \mathbb {R}^{d_{k} \times d_c}$ and $E \in \mathbb {R}^{S \times d_{e}}$ is a sinusoidal position encoding BIBREF5.
After we project both information sources to the same dimensionality, we merge the information by using time-axis concatenation:
Then, we extract relevant features with an extra Transformer layer, followed by a linear projection and ReLU:
where $W_3 \in \mathbb {R}^{d_{k+1}^{^{\prime }} \times d_{k+1}}$ is a linear projection. All biases in the formula above are omitted for simplicity.
Note that in doing time-axis concatenation, our Key and Value sequences are twice as long as the original input. In the standard self-attention where the Query is the same as the Key and Value, the output preserves the sequence length. Therefore, in order to maintain the necessary sequence length $S$, we select either the first half (split A) or the second half (split B) to represent the combined information. The difference between the two is that split A uses the projected input features as the Query set, while split B uses the projected higher-level activations as the Query. In initial experiments, we found that the use of high-level features (split B) as queries is preferable. We illustrate this operation in Figure FIGREF11.
Another way of combining information from the features with an intermediate layer is to concatenate the two along the feature axis rather than the time axis. However, in initial experiments we found that time-axis concatenation produces better results, and we focus on that in the experimental results.
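A minimal PyTorch-style sketch of the re-presentation block described above is given below. The class and variable names are illustrative, a stock nn.TransformerEncoderLayer stands in for the authors' in-house fairseq modules, and dropout and masking details are omitted, so this should be read as an approximation of the description rather than the actual implementation.

```python
import torch
import torch.nn as nn

class FeatureRePresentation(nn.Module):
    """Sketch of the mid-network feature re-presentation block (split B)."""

    def __init__(self, d_0=80, d_k=512, d_c=768, d_e=256, n_heads=8):
        super().__init__()
        self.proj_feat = nn.Linear(d_0, d_c)   # W_1: project raw features
        self.proj_hid = nn.Linear(d_k, d_c)    # W_2: project layer-k activations
        self.norm_feat = nn.LayerNorm(d_c)
        self.norm_hid = nn.LayerNorm(d_c)
        d_model = d_c + d_e                    # after concatenating position encoding E
        self.attend = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                                 dim_feedforward=2048)
        self.out = nn.Linear(d_model, d_k)     # W_3, followed by ReLU

    def forward(self, feats, hidden, pos_enc):
        # feats: (S, B, d_0), hidden: (S, B, d_k), pos_enc: (S, B, d_e)
        a = torch.cat([self.norm_feat(self.proj_feat(feats)), pos_enc], dim=-1)
        b = torch.cat([self.norm_hid(self.proj_hid(hidden)), pos_enc], dim=-1)
        merged = torch.cat([a, b], dim=0)      # time-axis concatenation: length 2S
        z = self.attend(merged)                # self-attention over all 2S positions
        z = z[a.size(0):]                      # keep the second half (split B)
        return torch.relu(self.out(z))
```

Selecting the second half of the concatenated sequence corresponds to split B, i.e. using the projected high-level activations as the queries.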
Iterated Feature Presentation ::: Iterated Loss
We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \lbrace k_1, k_2, ..., k_L\rbrace \subseteq \lbrace 1,..,M-1\rbrace $. The total objective function is then defined as
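One plausible rendering, in which the per-layer softmax projections are left implicit and the relative weighting of the auxiliary terms is an assumption, is

$$\mathcal {L}_{\mathrm {total}} = \operatorname{Loss}(Z_{M}, Y) + \lambda \sum _{l=1}^{L} \operatorname{Loss}(Z_{k_l}, Y)$$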
where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\lambda $ scales the auxiliary loss and we set $\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6.
Experimental results ::: Dataset
We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a large-scale English video dataset. In the Librispeech training set, there are three splits, containing 100 and 360 hours sets of clean speech and 500 hours of other speech. We combined everything, resulting in 960 hours of training data. For the development set, there are also two splits: dev-clean and dev-other. For the test set, there is an analogous split.
The video dataset is a collection of public and anonymized English videos. It consists of a 1000 hour training set, a 9 hour dev set, and a $46.1$ hour test set. The test set comprises an $8.5$ hour curated set of carefully selected very clean videos, a 19 hour clean set and a $18.6$ hour noisy set BIBREF13. For the hybrid ASR experiments on video dataset, alignments were generated with a production system trained with 14k hours.
All speech features are extracted by using log Mel-filterbanks with 80 dimensions, a 25 ms window size and a 10 ms time step between two windows. Then we apply mean and variance normalization.
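A front end with these settings can be sketched with torchaudio's Kaldi-compatible filterbank routine; the per-utterance normalization and the placeholder file path are assumptions, since the exact pipeline is not specified.

```python
import torchaudio
import torchaudio.compliance.kaldi as kaldi

def extract_features(wav_path):
    """80-dim log Mel-filterbank features, 25 ms window, 10 ms shift,
    followed by per-utterance mean and variance normalization."""
    waveform, sample_rate = torchaudio.load(wav_path)
    fbank = kaldi.fbank(
        waveform,
        num_mel_bins=80,
        frame_length=25.0,      # milliseconds
        frame_shift=10.0,       # milliseconds
        sample_frequency=sample_rate,
    )                           # shape: (num_frames, 80)
    return (fbank - fbank.mean(dim=0)) / (fbank.std(dim=0) + 1e-8)
```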
Experimental results ::: Target Units
For CTC training, we use word-pieces as our target. During training, the reference is tokenized into 5000 sub-word units using sentencepiece with a uni-gram language model BIBREF14. Neural networks are thus used to produce a posterior distribution over 5001 symbols (5000 sub-word units plus the blank symbol) every frame. For decoding, each sub-word is modeled by an HMM with two states where the last states share the same blank symbol probability; the best sub-word segmentation of each word is used to form a lexicon; these HMMs and the lexicon are then combined with a standard $n$-gram language model via FST BIBREF15 to form a static decoding graph. The Kaldi decoder BIBREF16 is used to produce the best hypothesis.
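The tokenization step can be reproduced roughly as follows with the sentencepiece Python API; the file names are placeholders, and the CTC blank is an extra symbol added on top of the 5000 learned pieces.

```python
import sentencepiece as spm

# Train a 5000-piece unigram model on the training transcripts.
spm.SentencePieceTrainer.train(
    input="train_transcripts.txt",   # placeholder path
    model_prefix="wp5k",
    vocab_size=5000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="wp5k.model")
pieces = sp.encode("nice to meet you", out_type=str)
ids = sp.encode("nice to meet you")  # CTC targets; the blank is the extra 5001st symbol
```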
We further present results with hybrid ASR systems. In this, we use the same HMM topology, GMM bootstrapping and decision tree building procedure as BIBREF13. Specifically, we use context-dependent (CD) graphemes as modeling units. On top of alignments from a GMM model, we build a decision tree to cluster CD graphemes. This results in 7248 context dependent units for Librispeech, and 6560 units for the video dataset. Training then proceeds with the CE loss function. We also apply SpecAugment BIBREF17 online during training, using the LD policy without time warping. For decoding, a standard Kaldi's WFST decoder BIBREF16 is used.
Experimental results ::: Deep Transformer Acoustic Model
All neural networks are implemented with the in-house extension of the fairseq BIBREF18 toolkit. Our speech features are produced by processing the log Mel-spectrogram with two VGG BIBREF19 layers that have the following configurations: (1) two 2-D convolutions with 32 output filters, kernel=3, stride=1, ReLU activation, and max-pooling kernel=2, (2) two 2-D convolutions with 64 output filters, kernel=3, stride=1 and max-pooling kernel=2 for CTC or max-pooling kernel=1 for hybrid. After the VGG layers, the total number of frames is subsampled by (i) 4x for CTC, or (ii) 2x for hybrid, thus enabling us to reduce the run-time and memory usage significantly. After VGG processing, we use 24 Transformer layers with $d_k=512$ head dimensions (8 heads, each head has 64 dimensions), 2048 feedforward hidden dimensions (roughly 80 million parameters in total), and dropout $0.15$. For the proposed models, we utilized an auxiliary MLP with two linear layers of 256 hidden units, LeakyReLU activation, and softmax (see Sec. SECREF3). We set our position encoding dimensions $d_e=256$ and pre-concatenation projection $d_c=768$ for the feature re-presentation layer. The loss function is either CTC loss or hybrid CE loss.
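The VGG front end described above can be sketched as follows; padding is not specified in the text and is assumed to be 1, and the second pooling kernel switches between the CTC (4x subsampling) and hybrid (2x subsampling) configurations.

```python
import torch.nn as nn

def vgg_frontend(for_ctc=True):
    """Two VGG blocks: 32 then 64 output filters, kernel 3, stride 1."""
    pool2 = 2 if for_ctc else 1      # kernel=1 pooling is a no-op for the hybrid model
    return nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),
        nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=pool2),
    )
```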
Experimental results ::: Results
Table TABREF19 presents CTC based results for the Librispeech dataset, without data augmentation. Our baseline is a 24 layer Transformer network trained with CTC. For the proposed method, we varied the number and placement of iterated loss and the feature re-presentation. The next three results show the effect of using CTC multiple times. We see 12 and 8% relative improvements for test-clean and test-other. Adding feature re-presentation gives a further boost, with net 20 and 18% relative improvements over the baseline.
Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains, first from the iterated loss and then from multiple feature presentation. We also run additional CTC experiments with a 36-layer Transformer (roughly 120 million parameters in total). The 36-layer baseline has the same performance as the 24-layer one, but with the proposed methods added, the 36-layer model improves to give the best results. This shows that our proposed methods can improve even very deep models.
As shown in Table TABREF21, the proposed methods also provide large performance improvements on the curated video set, up to 13% with CTC, and up to 9% with the hybrid model. We also observe moderate gains of between 3.2 and 8% relative on the clean and noisy video sets.
Related Work
In recent years, Transformer models have become an active research topic in speech processing. The key feature of Transformer networks is self-attention, which produces performance comparable to or better than LSTMs when used for encoder-decoder based ASR BIBREF23, as well as when trained with CTC BIBREF9. Speech-Transformers BIBREF24 also produce performance comparable to the LSTM-based attention model, but with higher training speed on a single GPU. Abdelrahman et al. BIBREF8 integrate a convolution layer to capture audio context and reduce WER on Librispeech.
The use of an objective function in intermediate layers has been found useful in several previous works, such as image classification BIBREF25 and language modeling BIBREF26. In BIBREF27, the authors pre-trained an RNN-T based model by using a hierarchical CTC criterion with different target units. In this paper, we do not need additional types of target units; instead, we use the same tokenization and targets for both the intermediate and final losses.
The application of the objective function to intermediate layers is also similar in spirit to the use of KL-divergence in BIBREF28, which estimates output posteriors at an intermediate layer and regularizes them towards the distributions at the final layer. In contrast to this approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution is meaningful.
Conclusion
In this paper, we have proposed a method for re-processing the input features in light of the information available at an intermediate network layer. We do this in the context of deep transformer networks, via a self-attention mechanism on both features and hidden states representation. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as the output layer. This improves performance in and of itself, and when combined with feature re-presentation we observe consistent relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos. | Unanswerable |
fa5357c56ea80a21a7ca88a80f21711c5431042c | fa5357c56ea80a21a7ca88a80f21711c5431042c_0 | Q: How many layers do they use in their best performing network?
Text: Introduction
In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.
We present this work in the context of Transformer networks BIBREF5. Transformers have become a popular deep learning architecture for modeling sequential datasets, showing improvements in many tasks such as machine translation BIBREF5, language modeling BIBREF6 and autoregressive image generation BIBREF7. In the speech recognition field, Transformers have been proposed to replace recurrent neural network (RNN) architectures such as LSTMs and GRUs BIBREF8. A recent survey of Transformers in many speech related applications may be found in BIBREF9. Compared to RNNs, Transformers have several advantages, specifically an ability to aggregate information across all the time-steps by using a self-attention mechanism. Unlike RNNs, the hidden representations do not need to be computed sequentially across time, thus enabling significant efficiency improvements via parallelization.
In the context of a Transformer network, secondary feature analysis is enabled through an additional mid-network Transformer module that has access both to previous-layer activations and the raw features. To implement this model, we apply the objective function several times at the intermediate layers to encourage the development of phonetically relevant hypotheses. Interestingly, we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself, as well as enabling the secondary feature analysis.
This paper makes two main contributions:
We present improvements in the basic training process of deep transformer networks, specifically the iterated use of CTC or CE in intermediate layers, and
We show that an intermediate-layer attention model with access to both previous-layer activations and raw feature inputs can significantly improve performance.
We evaluate our proposed model on Librispeech and a large-scale video dataset. From our experimental results, we observe 10-20% relative improvement on Librispeech and 3.2-11% on the video dataset.
Transformer Modules
A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.
Assume we have an input sequence that is of length $S$: $X = [x_1,...,x_S]$. Each $x_i$ is itself a vector of activations. A transformer layer encodes $X$ into a corresponding output representation $Z = [z_1,...,z_S]$ as described below.
Transformers are built around the notion of a self-attention mechanism that is used to extract the relevant information for each time-step $s$ from all time-steps $[1..S]$ in the preceding layer. Self attention is defined in terms of a Query, Key, Value triplet $\lbrace {Q}, {K}, {V}\rbrace \in \mathbb {R}^{S \times d_k}$. In self-attention, the queries, keys and values are the columns of the input itself, $[x_1,...,x_S]$. The output activations are computed as:
Transformer modules deploy a multi-headed version of self-attention. As described in BIBREF5, this is done by linearly projecting the queries, keys and values $P$ times with different, learned linear projections. Self-attention is then applied to each of these projected versions of Queries, Keys and Values. These are concatenated and once again projected, resulting in the final values. We refer to the input projection matrices as $W_p^{Q}, W_p^{K}, W_p^{V}$, and to the output projection as $W_O$. Multihead attention is implemented as
Here, $ W_p^Q, W_p^K, W_p^V \in \mathbb {R}^{d_{k} \times d_m}$, $d_m = d_{k} / P$, and $W_O \in \mathbb {R}^{Pd_m \times d_k}$.
After self-attention, a transformer module applies a series of linear, ReLU, layer-norm, and dropout operations, as well as residual connections. The full sequence of processing is illustrated in Figure FIGREF3.
Iterated Feature Presentation
In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6. Here, we have used a 24 layer network, with feature re-presentation after the 12th layer.
In the following subsections, we provide detail on the feature re-presentation mechanism, and iterated loss calculation.
Iterated Feature Presentation ::: Feature Re-Presentation
We process the features at the intermediate layer by concatenating a projection of the original features with a projection of the previous hidden layer activations, and then applying self-attention.
First, we project both the input and intermediate-layer features $(Z_0 \in \mathbb {R}^{S \times d_0}, Z_{k} \in \mathbb {R}^{S \times d_{k}} )$, apply layer normalization, and concatenate with the position encoding:
where $d_0$ is the input feature dimension, $d_k$ is the Transformer output dimension, $W_1 \in \mathbb {R}^{d_0 \times d_c}, W_2 \in \mathbb {R}^{d_{k} \times d_c}$ and $E \in \mathbb {R}^{S \times d_{e}}$ is a sinusoidal position encoding BIBREF5.
After we project both information sources to the same dimensionality, we merge the information by using time-axis concatenation:
Then, we extract relevant features with an extra Transformer layer, followed by a linear projection and ReLU:
where $W_3 \in \mathbb {R}^{d_{k+1}^{^{\prime }} \times d_{k+1}}$ is a linear projection. All biases in the formula above are omitted for simplicity.
Note that in doing time-axis concatenation, our Key and Value sequences are twice as long as the original input. In the standard self-attention where the Query is the same as the Key and Value, the output preserves the sequence length. Therefore, in order to maintain the necessary sequence length $S$, we select either the first half (split A) or the second half (split B) to represent the combined information. The difference between the two is that split A uses the projected input features as the Query set, while split B uses the projected higher-level activations as the Query. In initial experiments, we found that the use of high-level features (split B) as queries is preferable. We illustrate this operation in Figure FIGREF11.
Another way of combining information from the features with an intermediate layer is to concatenate the two along the feature axis rather than the time axis. However, in initial experiments we found that time-axis concatenation produces better results, and we focus on that in the experimental results.
Iterated Feature Presentation ::: Iterated Loss
We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \lbrace k_1, k_2, ..., k_L\rbrace \subseteq \lbrace 1,..,M-1\rbrace $. The total objective function is then defined as
where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\lambda $ scales the auxiliary loss and we set $\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6.
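For the CTC case, the combined objective can be sketched as below; the auxiliary log-probabilities are assumed to come from the per-layer MLPs, and the exact reduction over auxiliary layers is an assumption rather than something stated in the text.

```python
import torch.nn.functional as F

def iterated_ctc_loss(final_log_probs, aux_log_probs, targets,
                      input_lengths, target_lengths, lam=0.3):
    """Final-layer CTC loss plus lambda-scaled CTC losses from the
    intermediate layers in K (one log-prob tensor per layer k_l)."""
    loss = F.ctc_loss(final_log_probs, targets, input_lengths, target_lengths)
    for log_probs in aux_log_probs:
        loss = loss + lam * F.ctc_loss(log_probs, targets,
                                       input_lengths, target_lengths)
    return loss
```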
Experimental results ::: Dataset
We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a large-scale English video dataset. In the Librispeech training set, there are three splits, containing 100 and 360 hours sets of clean speech and 500 hours of other speech. We combined everything, resulting in 960 hours of training data. For the development set, there are also two splits: dev-clean and dev-other. For the test set, there is an analogous split.
The video dataset is a collection of public and anonymized English videos. It consists of a 1000 hour training set, a 9 hour dev set, and a $46.1$ hour test set. The test set comprises an $8.5$ hour curated set of carefully selected very clean videos, a 19 hour clean set and a $18.6$ hour noisy set BIBREF13. For the hybrid ASR experiments on video dataset, alignments were generated with a production system trained with 14k hours.
All speech features are extracted by using log Mel-filterbanks with 80 dimensions, a 25 ms window size and a 10 ms time step between two windows. Then we apply mean and variance normalization.
Experimental results ::: Target Units
For CTC training, we use word-pieces as our target. During training, the reference is tokenized into 5000 sub-word units using sentencepiece with a uni-gram language model BIBREF14. Neural networks are thus used to produce a posterior distribution over 5001 symbols (5000 sub-word units plus the blank symbol) every frame. For decoding, each sub-word is modeled by an HMM with two states where the last states share the same blank symbol probability; the best sub-word segmentation of each word is used to form a lexicon; these HMMs and the lexicon are then combined with a standard $n$-gram language model via FST BIBREF15 to form a static decoding graph. The Kaldi decoder BIBREF16 is used to produce the best hypothesis.
We further present results with hybrid ASR systems. In this, we use the same HMM topology, GMM bootstrapping and decision tree building procedure as BIBREF13. Specifically, we use context-dependent (CD) graphemes as modeling units. On top of alignments from a GMM model, we build a decision tree to cluster CD graphemes. This results in 7248 context dependent units for Librispeech, and 6560 units for the video dataset. Training then proceeds with the CE loss function. We also apply SpecAugment BIBREF17 online during training, using the LD policy without time warping. For decoding, a standard Kaldi's WFST decoder BIBREF16 is used.
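Online SpecAugment without time warping can be approximated with torchaudio's masking transforms; the mask parameters below are the LD-policy values from the SpecAugment paper and are assumptions here, not numbers reported in this work.

```python
import torchaudio.transforms as T

freq_mask = T.FrequencyMasking(freq_mask_param=27)
time_mask = T.TimeMasking(time_mask_param=100)

def specaugment_ld(spec):
    """spec: (..., n_mels, time); two frequency masks and two time masks."""
    spec = freq_mask(freq_mask(spec))
    spec = time_mask(time_mask(spec))
    return spec
```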
Experimental results ::: Deep Transformer Acoustic Model
All neural networks are implemented with the in-house extension of the fairseq BIBREF18 toolkit. Our speech features are produced by processing the log Mel-spectrogram with two VGG BIBREF19 layers that have the following configurations: (1) two 2-D convolutions with 32 output filters, kernel=3, stride=1, ReLU activation, and max-pooling kernel=2, (2) two 2-D convolutions with 64 output filters, kernel=3, stride=1 and max-pooling kernel=2 for CTC or max-pooling kernel=1 for hybrid. After the VGG layers, the total number of frames is subsampled by (i) 4x for CTC, or (ii) 2x for hybrid, thus enabling us to reduce the run-time and memory usage significantly. After VGG processing, we use 24 Transformer layers with $d_k=512$ head dimensions (8 heads, each head has 64 dimensions), 2048 feedforward hidden dimensions (roughly 80 million parameters in total), and dropout $0.15$. For the proposed models, we utilized an auxiliary MLP with two linear layers of 256 hidden units, LeakyReLU activation, and softmax (see Sec. SECREF3). We set our position encoding dimensions $d_e=256$ and pre-concatenation projection $d_c=768$ for the feature re-presentation layer. The loss function is either CTC loss or hybrid CE loss.
Experimental results ::: Results
Table TABREF19 presents CTC based results for the Librispeech dataset, without data augmentation. Our baseline is a 24 layer Transformer network trained with CTC. For the proposed method, we varied the number and placement of iterated loss and the feature re-presentation. The next three results show the effect of using CTC multiple times. We see 12 and 8% relative improvements for test-clean and test-other. Adding feature re-presentation gives a further boost, with net 20 and 18% relative improvements over the baseline.
Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains, first from the iterated loss and then from multiple feature presentation. We also run additional CTC experiments with a 36-layer Transformer (roughly 120 million parameters in total). The 36-layer baseline has the same performance as the 24-layer one, but with the proposed methods added, the 36-layer model improves to give the best results. This shows that our proposed methods can improve even very deep models.
As shown in Table TABREF21, the proposed methods also provide large performance improvements on the curated video set, up to 13% with CTC, and up to 9% with the hybrid model. We also observe moderate gains of between 3.2 and 8% relative on the clean and noisy video sets.
Related Work
In recent years, Transformer models have become an active research topic in speech processing. The key feature of Transformer networks is self-attention, which produces performance comparable to or better than LSTMs when used for encoder-decoder based ASR BIBREF23, as well as when trained with CTC BIBREF9. Speech-Transformers BIBREF24 also produce performance comparable to the LSTM-based attention model, but with higher training speed on a single GPU. Abdelrahman et al. BIBREF8 integrate a convolution layer to capture audio context and reduce WER on Librispeech.
The use of an objective function in intermediate layers has been found useful in several previous works, such as image classification BIBREF25 and language modeling BIBREF26. In BIBREF27, the authors pre-trained an RNN-T based model by using a hierarchical CTC criterion with different target units. In this paper, we do not need additional types of target units; instead, we use the same tokenization and targets for both the intermediate and final losses.
The application of the objective function to intermediate layers is also similar in spirit to the use of KL-divergence in BIBREF28, which estimates output posteriors at an intermediate layer and regularizes them towards the distributions at the final layer. In contrast to this approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution is meaningful.
Conclusion
In this paper, we have proposed a method for re-processing the input features in light of the information available at an intermediate network layer. We do this in the context of deep transformer networks, via a self-attention mechanism on both features and hidden states representation. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as the output layer. This improves performance in and of itself, and when combined with feature re-presentation we observe consistent relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos. | 36 |
35915166ab2fd3d39c0297c427d4ac00e8083066 | 35915166ab2fd3d39c0297c427d4ac00e8083066_0 | Q: Do they just sum up all the losses they calculate to end up with one single loss?
Text: Introduction
In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.
We present this work in the context of Transformer networks BIBREF5. Transformers have become a popular deep learning architecture for modeling sequential datasets, showing improvements in many tasks such as machine translation BIBREF5, language modeling BIBREF6 and autoregressive image generation BIBREF7. In the speech recognition field, Transformers have been proposed to replace recurrent neural network (RNN) architectures such as LSTMs and GRUs BIBREF8. A recent survey of Transformers in many speech related applications may be found in BIBREF9. Compared to RNNs, Transformers have several advantages, specifically an ability to aggregate information across all the time-steps by using a self-attention mechanism. Unlike RNNs, the hidden representations do not need to be computed sequentially across time, thus enabling significant efficiency improvements via parallelization.
In the context of a Transformer network, secondary feature analysis is enabled through an additional mid-network Transformer module that has access both to previous-layer activations and the raw features. To implement this model, we apply the objective function several times at the intermediate layers to encourage the development of phonetically relevant hypotheses. Interestingly, we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself, as well as enabling the secondary feature analysis.
This paper makes two main contributions:
We present improvements in the basic training process of deep transformer networks, specifically the iterated use of CTC or CE in intermediate layers, and
We show that an intermediate-layer attention model with access to both previous-layer activations and raw feature inputs can significantly improve performance.
We evaluate our proposed model on Librispeech and a large-scale video dataset. From our experimental results, we observe 10-20% relative improvement on Librispeech and 3.2-11% on the video dataset.
Transformer Modules
A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.
Assume we have an input sequence that is of length $S$: $X = [x_1,...,x_S]$. Each $x_i$ is itself a vector of activations. A transformer layer encodes $X$ into a corresponding output representation $Z = [z_1,...,z_S]$ as described below.
Transformers are built around the notion of a self-attention mechanism that is used to extract the relevant information for each time-step $s$ from all time-steps $[1..S]$ in the preceding layer. Self attention is defined in terms of a Query, Key, Value triplet $\lbrace {Q}, {K}, {V}\rbrace \in \mathbb {R}^{S \times d_k}$. In self-attention, the queries, keys and values are the columns of the input itself, $[x_1,...,x_S]$. The output activations are computed as:
Transformer modules deploy a multi-headed version of self-attention. As described in BIBREF5, this is done by linearly projecting the queries, keys and values $P$ times with different, learned linear projections. Self-attention is then applied to each of these projected versions of Queries, Keys and Values. These are concatenated and once again projected, resulting in the final values. We refer to the input projection matrices as $W_p^{Q}, W_p^{K}, W_p^{V}$, and to the output projection as $W_O$. Multihead attention is implemented as
Here, $ W_p^Q, W_p^K, W_p^V \in \mathbb {R}^{d_{k} \times d_m}$, $d_m = d_{k} / P$, and $W_O \in \mathbb {R}^{Pd_m \times d_k}$.
After self-attention, a transformer module applies a series of linear, ReLU, layer-norm, and dropout operations, as well as residual connections. The full sequence of processing is illustrated in Figure FIGREF3.
Iterated Feature Presentation
In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6. Here, we have used a 24 layer network, with feature re-presentation after the 12th layer.
In the following subsections, we provide detail on the feature re-presentation mechanism, and iterated loss calculation.
Iterated Feature Presentation ::: Feature Re-Presentation
We process the features at the intermediate layer by concatenating a projection of the original features with a projection of the previous hidden layer activations, and then applying self-attention.
First, we project both the input and intermediate-layer features $(Z_0 \in \mathbb {R}^{S \times d_0}, Z_{k} \in \mathbb {R}^{S \times d_{k}} )$, apply layer normalization, and concatenate with the position encoding:
where $d_0$ is the input feature dimension, $d_k$ is the Transformer output dimension, $W_1 \in \mathbb {R}^{d_0 \times d_c}, W_2 \in \mathbb {R}^{d_{k} \times d_c}$ and $E \in \mathbb {R}^{S \times d_{e}}$ is a sinusoidal position encoding BIBREF5.
After we project both information sources to the same dimensionality, we merge the information by using time-axis concatenation:
Then, we extract relevant features with an extra Transformer layer, followed by a linear projection and ReLU:
where $W_3 \in \mathbb {R}^{d_{k+1}^{^{\prime }} \times d_{k+1}}$ is a linear projection. All biases in the formula above are omitted for simplicity.
Note that in doing time-axis concatenation, our Key and Value sequences are twice as long as the original input. In the standard self-attention where the Query is the same as the Key and Value, the output preserves the sequence length. Therefore, in order to maintain the necessary sequence length $S$, we select either the first half (split A) or the second half (split B) to represent the combined information. The difference between the two is that split A uses the projected input features as the Query set, while split B uses the projected higher-level activations as the Query. In initial experiments, we found that the use of high-level features (split B) as queries is preferable. We illustrate this operation in Figure FIGREF11.
Another way of combining information from the features with an intermediate layer is to concatenate the two along the feature axis rather than the time axis. However, in initial experiments we found that time-axis concatenation produces better results, and we focus on that in the experimental results.
Iterated Feature Presentation ::: Iterated Loss
We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \lbrace k_1, k_2, ..., k_L\rbrace \subseteq \lbrace 1,..,M-1\rbrace $. The total objective function is then defined as
where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\lambda $ scales the auxiliary loss and we set $\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6.
Experimental results ::: Dataset
We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a large-scale English video dataset. In the Librispeech training set, there are three splits, containing 100 and 360 hours sets of clean speech and 500 hours of other speech. We combined everything, resulting in 960 hours of training data. For the development set, there are also two splits: dev-clean and dev-other. For the test set, there is an analogous split.
The video dataset is a collection of public and anonymized English videos. It consists of a 1000 hour training set, a 9 hour dev set, and a $46.1$ hour test set. The test set comprises an $8.5$ hour curated set of carefully selected very clean videos, a 19 hour clean set and a $18.6$ hour noisy set BIBREF13. For the hybrid ASR experiments on video dataset, alignments were generated with a production system trained with 14k hours.
All speech features are extracted by using log Mel-filterbanks with 80 dimensions, a 25 ms window size and a 10 ms time step between two windows. Then we apply mean and variance normalization.
Experimental results ::: Target Units
For CTC training, we use word-pieces as our target. During training, the reference is tokenized into 5000 sub-word units using sentencepiece with a uni-gram language model BIBREF14. Neural networks are thus used to produce a posterior distribution over 5001 symbols (5000 sub-word units plus the blank symbol) every frame. For decoding, each sub-word is modeled by an HMM with two states where the last states share the same blank symbol probability; the best sub-word segmentation of each word is used to form a lexicon; these HMMs and the lexicon are then combined with a standard $n$-gram language model via FST BIBREF15 to form a static decoding graph. The Kaldi decoder BIBREF16 is used to produce the best hypothesis.
We further present results with hybrid ASR systems. In this, we use the same HMM topology, GMM bootstrapping and decision tree building procedure as BIBREF13. Specifically, we use context-dependent (CD) graphemes as modeling units. On top of alignments from a GMM model, we build a decision tree to cluster CD graphemes. This results in 7248 context dependent units for Librispeech, and 6560 units for the video dataset. Training then proceeds with the CE loss function. We also apply SpecAugment BIBREF17 online during training, using the LD policy without time warping. For decoding, a standard Kaldi's WFST decoder BIBREF16 is used.
Experimental results ::: Deep Transformer Acoustic Model
All neural networks are implemented with the in-house extension of the fairseq BIBREF18 toolkit. Our speech features are produced by processing the log Mel-spectrogram with two VGG BIBREF19 layers that have the following configurations: (1) two 2-D convolutions with 32 output filters, kernel=3, stride=1, ReLU activation, and max-pooling kernel=2, (2) two 2-D convolutions with 64 output filters, kernel=3, stride=1 and max-pooling kernel=2 for CTC or max-pooling kernel=1 for hybrid. After the VGG layers, the total number of frames is subsampled by (i) 4x for CTC, or (ii) 2x for hybrid, thus enabling us to reduce the run-time and memory usage significantly. After VGG processing, we use 24 Transformer layers with $d_k=512$ head dimensions (8 heads, each head has 64 dimensions), 2048 feedforward hidden dimensions (roughly 80 million parameters in total), and dropout $0.15$. For the proposed models, we utilized an auxiliary MLP with two linear layers of 256 hidden units, LeakyReLU activation, and softmax (see Sec. SECREF3). We set our position encoding dimensions $d_e=256$ and pre-concatenation projection $d_c=768$ for the feature re-presentation layer. The loss function is either CTC loss or hybrid CE loss.
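A stock-PyTorch approximation of this encoder configuration is sketched below; the real system is an in-house fairseq extension, so layer internals (e.g. normalization placement) may differ, and the auxiliary head is shown for the CTC output size of 5001 (it would be 7248 or 6560 context-dependent units for the hybrid systems).

```python
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8,
                                           dim_feedforward=2048, dropout=0.15)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=24)

aux_head = nn.Sequential(          # auxiliary MLP attached at intermediate layers
    nn.Linear(512, 256),
    nn.LeakyReLU(),
    nn.Linear(256, 5001),          # 5000 word-pieces + CTC blank
    nn.LogSoftmax(dim=-1),
)
```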
Experimental results ::: Results
Table TABREF19 presents CTC based results for the Librispeech dataset, without data augmentation. Our baseline is a 24 layer Transformer network trained with CTC. For the proposed method, we varied the number and placement of iterated loss and the feature re-presentation. The next three results show the effect of using CTC multiple times. We see 12 and 8% relative improvements for test-clean and test-other. Adding feature re-presentation gives a further boost, with net 20 and 18% relative improvements over the baseline.
Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains, first from the iterated loss and then from multiple feature presentation. We also run additional CTC experiments with a 36-layer Transformer (roughly 120 million parameters in total). The 36-layer baseline has the same performance as the 24-layer one, but with the proposed methods added, the 36-layer model improves to give the best results. This shows that our proposed methods can improve even very deep models.
As shown in Table TABREF21, the proposed methods also provide large performance improvements on the curated video set, up to 13% with CTC, and up to 9% with the hybrid model. We also observe moderate gains of between 3.2 and 8% relative on the clean and noisy video sets.
Related Work
In recent years, Transformer models have become an active research topic in speech processing. The key feature of Transformer networks is self-attention, which produces performance comparable to or better than LSTMs when used for encoder-decoder based ASR BIBREF23, as well as when trained with CTC BIBREF9. Speech-Transformers BIBREF24 also produce performance comparable to the LSTM-based attention model, but with higher training speed on a single GPU. Abdelrahman et al. BIBREF8 integrate a convolution layer to capture audio context and reduce WER on Librispeech.
The use of an objective function in intermediate layers has been found useful in several previous works, such as image classification BIBREF25 and language modeling BIBREF26. In BIBREF27, the authors pre-trained an RNN-T based model by using a hierarchical CTC criterion with different target units. In this paper, we do not need additional types of target units; instead, we use the same tokenization and targets for both the intermediate and final losses.
The application of the objective function to intermediate layers is also similar in spirit to the use of KL-divergence in BIBREF28, which estimates output posteriors at an intermediate layer and regularizes them towards the distributions at the final layer. In contrast to this approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution is meaningful.
Conclusion
In this paper, we have proposed a method for re-processing the input features in light of the information available at an intermediate network layer. We do this in the context of deep transformer networks, via a self-attention mechanism on both features and hidden states representation. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as the output layer. This improves performance in and of itself, and when combined with feature re-presentation we observe consistent relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos. | No |
e6c872fea474ea96ca2553f7e9d5875df4ef55cd | e6c872fea474ea96ca2553f7e9d5875df4ef55cd_0 | Q: Does their model take more time to train than regular transformer models?
Text: Introduction
In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.
We present this work in the context of Transformer networks BIBREF5. Transformers have become a popular deep learning architecture for modeling sequential datasets, showing improvements in many tasks such as machine translation BIBREF5, language modeling BIBREF6 and autoregressive image generation BIBREF7. In the speech recognition field, Transformers have been proposed to replace recurrent neural network (RNN) architectures such as LSTMs and GRUs BIBREF8. A recent survey of Transformers in many speech related applications may be found in BIBREF9. Compared to RNNs, Transformers have several advantages, specifically an ability to aggregate information across all the time-steps by using a self-attention mechanism. Unlike RNNs, the hidden representations do not need to be computed sequentially across time, thus enabling significant efficiency improvements via parallelization.
In the context of a Transformer network, secondary feature analysis is enabled through an additional mid-network Transformer module that has access both to previous-layer activations and the raw features. To implement this model, we apply the objective function several times at the intermediate layers to encourage the development of phonetically relevant hypotheses. Interestingly, we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself, as well as enabling the secondary feature analysis.
This paper makes two main contributions:
We present improvements in the basic training process of deep transformer networks, specifically the iterated use of CTC or CE in intermediate layers, and
We show that an intermediate-layer attention model with access to both previous-layer activations and raw feature inputs can significantly improve performance.
We evaluate our proposed model on Librispeech and a large-scale video dataset. From our experimental results, we observe 10-20% relative improvement on Librispeech and 3.2-11% on the video dataset.
Transformer Modules
A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.
Assume we have an input sequence that is of length $S$: $X = [x_1,...,x_S]$. Each $x_i$ is itself a vector of activations. A transformer layer encodes $X$ into a corresponding output representation $Z = [z_1,...,z_S]$ as described below.
Transformers are built around the notion of a self-attention mechanism that is used to extract the relevant information for each time-step $s$ from all time-steps $[1..S]$ in the preceding layer. Self attention is defined in terms of a Query, Key, Value triplet $\lbrace {Q}, {K}, {V}\rbrace \in \mathbb {R}^{S \times d_k}$. In self-attention, the queries, keys and values are the columns of the input itself, $[x_1,...,x_S]$. The output activations are computed as:
Transformer modules deploy a multi-headed version of self-attention. As described in BIBREF5, this is done by linearly projecting the queries, keys and values $P$ times with different, learned linear projections. Self-attention is then applied to each of these projected versions of Queries, Keys and Values. These are concatenated and once again projected, resulting in the final values. We refer to the input projection matrices as $W_p^{Q}, W_p^{K}, W_p^{V}$, and to the output projection as $W_O$. Multihead attention is implemented as
Here, $ W_p^Q, W_p^K, W_p^V \in \mathbb {R}^{d_{k} \times d_m}$, $d_m = d_{k} / P$, and $W_O \in \mathbb {R}^{Pd_m \times d_k}$.
After self-attention, a transformer module applies a series of linear, ReLU, layer-norm, and dropout operations, as well as residual connections. The full sequence of processing is illustrated in Figure FIGREF3.
Iterated Feature Presentation
In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6. Here, we have used a 24 layer network, with feature re-presentation after the 12th layer.
In the following subsections, we provide detail on the feature re-presentation mechanism, and iterated loss calculation.
Iterated Feature Presentation ::: Feature Re-Presentation
We process the features at the intermediate layer by concatenating a projection of the original features with a projection of the previous hidden layer activations, and then applying self-attention.
First, we project both the input and intermediate-layer features $(Z_0 \in \mathbb {R}^{S \times d_0}, Z_{k} \in \mathbb {R}^{S \times d_{k}} )$, apply layer normalization, and concatenate with the position encoding:
where $d_0$ is the input feature dimension, $d_k$ is the Transformer output dimension, $W_1 \in \mathbb {R}^{d_0 \times d_c}, W_2 \in \mathbb {R}^{d_{k} \times d_c}$ and $E \in \mathbb {R}^{S \times d_{e}}$ is a sinusoidal position encoding BIBREF5.
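The sinusoidal encoding $E$ follows BIBREF5; a small sketch with $d_e = 256$ (the value used later in the experiments) is given below, with the function name being illustrative.

```python
import torch

def sinusoidal_position_encoding(seq_len, d_e=256):
    """Returns E with shape (seq_len, d_e)."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (S, 1)
    idx = torch.arange(0, d_e, 2, dtype=torch.float32)              # even dimensions
    angles = pos / torch.pow(10000.0, idx / d_e)                    # (S, d_e / 2)
    enc = torch.zeros(seq_len, d_e)
    enc[:, 0::2] = torch.sin(angles)
    enc[:, 1::2] = torch.cos(angles)
    return enc
```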
After we project both information sources to the same dimensionality, we merge the information by using time-axis concatenation:
Then, we extract relevant features with an extra Transformer layer, followed by a linear projection and ReLU:
where $W_3 \in \mathbb {R}^{d_{k+1}^{^{\prime }} \times d_{k+1}}$ is a linear projection. All biases in the formula above are omitted for simplicity.
Note that in doing time-axis concatenation, our Key and Value sequences are twice as long as the original input. In the standard self-attention where the Query is the same as the Key and Value, the output preserves the sequence length. Therefore, in order to maintain the necessary sequence length $S$, we select either the first half (split A) or the second half (split B) to represent the combined information. The difference between the two is that split A uses the projected input features as the Query set, while split B uses the projected higher-level activations as the Query. In initial experiments, we found that the use of high-level features (split B) as queries is preferable. We illustrate this operation in Figure FIGREF11.
Another way of combining information from the features with an intermediate layer is to concatenate the two along the feature axis rather than the time axis. However, in initial experiments we found that time-axis concatenation produces better results, and we focus on that in the experimental results.
Iterated Feature Presentation ::: Iterated Loss
We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \lbrace k_1, k_2, ..., k_L\rbrace \subseteq \lbrace 1,..,M-1\rbrace $. The total objective function is then defined as
where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\lambda $ scales the auxiliary loss and we set $\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6.
Experimental results ::: Dataset
We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a large-scale English video dataset. In the Librispeech training set, there are three splits, containing 100 and 360 hours sets of clean speech and 500 hours of other speech. We combined everything, resulting in 960 hours of training data. For the development set, there are also two splits: dev-clean and dev-other. For the test set, there is an analogous split.
The video dataset is a collection of public and anonymized English videos. It consists of a 1000 hour training set, a 9 hour dev set, and a $46.1$ hour test set. The test set comprises an $8.5$ hour curated set of carefully selected very clean videos, a 19 hour clean set and a $18.6$ hour noisy set BIBREF13. For the hybrid ASR experiments on video dataset, alignments were generated with a production system trained with 14k hours.
All speech features are extracted by using log Mel-filterbanks with 80 dimensions, a 25 ms window size and a 10 ms time step between two windows. Then we apply mean and variance normalization.
Experimental results ::: Target Units
For CTC training, we use word-pieces as our target. During training, the reference is tokenized into 5000 sub-word units using sentencepiece with a uni-gram language model BIBREF14. Neural networks are thus used to produce a posterior distribution over 5001 symbols (5000 sub-word units plus the blank symbol) every frame. For decoding, each sub-word is modeled by an HMM with two states where the last states share the same blank symbol probability; the best sub-word segmentation of each word is used to form a lexicon; these HMMs and the lexicon are then combined with a standard $n$-gram language model via FST BIBREF15 to form a static decoding graph. The Kaldi decoder BIBREF16 is used to produce the best hypothesis.
We further present results with hybrid ASR systems. In this, we use the same HMM topology, GMM bootstrapping and decision tree building procedure as BIBREF13. Specifically, we use context-dependent (CD) graphemes as modeling units. On top of alignments from a GMM model, we build a decision tree to cluster CD graphemes. This results in 7248 context dependent units for Librispeech, and 6560 units for the video dataset. Training then proceeds with the CE loss function. We also apply SpecAugment BIBREF17 online during training, using the LD policy without time warping. For decoding, a standard Kaldi's WFST decoder BIBREF16 is used.
Experimental results ::: Deep Transformer Acoustic Model
All neural networks are implemented with an in-house extension of the fairseq BIBREF18 toolkit. Our speech features are produced by processing the log Mel-spectrogram with two VGG BIBREF19 layers that have the following configurations: (1) two 2-D convolutions with 32 output filters, kernel=3, stride=1, ReLU activation, and max-pooling kernel=2; (2) two 2-D convolutions with 64 output filters, kernel=3, stride=1, and max-pooling kernel=2 for CTC or max-pooling kernel=1 for hybrid. After the VGG layers, the total number of frames is subsampled by (i) 4x for CTC or (ii) 2x for hybrid, enabling us to reduce run-time and memory usage significantly. After VGG processing, we use 24 Transformer layers with $d_k=512$ (8 heads, each with 64 dimensions), 2048 feedforward hidden dimensions (approximately 80 million parameters in total), and dropout $0.15$. For the proposed models, we use an auxiliary MLP with two linear layers with 256 hidden units, LeakyReLU activation, and softmax (see Sec. SECREF3). We set our position encoding dimensions to $d_e=256$ and the pre-concatenation projection to $d_c=768$ for the feature re-presentation layer. The loss function is either the CTC loss or the hybrid CE loss.
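A minimal sketch of the auxiliary prediction head described above (the module layout and the log-softmax output are assumptions; the dimensions follow the text):

```python
import torch
import torch.nn as nn

class AuxiliaryHead(nn.Module):
    """Two linear layers with 256 hidden units, LeakyReLU, and a softmax output."""
    def __init__(self, d_model: int = 512, hidden: int = 256, n_out: int = 5001):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_model, hidden),
            nn.LeakyReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, z_k: torch.Tensor) -> torch.Tensor:
        # z_k: intermediate Transformer activations, (T, B, d_model)
        return self.mlp(z_k).log_softmax(dim=-1)  # per-frame log-probabilities

# Usage: one head per layer in K, fed into the auxiliary CTC/CE loss.
aux_log_probs = AuxiliaryHead()(torch.randn(200, 8, 512))
```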
Experimental results ::: Results
Table TABREF19 presents CTC-based results for the Librispeech dataset, without data augmentation. Our baseline is a 24-layer Transformer network trained with CTC. For the proposed method, we vary the number and placement of the iterated loss and the feature re-presentation. The next three results show the effect of applying CTC multiple times. We see 12% and 8% relative improvements for test-clean and test-other. Adding feature re-presentation gives a further boost, with net 20% and 18% relative improvements over the baseline.
Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains, first from the iterated loss and then from multiple feature presentation. We also run additional CTC experiments with a 36-layer Transformer (approximately 120 million parameters). The 36-layer baseline has the same performance as the 24-layer one, but with the proposed methods added, the 36-layer model improves to give the best results. This shows that our proposed methods can improve even very deep models.
As shown in Table TABREF21, the proposed methods also provide large performance improvements on the curated video set, up to 13% with CTC, and up to 9% with the hybrid model. We also observe moderate gains of between 3.2 and 8% relative on the clean and noisy video sets.
Related Work
In recent years, Transformer models have become an active research topic in speech processing. The key feature of Transformer networks is self-attention, which produces performance comparable to or better than LSTMs when used for encoder-decoder based ASR BIBREF23, as well as when trained with CTC BIBREF9. Speech-Transformers BIBREF24 also produce performance comparable to LSTM-based attention models, but with higher training speed on a single GPU. Abdelrahman et al. BIBREF8 integrate a convolution layer to capture audio context and reduce WER on Librispeech.
The use of an objective function at intermediate layers has been found useful in several previous works, such as image classification BIBREF25 and language modeling BIBREF26. In BIBREF27, the authors pre-trained an RNN-T based model using a hierarchical CTC criterion with different target units. In this paper, we do not need additional types of target unit; instead, we use the same tokenization and targets for both the intermediate and final losses.
The application of the objective function to intermediate layers is also similar in spirit to the use of KL-divergence in BIBREF28, which estimates output posteriors at an intermediate layer and regularizes them towards the distributions at the final layer. In contrast to this approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution is meaningful.
Conclusion
In this paper, we have proposed a method for re-processing the input features in light of the information available at an intermediate network layer. We do this in the context of deep Transformer networks, via a self-attention mechanism over both the features and the hidden-state representations. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as at the output layer. This improves performance in and of itself, and when combined with feature re-presentation we observe consistent relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos. | Unanswerable
fc29bb14f251f18862c100e0d3cd1396e8f2c3a1 | fc29bb14f251f18862c100e0d3cd1396e8f2c3a1_0 | Q: Are agglutinative languages used in the prediction of both prefixing and suffixing languages?
Text: Introduction
A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.
Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?
Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
Task
Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.
The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be
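(The display example appears to have been dropped in extraction; given the paradigm of eat in Table TABREF1, a plausible instance is)

(eat, PAST) $\mapsto $ ate,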
with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.
In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting.
Task ::: Formal definition.
Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as:
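(The display equation was lost in extraction; a plausible reconstruction from the surrounding definitions, whose exact notation is an assumption, is)

$\pi (w) = \lbrace \left(f_{k}[w], t_{k}\right) \mid k \in {\cal M}\rbrace ,$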
$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\Sigma $.
The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$.
Model ::: Pointer–Generator Network
The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details.
Model ::: Pointer–Generator Network ::: Encoders.
Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word.
Model ::: Pointer–Generator Network ::: Attention.
Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$.
Model ::: Pointer–Generator Network ::: Decoder.
Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model assigns a certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying:
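(The display equation is missing here; a plausible reconstruction of the mixture described in the text, with the notation an assumption, is)

$p(y_t) = \alpha \, p_{\textrm {dec}}(y_t) + (1 - \alpha ) \, p_{\textrm {copy}}(y_t).$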
$p_{\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\alpha $ with which it generates a new output character as
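(Again the display equation is missing; a plausible reconstruction, assuming a sigmoid gate over the listed inputs, is)

$\alpha = \sigma \left( w_c^{\top } c_t + w_s^{\top } s_t + w_y^{\top } y_{t-1} + b \right),$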
for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16.
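The mixing step can be sketched as follows; the tensor names, shapes, and the scatter-based construction of $p_{\textrm {copy}}$ are assumptions for illustration rather than the authors' implementation.

```python
import torch

def mix_probabilities(p_dec, attn_weights, src_ids, alpha, vocab_size):
    """p_dec: (B, V) decoder distribution; attn_weights: (B, S) over source
    characters; src_ids: (B, S) vocabulary ids of those characters; alpha: (B, 1)."""
    p_copy = torch.zeros(p_dec.size(0), vocab_size)
    # Scatter the attention mass onto the vocabulary ids of the input characters.
    p_copy.scatter_add_(1, src_ids, attn_weights)
    return alpha * p_dec + (1.0 - alpha) * p_copy

# Toy example with batch 2, source length 5, vocabulary 30.
B, S, V = 2, 5, 30
p_y = mix_probabilities(
    torch.softmax(torch.randn(B, V), dim=-1),
    torch.softmax(torch.randn(B, S), dim=-1),
    torch.randint(0, V, (B, S)),
    torch.sigmoid(torch.randn(B, 1)),
    V,
)
```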
Model ::: Pretraining and Finetuning
Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.
In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.
Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?
Experimental Design ::: Target Languages
We choose three target languages.
English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.
Spanish (SPA), in contrast, is morphologically rich, with much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Experimental Design ::: Source Languages
For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.
As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.
We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.
Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.
French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.
German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.
Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.
Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.
Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages.
Experimental Design ::: Hyperparameters and Data
We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.
For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.
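As a hedged sketch of this schedule (the training function and data handles are hypothetical; only the optimizer, epoch counts, and dropout coefficients come from the text):

```python
import torch

PRETRAIN = {"epochs": 50, "dropout": 0.3}    # source language, 10000 examples
FINETUNE = {"epochs": 300, "dropout": 0.5}   # target language, low-resource set

def run_phase(model, phase, data, train_one_epoch):
    for module in model.modules():           # switch dropout between phases
        if isinstance(module, torch.nn.Dropout):
            module.p = phase["dropout"]
    optimizer = torch.optim.Adam(model.parameters())
    for _ in range(phase["epochs"]):
        train_one_epoch(model, optimizer, data)

# run_phase(model, PRETRAIN, l1_data, train_one_epoch)
# run_phase(model, FINETUNE, l2_data, train_one_epoch)  # fine-tune a copy per L2
```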
We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets.
Quantitative Results
In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is most closely related to ENG.
For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.
Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN.
Qualitative Results
For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories.
Qualitative Results ::: Stem Errors
SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.
Example: decultared instead of decultured
DEL(X): This happens when the system omits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.
Example: firte instead of firtle
NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.
Example: verto instead of vierto
MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted separately.
Example: aconcoonaste instead of acondicionaste
ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.
Example: compillan instead of compilan
CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.
Example: propace instead of propague
Qualitative Results ::: Affix Errors
AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.
Example: ezoJulayi instead of esikaJulayi
CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.
Example: irradiseis instead of irradiaseis
Qualitative Results ::: Miscellaneous Errors
REFL: This happens when a reflexive pronoun is missing in the generated form.
Example: doliéramos instead of nos doliéramos
REFL_LOC: This error occurs if the reflexive pronoun appears at an unexpected position within the generated form.
Example: taparsebais instead of os tapabais
OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.
Example: underteach instead of undertaught
Qualitative Results ::: Error Analysis: English
Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.
Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find fewer, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time learning ENG inflectional morphology, and, in particular, copying the stem correctly. Thus, our second hypothesis is that familiarity with a prefixing language might lead the model to wrongly suspect that changes are needed in the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.
Next, the relatively large number of stem errors for QVH leads to our third hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV, and QVH, as compared to the other languages.
Considering errors related to the affixes which have to be generated, we find that DEU, HUN, and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language most closely related to ENG, language relatedness plays a role for producing suffixes of inflected forms as well.
Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes.
Qualitative Results ::: Error Analysis: Spanish
The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19, it becomes clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.
Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. MULT errors, especially, are much more frequent for EUS and NAV than for all other languages. ADD(X) happens a lot for EUS, while ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which is reflected in NO_CHG(V). Thus, we conclude that this phenomenon is generally hard to learn.
Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be beneficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
Qualitative Results ::: Error Analysis: Zulu
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also commits the highest number of MULT errors.
The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.
The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV.
Qualitative Results ::: Limitations
A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.
Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.
Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages.
Related Work ::: Neural network models for inflection.
Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. While the shared tasks have traditionally focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.
Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate.
Related Work ::: Cross-lingual transfer in NLP.
Cross-lingual transfer learning has been used for a large variety of NLP tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research has asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages.
Related Work ::: Acquisition of morphological inflection.
Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.
To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.
Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?
Conclusion and Future Work
Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?
We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.
Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.
Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material.
Acknowledgments
I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation. | Yes |
f3e96c5487d87557a661a65395b0162033dc05b3 | f3e96c5487d87557a661a65395b0162033dc05b3_0 | Q: What is an example of a prefixing language?
Text: Introduction
A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.
Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?
Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
Task
Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.
The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be
with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.
In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting.
Task ::: Formal definition.
Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as:
$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\Sigma $.
The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$.
Model ::: Pointer–Generator Network
The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details.
Model ::: Pointer–Generator Network ::: Encoders.
Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word.
Model ::: Pointer–Generator Network ::: Attention.
Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$.
Model ::: Pointer–Generator Network ::: Decoder.
Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying:
$p_{\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\alpha $ with which it generates a new output character as
for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16.
Model ::: Pretraining and Finetuning
Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.
In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.
Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?
Experimental Design ::: Target Languages
We choose three target languages.
English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.
Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Experimental Design ::: Source Languages
For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.
As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.
We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.
Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.
French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.
German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.
Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.
Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.
Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages.
Experimental Design ::: Hyperparameters and Data
We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.
For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.
We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets.
Quantitative Results
In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is closest related to ENG.
For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.
Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN.
Qualitative Results
For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories.
Qualitative Results ::: Stem Errors
SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.
Example: decultared instead of decultured
DEL(X): This happens when the system ommits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.
Example: firte instead of firtle
NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.
Example: verto instead of vierto
MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted for separately.
Example: aconcoonaste instead of acondicionaste
ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.
Example: compillan instead of compilan
CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.
Example: propace instead of propague
Qualitative Results ::: Affix Errors
AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.
Example: ezoJulayi instead of esikaJulayi
CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.
Example: irradiseis instead of irradiaseis
Qualitative Results ::: Miscellaneous Errors
REFL: This happens when a reflective pronoun is missing in the generated form.
Example: doliéramos instead of nos doliéramos
REFL_LOC: This error occurs if the reflective pronouns appears at an unexpected position within the generated form.
Example: taparsebais instead of os tapabais
OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.
Example: underteach instead of undertaught
Qualitative Results ::: Error Analysis: English
Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.
Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find less, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time to learn ENG inflectional morphology, and, in particular, to copy the stem correctly. Thus, our second hypotheses is that familiarity with a prefixing language might lead to suspicion of needed changes to the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.
Next, the relatively large amount of stem errors for QVH leads to our second hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.
Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language which is closest related to ENG, language relatedness plays a role for producing suffixes of inflected forms as well.
Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes.
Qualitative Results ::: Error Analysis: Spanish
The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19 it gets clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.
Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. Especially MULT errors are much more frequent for EUS and NAV than for all other languages. ADD(X) happens a lot for EUS, while ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which reflects in NO_CHG(V). Thus, we conclude that this phenomenon is generally hard to learn.
Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
Qualitative Results ::: Error Analysis: Zulu
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also committs the highest number of MULT errors.
The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.
The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV.
Qualitative Results ::: Limitations
A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.
Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.
Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages.
Related Work ::: Neural network models for inflection.
Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. Traditionally being focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.
Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate.
Related Work ::: Cross-lingual transfer in NLP.
Cross-lingual transfer learning has been used for a large variety of NLP tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research has asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages.
Related Work ::: Acquisition of morphological inflection.
Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.
To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.
Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?
Conclusion and Future Work
Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?
We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.
Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.
Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material.
Acknowledgments
I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation. | Zulu |
74db8301d42c7e7936eb09b2171cd857744c52eb | 74db8301d42c7e7936eb09b2171cd857744c52eb_0 | Q: How is the performance on the task evaluated?
Text: Introduction
A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.
Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?
Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
Task
Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, e.g., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.
The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be (eat, PAST) $\mapsto $ ate,
with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.
In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting.
Task ::: Formal definition.
Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as $\pi (w) = \left(f_{k}[w]\right)_{t_{k} \in {\cal M}}$, where
$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\Sigma $.
The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$.
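To make the formalization concrete, the following minimal Python sketch (purely illustrative, not taken from the paper) represents toy English paradigms as mappings from tags to forms and phrases the inflection task as producing the form for a given lemma and tag; the tag names follow the 3rdSgPres and PAST examples above.

```python
from typing import Dict

Paradigm = Dict[str, str]  # morphological tag -> inflected form f_k[w]

# Toy, partial paradigms for the two lemmas shown in Table 1.
PARADIGMS: Dict[str, Paradigm] = {
    "dance": {"3rdSgPres": "dances", "PAST": "danced"},
    "eat": {"3rdSgPres": "eats", "PAST": "ate"},  # irregular past tense
}

def inflect(lemma: str, tag: str) -> str:
    """Return the missing form f_i[w] for lemma w and tag t_i.

    A trained model would predict this string; here we simply look it up."""
    return PARADIGMS[lemma][tag]

assert inflect("eat", "PAST") == "ate"
assert inflect("dance", "3rdSgPres") == "dances"
```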
Model ::: Pointer–Generator Network
The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details.
Model ::: Pointer–Generator Network ::: Encoders.
Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word.
Model ::: Pointer–Generator Network ::: Attention.
Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$.
Model ::: Pointer–Generator Network ::: Decoder.
Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives a certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying: $p(y_t) = \alpha \, p_{\textrm {dec}}(y_t) + (1 - \alpha ) \, p_{\textrm {copy}}(y_t)$.
$p_{\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\alpha $ with which it generates a new output character as
$\alpha = \sigma (w_c \cdot c_t + w_s \cdot s_t + w_y \cdot y_{t-1} + b)$ for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16.
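As a rough numerical illustration of the copy mechanism described above (a minimal sketch with made-up shapes and names, not the implementation used in the experiments), the final distribution over output characters can be formed by mixing the decoder's generation distribution with a copy distribution obtained by scattering the attention weights onto the vocabulary ids of the input characters:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def output_distribution(dec_logits: np.ndarray,
                        attn_weights: np.ndarray,
                        input_char_ids: np.ndarray,
                        vocab_size: int,
                        alpha: float) -> np.ndarray:
    """Mix the generation and copy distributions of a pointer-generator decoder.

    dec_logits:     (vocab_size,) unnormalized decoder scores
    attn_weights:   (src_len,) attention weights over the input characters (sum to 1)
    input_char_ids: (src_len,) vocabulary ids of the input characters
    alpha:          probability of generating (1 - alpha is the probability of copying)
    """
    p_dec = softmax(dec_logits)                      # p_dec(y_t)
    p_copy = np.zeros(vocab_size)
    for weight, char_id in zip(attn_weights, input_char_ids):
        p_copy[char_id] += weight                    # copy mass lands on input characters
    return alpha * p_dec + (1.0 - alpha) * p_copy    # p(y_t)

# Tiny example: 5-character vocabulary, 3-character input word.
p = output_distribution(np.array([0.1, 2.0, 0.3, 0.0, -1.0]),
                        np.array([0.7, 0.2, 0.1]),
                        np.array([2, 2, 4]),
                        vocab_size=5, alpha=0.6)
assert abs(p.sum() - 1.0) < 1e-9
```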
Model ::: Pretraining and Finetuning
Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.
In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.
Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?
Experimental Design ::: Target Languages
We choose three target languages.
English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.
Spanish (SPA), in contrast, is morphologically rich, and has much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Experimental Design ::: Source Languages
For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.
As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.
We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.
Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.
French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.
German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.
Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.
Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.
Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages.
Experimental Design ::: Hyperparameters and Data
We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.
For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.
We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets.
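The overall training regime can be summarized in a short sketch (a stand-in, not the actual pointer–generator code): 50 pretraining epochs on the high-resource source-language data with dropout 0.3, followed by 300 fine-tuning epochs on the low-resource target-language data with dropout 0.5, keeping the same parameters throughout. The example data in the call at the end is invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Example = Tuple[str, str, str]  # (lemma, inflected form, tag bundle)

@dataclass
class InflectionModel:
    """Placeholder for the pointer-generator network; it only records training steps."""
    log: List[str] = field(default_factory=list)

    def train_epoch(self, data: List[Example], dropout: float) -> None:
        # A real implementation would run teacher forcing and an ADAM update here.
        self.log.append(f"{len(data)} examples, dropout={dropout}")

def pretrain_then_finetune(model: InflectionModel,
                           l1_data: List[Example],
                           l2_data: List[Example]) -> InflectionModel:
    for _ in range(50):      # "native language" phase: high-resource source data
        model.train_epoch(l1_data, dropout=0.3)
    for _ in range(300):     # L2 phase: low-resource target data, same parameters
        model.train_epoch(l2_data, dropout=0.5)
    return model

# Toy call with one invented example per language.
model = pretrain_then_finetune(InflectionModel(),
                               l1_data=[("eat", "ate", "V;PST")],
                               l2_data=[("comer", "comió", "V;PST;3;SG")])
```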
Quantitative Results
In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is most closely related to ENG.
For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.
Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN.
Qualitative Results
For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories.
Qualitative Results ::: Stem Errors
SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.
Example: decultared instead of decultured
DEL(X): This happens when the system omits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.
Example: firte instead of firtle
NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.
Example: verto instead of vierto
MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted separately.
Example: aconcoonaste instead of acondicionaste
ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.
Example: compillan instead of compilan
CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.
Example: propace instead of propague
Qualitative Results ::: Affix Errors
AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.
Example: ezoJulayi instead of esikaJulayi
CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.
Example: irradiseis instead of irradiaseis
Qualitative Results ::: Miscellaneous Errors
REFL: This happens when a reflexive pronoun is missing in the generated form.
Example: doliéramos instead of nos doliéramos
REFL_LOC: This error occurs if the reflexive pronoun appears at an unexpected position within the generated form.
Example: taparsebais instead of os tapabais
OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.
Example: underteach instead of undertaught
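The error analysis below repeatedly compares how often these manually assigned labels occur per source language; a trivial way to aggregate such annotations (a purely illustrative sketch with invented labels, not the actual annotation tooling) is:

```python
from collections import Counter
from typing import Dict, List

def tally_errors(annotations: Dict[str, List[str]]) -> Dict[str, Counter]:
    """Count manually assigned error labels per source language."""
    return {lang: Counter(labels) for lang, labels in annotations.items()}

# Invented labels for a handful of annotated development outputs:
annotations = {
    "EUS": ["MULT", "ADD(V)", "DEL(C)", "AFF"],
    "HUN": ["AFF"],
}
for lang, counts in tally_errors(annotations).items():
    print(lang, dict(counts))
```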
Qualitative Results ::: Error Analysis: English
Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.
Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find fewer, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time learning ENG inflectional morphology and, in particular, copying the stem correctly. Thus, our second hypothesis is that familiarity with a prefixing language might lead the model to wrongly suspect changes to the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.
Next, the relatively large number of stem errors for QVH leads to our third hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.
Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language most closely related to ENG, language relatedness plays a role in producing suffixes of inflected forms as well.
Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes.
Qualitative Results ::: Error Analysis: Spanish
The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19 it becomes clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.
Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. MULT errors, especially, are much more frequent for EUS and NAV than for all other languages. ADD(X) errors are particularly frequent for EUS, while ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which is reflected in NO_CHG(V) errors. Thus, we conclude that this phenomenon is generally hard to learn.
Turning next to the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be beneficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
Qualitative Results ::: Error Analysis: Zulu
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also commits the highest number of MULT errors.
The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.
The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV.
Qualitative Results ::: Limitations
A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.
Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.
Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages.
Related Work ::: Neural network models for inflection.
Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. Traditionally being focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.
Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate.
Related Work ::: Cross-lingual transfer in NLP.
Cross-lingual transfer learning has been used for a large variety of NLP tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research has asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages.
Related Work ::: Acquisition of morphological inflection.
Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.
To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.
Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?
Conclusion and Future Work
Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?
We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.
Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.
Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material.
Acknowledgments
I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation. | Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors |
587885bc86543b8f8b134c20e2c62f6251195571 | 587885bc86543b8f8b134c20e2c62f6251195571_0 | Q: What are the three target languages studied in the paper?
Text: Introduction
A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.
Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?
Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models.
To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology.
Task
Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, e.g., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.
The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be (eat, PAST) $\mapsto $ ate,
with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.
In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting.
Task ::: Formal definition.
Let ${\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\pi $ of $w$ as $\pi (w) = \left(f_{k}[w]\right)_{t_{k} \in {\cal M}}$, where
$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\Sigma $.
The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$.
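As a toy illustration of the sequence-to-sequence casting mentioned above (an illustrative preprocessing sketch; the ';'-separated subtag format is an assumption based on common annotation practice, and the exact preprocessing of the implementation may differ), an example can be split into the tag, lemma-character, and output-character sequences that a character-level model consumes:

```python
from typing import List, Tuple

def make_example(lemma: str, tags: str, form: str) -> Tuple[List[str], List[str], List[str]]:
    """Split an example into the three sequences used by a character-level model:
    the subtag sequence, the lemma's characters, and the inflected form's characters."""
    tag_seq = tags.split(";")   # e.g. ['V', 'PST']
    char_seq = list(lemma)      # e.g. ['e', 'a', 't']
    target_seq = list(form)     # e.g. ['a', 't', 'e']
    return tag_seq, char_seq, target_seq

tags, chars, target = make_example("eat", "V;PST", "ate")
print(tags, chars, target)  # ['V', 'PST'] ['e', 'a', 't'] ['a', 't', 'e']
```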
Model ::: Pointer–Generator Network
The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details.
Model ::: Pointer–Generator Network ::: Encoders.
Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word.
Model ::: Pointer–Generator Network ::: Attention.
Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$.
Model ::: Pointer–Generator Network ::: Decoder.
Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives a certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying: $p(y_t) = \alpha \, p_{\textrm {dec}}(y_t) + (1 - \alpha ) \, p_{\textrm {copy}}(y_t)$.
$p_{\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\alpha $ with which it generates a new output character as
$\alpha = \sigma (w_c \cdot c_t + w_s \cdot s_t + w_y \cdot y_{t-1} + b)$ for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16.
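To complement the description of the generation probability $\alpha $, here is a small numerical sketch (the dot-product-plus-sigmoid parameterisation is an assumption about how the listed quantities are combined, not the verified implementation) that gates between generating and copying based on the context vector, decoder state, and last output embedding:

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def generation_probability(c_t: np.ndarray, s_t: np.ndarray, y_prev: np.ndarray,
                           w_c: np.ndarray, w_s: np.ndarray, w_y: np.ndarray,
                           b: float) -> float:
    """Gate between generating (alpha) and copying (1 - alpha).

    Names follow the text: context vector c_t, decoder state s_t, embedding of the
    last output y_{t-1}, weights w_c, w_s, w_y, bias b."""
    return float(sigmoid(np.dot(w_c, c_t) + np.dot(w_s, s_t) + np.dot(w_y, y_prev) + b))

rng = np.random.default_rng(0)
d = 4
alpha = generation_probability(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
                               rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
                               b=0.1)
assert 0.0 < alpha < 1.0
```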
Model ::: Pretraining and Finetuning
Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.
In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.
Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?
Experimental Design ::: Target Languages
We choose three target languages.
English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.
Spanish (SPA), in contrast, is morphologically rich, and has much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\rightarrow $ ue).
Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.
Experimental Design ::: Source Languages
For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.
As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.
We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.
Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.
French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.
German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.
Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.
Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.
Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages.
Experimental Design ::: Hyperparameters and Data
We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.
For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.
We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets.
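For concreteness, a minimal reader for such files and the exact-match accuracy measure might look as follows (an illustrative sketch: the tab-separated column order of lemma, inflected form, and tag bundle is based on the released data format, and accuracy is assumed, as is standard for this task, to mean exact string match of the predicted form):

```python
from typing import List, Tuple

def read_inflection_tsv(path: str) -> List[Tuple[str, str, str]]:
    """Read shared-task-style lines of the form: lemma<TAB>inflected form<TAB>tag bundle."""
    examples = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.rstrip("\n")
            if not line:
                continue
            lemma, form, tags = line.split("\t")
            examples.append((lemma, form, tags))
    return examples

def exact_match_accuracy(predictions: List[str], golds: List[str]) -> float:
    """Fraction of predicted forms that match the gold form exactly."""
    assert len(predictions) == len(golds) and golds
    return sum(p == g for p, g in zip(predictions, golds)) / len(golds)

print(exact_match_accuracy(["ate", "dances"], ["ate", "danced"]))  # 0.5
```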
Quantitative Results
In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is most closely related to ENG.
For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.
Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN.
Qualitative Results
For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories.
Qualitative Results ::: Stem Errors
SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.
Example: decultared instead of decultured
DEL(X): This happens when the system omits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.
Example: firte instead of firtle
NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.
Example: verto instead of vierto
MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted separately.
Example: aconcoonaste instead of acondicionaste
ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.
Example: compillan instead of compilan
CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.
Example: propace instead of propague
Qualitative Results ::: Affix Errors
AFF: This error refers to a wrong affix. This can be either a prefix or a suffix, depending on the correct target form.
Example: ezoJulayi instead of esikaJulayi
CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.
Example: irradiseis instead of irradiaseis
Qualitative Results ::: Miscellaneous Errors
REFL: This happens when a reflexive pronoun is missing in the generated form.
Example: doliéramos instead of nos doliéramos
REFL_LOC: This error occurs if the reflexive pronoun appears at an unexpected position within the generated form.
Example: taparsebais instead of os tapabais
OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.
Example: underteach instead of undertaught
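The categories above were assigned manually; purely as an illustration of how one could obtain a first, much coarser split between stem- and affix-related mistakes automatically (a heuristic sketch that assumes a suffixing target language and has no counterpart in the paper), consider:

```python
def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def rough_error_location(lemma: str, prediction: str, gold: str) -> str:
    """Crudely decide whether a wrong prediction diverges inside the stem or the affix.

    The 'stem' is approximated as the longest common prefix of lemma and gold form,
    so this only makes sense for suffixing targets and is far coarser than the
    manual categories above."""
    if prediction == gold:
        return "correct"
    stem_len = common_prefix_len(lemma, gold)
    return "stem" if common_prefix_len(prediction, gold) < stem_len else "affix"

# The ADD(C) example from the Spanish analysis, with the lemma assumed to be 'compilar':
print(rough_error_location("compilar", "compillan", "compilan"))  # -> "stem"
```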
Qualitative Results ::: Error Analysis: English
Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.
Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find fewer, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time learning ENG inflectional morphology and, in particular, copying the stem correctly. Thus, our second hypothesis is that familiarity with a prefixing language might lead the model to wrongly suspect changes to the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.
Next, the relatively large number of stem errors for QVH leads to our third hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.
Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language most closely related to ENG, language relatedness plays a role in producing suffixes of inflected forms as well.
Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes.
Qualitative Results ::: Error Analysis: Spanish
The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19 it becomes clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.
Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. MULT errors, especially, are much more frequent for EUS and NAV than for all other languages. ADD(X) errors are particularly frequent for EUS, while ADD(C) is also frequent for NAV. Models pretrained on either language have difficulties with vowel changes, which is reflected in NO_CHG(V) errors. Thus, we conclude that this phenomenon is generally hard to learn.
Turning next to the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be beneficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.
Qualitative Results ::: Error Analysis: Zulu
In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.
Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also commits the highest number of MULT errors.
The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.
The overall number of errors related to the affix is comparable across models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT errors for the target language SPA occurred mostly for the prefixing languages EUS and NAV.
Qualitative Results ::: Limitations
A limitation of our work is that we only include languages written in Latin script. An interesting question for future work might thus concern the effect of disjoint L1 and L2 alphabets.
Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, mapping the language's alphabet to Latin characters if needed.
Finally, while we aimed to select a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages.
Related Work ::: Neural network models for inflection.
Most research on inflectional morphology in NLP in recent years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. While the shared tasks traditionally focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning, beyond the final accuracy score, was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.
Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: while jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate whether neural inflection models exhibit interesting differences in production errors depending on the pretraining language. We also differ in the neural network architecture and the language pairs we investigate.
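To clarify the methodological contrast drawn above, the following schematic sketches the difference between multi-task training on two languages at once and the pretrain-then-fine-tune schedule we use. The `Seq2SeqInflector` class and its `train_step` interface are placeholders assumed purely for illustration; they stand in for a character-level pointer–generator network and do not reproduce our actual implementation.

```python
import random


class Seq2SeqInflector:
    """Placeholder for a character-level encoder-decoder (e.g., a pointer-generator)."""

    def train_step(self, batch):
        pass  # one gradient update on a batch of (lemma, tags, inflected form) triples


def multi_task_training(model, source_batches, target_batches, epochs=10):
    """Multi-task setup (as in prior work): batches from both languages are interleaved."""
    for _ in range(epochs):
        batches = source_batches + target_batches
        random.shuffle(batches)
        for batch in batches:
            model.train_step(batch)


def pretrain_then_finetune(model, source_batches, target_batches,
                           pretrain_epochs=10, finetune_epochs=10):
    """This work's schedule: train on the source language (L1) first, then continue on the target (L2)."""
    for _ in range(pretrain_epochs):
        for batch in source_batches:
            model.train_step(batch)
    for _ in range(finetune_epochs):
        for batch in target_batches:
            model.train_step(batch)
```

The only difference the sketch highlights is the data schedule: in the multi-task regime both languages are seen throughout training, whereas in our regime the target language is only seen after pretraining on the source language has finished.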
Related Work ::: Cross-lingual transfer in NLP.
Cross-lingual transfer learning has been used for a large variety of NLP tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research has asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages.
Related Work ::: Acquisition of morphological inflection.
Finally, a large body of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.
To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including English speakers' learning of its verbal morphology. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the types of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense–aspect system in an L2 for speakers of a native language which does not mark tense explicitly.
Finally, our work has been loosely motivated by bliss2006l2. There, the author asked, for human subjects, a question similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?
Conclusion and Future Work
Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?
We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target-language inflection is easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, and vice versa; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.
Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.
Another interesting next step would be to investigate how the errors made by our models compare to those made by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict the errors a person will make, which, in turn, could be leveraged for further research or the development of educational material.
Acknowledgments
I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation.