id (stringlengths 40-40) | pid (stringlengths 42-42) | input (stringlengths 8.37k-169k) | output (stringlengths 1-1.63k)
---|---|---|---
753990d0b621d390ed58f20c4d9e4f065f0dc672 | 753990d0b621d390ed58f20c4d9e4f065f0dc672_0 | Q: What is the seed lexicon?
Text: Introduction
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. Conversely, if $x_2$ is known to be negative, this indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession), although this heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given this minimal amount of supervision, they performed well. In addition, combining annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were scarce.
Related Work
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what is described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions by utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
Proposed Method
Proposed Method ::: Polarity Function
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:
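Written out as a composition of the three components described next, this form is:

$$p(x) = {\rm tanh}\bigl({\rm Linear}({\rm Encoder}(x))\bigr)$$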
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
Proposed Method ::: Discourse Relation-Based Event Pairs
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. These scores are used as reference scores during training.
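The following Python sketch illustrates this labeling rule; the tiny seed dictionary and helper names are illustrative only and ignore the negation handling mentioned above.

```python
# Sketch of the AL labeling rule: propagate the seed-derived score of the
# latter event to the former event, flipping it for Concession.
# (Simplification: the real pipeline also skips events involving negation.)

SEED_POLARITY = {"嬉しい": +1.0, "落ち込む": -1.0}  # tiny excerpt of the seed lexicon

def seed_score(predicate):
    """Return +1/-1 if the predicate is in the seed lexicon, else None."""
    return SEED_POLARITY.get(predicate)

def label_al_pair(former_pred, latter_pred, relation):
    """Return (former_score, latter_score) for an AL pair, or None if the
    pair does not satisfy the AL conditions."""
    latter = seed_score(latter_pred)
    former = seed_score(former_pred)
    if latter is None or former is not None:
        return None  # AL requires a seed match on the latter event only
    if relation == "Cause":
        return latter, latter    # same polarity
    if relation == "Concession":
        return -latter, latter   # reversed polarity
    return None

print(label_al_pair("仕事をクビになる", "落ち込む", "Cause"))     # (-1.0, -1.0)
print(label_al_pair("重大な失敗を犯す", "嬉しい", "Concession"))  # (-1.0, 1.0)
```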
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
Proposed Method ::: Loss Functions
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
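With the notation introduced next, a mean-squared-error form consistent with this description is:

$$\mathcal{L}_{\rm AL} = \frac{\lambda_{\rm AL}}{N_{\rm AL}} \sum_{i=1}^{N_{\rm AL}} \Bigl[ \bigl(p(x_{i1}) - r_{i1}\bigr)^2 + \bigl(p(x_{i2}) - r_{i2}\bigr)^2 \Bigr]$$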
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
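One instantiation consistent with the description below, in which the first term pulls the two scores together and the $\mu$-weighted term rewards large magnitudes so that the scores do not collapse to zero, is:

$$\mathcal{L}_{\rm CA} = \frac{\lambda_{\rm CA}}{N_{\rm CA}} \sum_{i=1}^{N_{\rm CA}} \Bigl[ \bigl(p(y_{i1}) - p(y_{i2})\bigr)^2 - \mu \bigl(p(y_{i1})^2 + p(y_{i2})^2\bigr) \Bigr]$$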
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
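A sketch of an analogous form, in which minimizing the first term pushes the two scores toward opposite signs (and hence apart) while the $\mu$ term again keeps their magnitudes from shrinking, is:

$$\mathcal{L}_{\rm CO} = \frac{\lambda_{\rm CO}}{N_{\rm CO}} \sum_{i=1}^{N_{\rm CO}} \Bigl[ \bigl(p(y_{i1}) + p(y_{i2})\bigr)^2 - \mu \bigl(p(y_{i1})^2 + p(y_{i2})^2\bigr) \Bigr]$$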
The difference is that the first term makes the scores of the two events distant from each other.
Experiments
Experiments ::: Dataset
Experiments ::: Dataset ::: AL, CA, and CO
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally call clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause, and Concession (逆接) as Concession. Here is an example of event pair extraction.
重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
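As a rough illustration (not the actual KNP-based pipeline), a toy extractor that splits a sentence on explicit connectives might look like the following; the connective list and the lack of predicate normalization are simplifying assumptions.

```python
# Toy illustration of connective-based event-pair extraction.
# The real system uses the KNP dependency parser and hand-written rules;
# this sketch only shows the idea of splitting on explicit connectives.
# (The real pipeline also normalizes predicates to their base forms.)
import re

CONNECTIVES = {
    "ので": "Cause",       # "because"
    "のに": "Concession",  # "in spite of"
}

def extract_event_pair(sentence):
    """Return (former_event, latter_event, relation) or None."""
    for connective, relation in CONNECTIVES.items():
        match = re.search(connective + "、?", sentence)
        if match:
            former = sentence[: match.start()]
            latter = sentence[match.end():].rstrip("。")
            return former, latter, relation
    return None

print(extract_event_pair("重大な失敗を犯したので、仕事をクビになった。"))
# ('重大な失敗を犯した', '仕事をクビになった', 'Cause')
```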
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each subset was five times larger than AL. The results are shown in Table TABREF16.
Experiments ::: Dataset ::: ACP (ACP Corpus)
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
作業が楽だ。
The work is easy.
駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
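Consistent with the losses above, this is a mean-squared-error objective of the form:

$$\mathcal{L}_{\rm ACP} = \frac{1}{N_{\rm ACP}} \sum_{i=1}^{N_{\rm ACP}} \bigl(p(v_i) - R_i\bigr)^2$$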
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of events in the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
Experiments ::: Model Configurations
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification token ([CLS]). For the details of ${\rm Encoder}$, see Section SECREF30.
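A minimal PyTorch-style sketch of the BiGRU variant of $p(x)$ is shown below, assuming pre-tokenized word-id inputs; the dimensions follow the appendix, while the class and variable names are illustrative only.

```python
# Minimal sketch of the polarity function p(x) with a BiGRU encoder.
# Input: a batch of word-id sequences; output: scores in [-1, 1].
import torch
import torch.nn as nn

class BiGRUPolarity(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=256, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        # concatenation of the final forward and backward states -> scalar
        self.linear = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):                    # (batch, seq_len)
        embedded = self.embed(token_ids)              # (batch, seq_len, emb_dim)
        _, hidden = self.bigru(embedded)               # (num_layers * 2, batch, hidden_dim)
        final = torch.cat([hidden[-2], hidden[-1]], dim=-1)  # last layer, both directions
        return torch.tanh(self.linear(final)).squeeze(-1)    # (batch,) in [-1, 1]

model = BiGRUPolarity()
scores = model(torch.randint(0, 100_000, (4, 12)))  # four dummy events of length 12
print(scores.shape)  # torch.Size([4])
```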
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
Experiments ::: Results and Discussion
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or its reverse for negation) if the event's predicate was in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to the noise found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results in Table TABREF24 demonstrate, our method is effective when labeled data are scarce.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were directly comparable. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, which indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violated our assumption was of the form “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
(悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice, but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす” (drop) and differ only in their objects. The second event, “肩を落とす” (lit. drop one's shoulders), is an idiom that expresses a disappointed feeling. These examples demonstrate that our model correctly learned non-compositional expressions.
Conclusion
In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noise. Adding linguistically motivated filtering rules would help improve the performance.
Acknowledgments
We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation.
Appendices ::: Seed Lexicon ::: Positive Words
喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed).
Appendices ::: Seed Lexicon ::: Negative Words
怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry).
Appendices ::: Settings of Encoder ::: BiGRU
The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
Appendices ::: Settings of Encoder ::: BERT
We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. | a vocabulary of positive and negative predicates that helps determine the polarity score of an event |
753990d0b621d390ed58f20c4d9e4f065f0dc672 | 753990d0b621d390ed58f20c4d9e4f065f0dc672_1 | Q: What is the seed lexicon?
| seed lexicon consists of positive and negative predicates
9d578ddccc27dd849244d632dd0f6bf27348ad81 | 9d578ddccc27dd849244d632dd0f6bf27348ad81_0 | Q: What are the results?
| Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835 accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achieved 0.933 accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy.
Using a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO. |
02e4bf719b1a504e385c35c6186742e720bcb281 | 02e4bf719b1a504e385c35c6186742e720bcb281_0 | Q: How are relations used to propagate polarity?
Text: Introduction
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small.
Related Work
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
Proposed Method
Proposed Method ::: Polarity Function
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
Proposed Method ::: Discourse Relation-Based Event Pairs
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. These automatically assigned scores are used as reference scores during training.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
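Stated as code, the three-way split is a dispatch on seed-lexicon matches and the relation type. The sketch below is hypothetical in its data structures: `seed_polarity` is assumed to return $+1$/$-1$ when an event's predicate is in the seed lexicon and involves no negation, and None otherwise.

```python
# Hypothetical sketch of sorting extracted pairs into AL / CA / CO.
def classify_pair(former, latter, relation, seed_polarity):
    former_pol = seed_polarity(former)   # +1, -1, or None
    latter_pol = seed_polarity(latter)
    if latter_pol is not None and former_pol is None:
        # AL: propagate the latter's score to the former, flipping it for Concession.
        former_score = latter_pol if relation == "Cause" else -latter_pol
        return "AL", (former, former_score), (latter, latter_pol)
    if former_pol is None and latter_pol is None:
        return ("CA" if relation == "Cause" else "CO"), former, latter
    return None  # other cases fall outside the three types
```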
Proposed Method ::: Loss Functions
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
$\mathcal {L}_{\rm AL} = \frac{\lambda _{\rm AL}}{N_{\rm AL}} \sum _{i=1}^{N_{\rm AL}} \left\lbrace (p(x_{i1}) - r_{i1})^2 + (p(x_{i2}) - r_{i2})^2 \right\rbrace $
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
The difference is that the first term makes the scores of the two events distant from each other.
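The display equations for the CA and CO losses are not reproduced in this excerpt, so the following sketch should be read with care: the squared-error form and the $\lambda$ weights follow the text, but the exact shape of the term that keeps CA/CO scores away from zero is an assumption (a hinge penalty on $|p|$ is only one plausible choice).

```python
# Hedged sketch of the three pair losses. p1, p2, r1, r2 are tensors of
# predicted scores p(x_i1), p(x_i2) and reference scores r_i1, r_i2.
import torch


def loss_al(p1, p2, r1, r2, lam_al):
    # Squared error against the automatically assigned reference scores.
    return lam_al * ((p1 - r1) ** 2 + (p2 - r2) ** 2).mean()


def anti_shrink(p1, p2):
    # Assumed regularizer: pushes |p| toward 1 so scores do not collapse to 0.
    return torch.relu(1.0 - p1.abs()) + torch.relu(1.0 - p2.abs())


def loss_ca(p1, p2, lam_ca, mu):
    # Cause: pull the two scores together (same polarity).
    return (lam_ca * (p1 - p2) ** 2 + mu * anti_shrink(p1, p2)).mean()


def loss_co(p1, p2, lam_co, mu):
    # Concession: push the two scores apart (reversed polarity, p1 close to -p2).
    return (lam_co * (p1 + p2) ** 2 + mu * anti_shrink(p1, p2)).mean()
```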
Experiments
Experiments ::: Dataset
Experiments ::: Dataset ::: AL, CA, and CO
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause, and Concession (逆接) as Concession. Here is an example of event pair extraction.
. 重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
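The actual extraction relies on KNP's clause segmentation and discourse tags, and it additionally normalizes the predicates to their base forms (as in “犯す” and “なる” above). As a rough, parser-free illustration of the idea, splitting on an explicit connective already yields a (former, latter, relation) triple:

```python
# Simplified, parser-free illustration of connective-based pair extraction.
# The real pipeline uses KNP; this sketch neither segments clauses properly
# nor normalizes predicates to base form.
CONNECTIVES = {"ので": "Cause", "のに": "Concession"}  # because / in spite of


def extract_pair(sentence):
    for connective, relation in CONNECTIVES.items():
        marker = connective + "、"
        if marker in sentence:
            former, latter = sentence.split(marker, 1)
            return former, latter.rstrip("。"), relation
    return None


# extract_pair("重大な失敗を犯したので、仕事をクビになった。")
# -> ("重大な失敗を犯した", "仕事をクビになった", "Cause")
```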
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each subset was five times larger than AL. The results are shown in Table TABREF16.
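The balancing and subsampling step can be sketched as follows; the pair representation (the `latter_score` attribute) is hypothetical, and the original sampling procedure is not described beyond the ratios above.

```python
# Sketch of the sampling step: balance AL by the latter event's polarity,
# then downsample CA and CO to five times the size of AL.
import random


def balance_and_sample(al_pairs, ca_pairs, co_pairs, seed=0):
    rng = random.Random(seed)
    pos = [p for p in al_pairs if p.latter_score > 0]   # hypothetical attribute
    neg = [p for p in al_pairs if p.latter_score < 0]
    n = min(len(pos), len(neg))
    al = rng.sample(pos, n) + rng.sample(neg, n)
    ca = rng.sample(ca_pairs, min(len(ca_pairs), 5 * len(al)))
    co = rng.sample(co_pairs, min(len(co_pairs), 5 * len(al)))
    return al, ca, co
```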
Experiments ::: Dataset ::: ACP (ACP Corpus)
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
. 作業が楽だ。
The work is easy.
. 駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
$\mathcal {L}_{\rm ACP} = \frac{1}{N_{\rm ACP}} \sum _{i=1}^{N_{\rm ACP}} (p(v_i) - R_i)^2$
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of events in the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
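The supervised objective is again a mean squared error, and the evaluation rule above is a sign check; both fit in a few lines. This is only a sketch, with gold labels assumed to be $\pm 1$.

```python
# Supervised MSE on ACP events and the sign-based evaluation rule.
import torch


def loss_acp(pred_scores, ref_scores):
    return ((pred_scores - ref_scores) ** 2).mean()


def accuracy(pred_scores, gold_labels):
    # positive if p(x) > 0, negative otherwise; gold_labels are +1 / -1
    pred_labels = (pred_scores > 0).float() * 2 - 1
    return (pred_labels == gold_labels).float().mean().item()
```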
Experiments ::: Model Configurations
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30.
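With the Hugging Face transformers API, taking the final-layer [CLS] vector as the event representation looks roughly as follows. The checkpoint path is a placeholder (the text only says a Japanese BERT pretrained on Japanese Wikipedia was used), and the tokenizer call here stands in for the Juman++/BPE pipeline described in the appendix.

```python
# Sketch of the BERT variant of Encoder: the final hidden state of [CLS].
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/japanese-bert")   # placeholder
bert = AutoModel.from_pretrained("path/to/japanese-bert")


def encode_events(events):
    batch = tokenizer(events, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    outputs = bert(**batch)
    return outputs.last_hidden_state[:, 0]   # (batch, 768) [CLS] vectors
```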
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
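In other words, each configuration simply sums the relevant loss terms; a small sketch:

```python
# The four training configurations are sums of the corresponding loss terms.
def total_loss(config, losses):
    """losses: dict mapping 'AL', 'CA', 'CO', 'ACP' to scalar loss tensors."""
    terms = {
        "AL":           ["AL"],
        "AL+CA+CO":     ["AL", "CA", "CO"],
        "ACP":          ["ACP"],
        "ACP+AL+CA+CO": ["ACP", "AL", "CA", "CO"],
    }[config]
    return sum(losses[name] for name in terms)
```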
Experiments ::: Results and Discussion
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate was in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down when CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to the noise found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were directly comparable. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, which indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violated our assumption was of the form “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
. (悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice, but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす” (drop) and only the objects are different. The second event “肩を落とす” (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.
Conclusion
In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments showed that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noise. Adding linguistically motivated filtering rules would help improve the performance.
Acknowledgments
We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation.
Appendices ::: Seed Lexicon ::: Positive Words
喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed).
Appendices ::: Seed Lexicon ::: Negative Words
怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry).
Appendices ::: Settings of Encoder ::: BiGRU
The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
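Expressed with the classes sketched in the Polarity Function section, this configuration amounts to the following; the learning rate and momentum value are placeholders, as they are not reported in this excerpt.

```python
# BiGRU training setup matching the settings above (illustrative values where unreported).
import torch

model = PolarityModel(BiGRUEncoder(vocab_size=100_000, emb_dim=256,
                                   hidden_dim=256, num_layers=2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # Momentum SGD
# Train for 100 epochs with mini-batches of 1024 events and keep the snapshot
# that scores highest on the dev set.
```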
Appendices ::: Settings of Encoder ::: BERT
We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. | based on the relation between events, the suggested polarity of one event can determine the possible polarity of the other event |
02e4bf719b1a504e385c35c6186742e720bcb281 | 02e4bf719b1a504e385c35c6186742e720bcb281_1 | Q: How are relations used to propagate polarity? | cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity
44c4bd6decc86f1091b5fc0728873d9324cdde4e | 44c4bd6decc86f1091b5fc0728873d9324cdde4e_0 | Q: How big is the Japanese data? | 7000000 pairs of events were extracted from the Japanese Web corpus, 529850 pairs of events were extracted from the ACP corpus
44c4bd6decc86f1091b5fc0728873d9324cdde4e | 44c4bd6decc86f1091b5fc0728873d9324cdde4e_1 | Q: How big is the Japanese data?
Text: Introduction
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small.
Related Work
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
Proposed Method
Proposed Method ::: Polarity Function
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network with the following form:
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
Proposed Method ::: Discourse Relation-Based Event Pairs
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
Proposed Method ::: Loss Functions
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
The difference is that the first term makes the scores of the two events distant from each other.
Experiments
Experiments ::: Dataset
Experiments ::: Dataset ::: AL, CA, and CO
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.
. 重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 millions event pairs for AL, 41 millions for CA, and 6 millions for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that it was five times larger than AL. The results are shown in Table TABREF16.
Experiments ::: Dataset ::: ACP (ACP Corpus)
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
. 作業が楽だ。
The work is easy.
. 駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of the events of the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
Experiments ::: Model Configurations
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30.
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
Experiments ::: Results and Discussion
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to the noise found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were directly comparable. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, which indicated that the CA pairs were noisier than the CO pairs. A major type of CA pair that violated our assumption took the form of “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
(悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす” (drop) and only the objects are different. The second event “肩を落とす” (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.
Conclusion
In this paper, we proposed to use discourse relations to effectively propagate the polarities of affective events from seeds. Experiments showed that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis were shown to be useful, they nevertheless contain noise. Adding linguistically-motivated filtering rules would help improve the performance.
Acknowledgments
We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation.
Appendices ::: Seed Lexicon ::: Positive Words
喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed).
Appendices ::: Seed Lexicon ::: Negative Words
怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry).
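For reference, the full seed lexicon fits in two small Python sets; this is only a convenient representation of the word lists above, not the authors' data format.

```python
POSITIVE_SEEDS = {
    "喜ぶ", "嬉しい", "楽しい", "幸せ", "感動", "興奮", "懐かしい", "好き",
    "尊敬", "安心", "感心", "落ち着く", "満足", "癒される", "スッキリ",
}
NEGATIVE_SEEDS = {
    "怒る", "悲しい", "寂しい", "怖い", "不安", "恥ずかしい", "嫌", "落ち込む",
    "退屈", "絶望", "辛い", "困る", "憂鬱", "心配", "情けない",
}
assert len(POSITIVE_SEEDS) == len(NEGATIVE_SEEDS) == 15
```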
Appendices ::: Settings of Encoder ::: BiGRU
The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
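These settings can be summarized as a configuration sketch. The optimizer construction is PyTorch-style and illustrative; the learning rate and momentum values are assumptions, since they are not reported in the text.

```python
import torch

BIGRU_CONFIG = {
    "embedding_dim": 256,
    "vocab_size": 100_000,
    "num_hidden_layers": 2,
    "hidden_dim": 256,
    "mini_batch_size": 1024,
    "epochs": 100,
}

def make_optimizer(model, lr=0.01, momentum=0.9):
    # Momentum SGD with assumed lr/momentum values.
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
```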
Appendices ::: Settings of Encoder ::: BERT
We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. | The ACP corpus has around 700k events split into positive and negative polarity |
86abeff85f3db79cf87a8c993e5e5aa61226dc98 | 86abeff85f3db79cf87a8c993e5e5aa61226dc98_0 | Q: What are labels available in dataset for supervision? | negative, positive
c029deb7f99756d2669abad0a349d917428e9c12 | c029deb7f99756d2669abad0a349d917428e9c12_0 | Q: How big are improvements of supervised learning results trained on small labeled data enhanced with proposed approach compared to basic approach? | 3%
39f8db10d949c6b477fa4b51e7c184016505884f | 39f8db10d949c6b477fa4b51e7c184016505884f_0 | Q: How does their model learn using mostly raw data? | by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity
d0bc782961567dc1dd7e074b621a6d6be44bb5b4 | d0bc782961567dc1dd7e074b621a6d6be44bb5b4_0 | Q: How big is seed lexicon used for training?
Text: Introduction
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small.
Related Work
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
Proposed Method
Proposed Method ::: Polarity Function
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network of the form $p(x) = {\rm tanh}({\rm Linear}({\rm Encoder}(x)))$:
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
Proposed Method ::: Discourse Relation-Based Event Pairs
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
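As a rough illustration of how extracted pairs could be routed into AL, CA, and CO, consider the sketch below. The helper names and the two tiny lexicon sets are hypothetical stand-ins for the full seed lexicon, and the simplified predicate lookup omits the treatment of negation and other complex phenomena mentioned above.

# Tiny stand-ins for the 15 positive and 15 negative seed predicates.
POSITIVE_SEEDS = {"嬉しい", "喜ぶ"}
NEGATIVE_SEEDS = {"悲しい", "困る"}

def seed_score(predicate):
    if predicate in POSITIVE_SEEDS:
        return 1.0
    if predicate in NEGATIVE_SEEDS:
        return -1.0
    return None  # not covered by the seed lexicon

def route_pair(former_predicate, latter_predicate, relation):
    # relation is either "Cause" or "Concession"
    former, latter = seed_score(former_predicate), seed_score(latter_predicate)
    if latter is not None and former is None:
        # AL: propagate the latter's score to the former (reversed for Concession)
        former_ref = latter if relation == "Cause" else -latter
        return "AL", (former_ref, latter)
    if former is None and latter is None:
        return ("CA", None) if relation == "Cause" else ("CO", None)
    return None, None  # other configurations are not used in this sketch

# route_pair("なくす", "困る", "Cause")  ->  ("AL", (-1.0, -1.0))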
Proposed Method ::: Loss Functions
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
The difference is that the first term makes the scores of the two events distant from each other.
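The equations themselves are not reproduced in this text, so the following is only a plausible reconstruction of the three losses from their verbal descriptions; the exact terms, the squared-magnitude regularizer used to keep scores away from zero, and the default hyperparameter values are assumptions.

import torch

def loss_al(p1, p2, r1, r2, lam_al=1.0):
    # AL: pull predictions toward the automatically assigned reference scores.
    return lam_al * ((p1 - r1) ** 2 + (p2 - r2) ** 2).mean()

def loss_ca(p1, p2, lam_ca=1.0, mu=0.2):
    # CA: first term pulls the two scores together; second term keeps them away from zero.
    return lam_ca * ((p1 - p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()

def loss_co(p1, p2, lam_co=1.0, mu=0.2):
    # CO: first term pushes the two scores apart; second term keeps them away from zero.
    return -lam_co * ((p1 - p2) ** 2).mean() - mu * (p1 ** 2 + p2 ** 2).mean()

# Total loss for the AL+CA+CO setting:
# loss = loss_al(...) + loss_ca(...) + loss_co(...)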
Experiments
Experiments ::: Dataset
Experiments ::: Dataset ::: AL, CA, and CO
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.
. 重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.
Experiments ::: Dataset ::: ACP (ACP Corpus)
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
. 作業が楽だ。
The work is easy.
. 駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of the events of the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
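Since the equation is not shown in this extraction, here is one plausible reading of the supervised objective, written as a mean squared error against reference scores; mapping positive labels to $+1$ and negative labels to $-1$ is an assumption consistent with the range of $p(x)$.

import torch

def loss_acp(pred_scores, gold_labels):
    # gold_labels: 1 for positive events, 0 for negative events (assumed encoding)
    refs = gold_labels.float() * 2.0 - 1.0   # map {0, 1} -> {-1, +1}
    return ((pred_scores - refs) ** 2).mean()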
Experiments ::: Model Configurations
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30.
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
Experiments ::: Results and Discussion
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to the noise found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were comparable values. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
. (悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす" (drop) and only the objects are different. The second event “肩を落とす" (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.
Conclusion
In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noise. Adding linguistically motivated filtering rules would help improve the performance.
Acknowledgments
We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation.
Appendices ::: Seed Lexicon ::: Positive Words
喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed).
Appendices ::: Seed Lexicon ::: Negative Words
怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry).
Appendices ::: Settings of Encoder ::: BiGRU
The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
Appendices ::: Settings of Encoder ::: BERT
We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. | 30 words |
a592498ba2fac994cd6fad7372836f0adb37e22a | a592498ba2fac994cd6fad7372836f0adb37e22a_0 | Q: How large is raw corpus used for training?
Text: Introduction
Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive).
Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data.
In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ and $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. Conversely, the fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession), although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event.
We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small.
Related Work
Learning affective events is closely related to sentiment analysis. Whereas sentiment analysis usually focuses on the polarity of what are described (e.g., movies), we work on how people are typically affected by events. In sentiment analysis, much attention has been paid to compositionality. Word-level polarity BIBREF5, BIBREF6, BIBREF7 and the roles of negation and intensification BIBREF8, BIBREF6, BIBREF9 are among the most important topics. In contrast, we are more interested in recognizing the sentiment polarity of an event that pertains to commonsense knowledge (e.g., getting money and catching cold).
Label propagation from seed instances is a common approach to inducing sentiment polarities. While BIBREF5 and BIBREF10 worked on word- and phrase-level polarities, BIBREF0 dealt with event-level polarities. BIBREF5 and BIBREF10 linked instances using co-occurrence information and/or phrase-level coordinations (e.g., “$A$ and $B$” and “$A$ but $B$”). We shift our scope to event pairs that are more complex than phrase pairs, and consequently exploit discourse connectives as event-level counterparts of phrase-level conjunctions.
BIBREF0 constructed a network of events using word embedding-derived similarities. Compared with this method, our discourse relation-based linking of events is much simpler and more intuitive.
Some previous studies made use of document structure to understand the sentiment. BIBREF11 proposed a sentiment-specific pre-training strategy using unlabeled dialog data (tweet-reply pairs). BIBREF12 proposed a method of building a polarity-tagged corpus (ACP Corpus). They automatically gathered sentences that had positive or negative opinions utilizing HTML layout structures in addition to linguistic patterns. Our method depends only on raw texts and thus has wider applicability.
Proposed Method
Proposed Method ::: Polarity Function
Our goal is to learn the polarity function $p(x)$, which predicts the sentiment polarity score of an event $x$. We approximate $p(x)$ by a neural network of the form $p(x) = {\rm tanh}({\rm Linear}({\rm Encoder}(x)))$:
${\rm Encoder}$ outputs a vector representation of the event $x$. ${\rm Linear}$ is a fully-connected layer and transforms the representation into a scalar. ${\rm tanh}$ is the hyperbolic tangent and transforms the scalar into a score ranging from $-1$ to 1. In Section SECREF21, we consider two specific implementations of ${\rm Encoder}$.
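A minimal sketch of this composition in PyTorch could look as follows; the encoder object and its output dimension are placeholders for the BiGRU or BERT encoders described below, and the wrapper class name is hypothetical.

import torch
import torch.nn as nn

class PolarityFunction(nn.Module):
    # p(x) = tanh(Linear(Encoder(x)))
    def __init__(self, encoder, encoder_dim):
        super().__init__()
        self.encoder = encoder
        self.linear = nn.Linear(encoder_dim, 1)

    def forward(self, event):
        vec = self.encoder(event)             # vector representation of the event
        score = torch.tanh(self.linear(vec))  # scalar score in (-1, 1)
        return score.squeeze(-1)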
Proposed Method ::: Discourse Relation-Based Event Pairs
Our method requires a very small seed lexicon and a large raw corpus. We assume that we can automatically extract discourse-tagged event pairs, $(x_{i1}, x_{i2})$ ($i=1, \cdots $) from the raw corpus. We refer to $x_{i1}$ and $x_{i2}$ as former and latter events, respectively. As shown in Figure FIGREF1, we limit our scope to two discourse relations: Cause and Concession.
The seed lexicon consists of positive and negative predicates. If the predicate of an extracted event is in the seed lexicon and does not involve complex phenomena like negation, we assign the corresponding polarity score ($+1$ for positive events and $-1$ for negative events) to the event. We expect the model to automatically learn complex phenomena through label propagation. Based on the availability of scores and the types of discourse relations, we classify the extracted event pairs into the following three types.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: AL (Automatically Labeled Pairs)
The seed lexicon matches (1) the latter event but (2) not the former event, and (3) their discourse relation type is Cause or Concession. If the discourse relation type is Cause, the former event is given the same score as the latter. Likewise, if the discourse relation type is Concession, the former event is given the opposite of the latter's score. They are used as reference scores during training.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CA (Cause Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Cause. We assume the two events have the same polarities.
Proposed Method ::: Discourse Relation-Based Event Pairs ::: CO (Concession Pairs)
The seed lexicon matches neither the former nor the latter event, and their discourse relation type is Concession. We assume the two events have the reversed polarities.
Proposed Method ::: Loss Functions
Using AL, CA, and CO data, we optimize the parameters of the polarity function $p(x)$. We define a loss function for each of the three types of event pairs and sum up the multiple loss functions.
We use mean squared error to construct loss functions. For the AL data, the loss function is defined as:
where $x_{i1}$ and $x_{i2}$ are the $i$-th pair of the AL data. $r_{i1}$ and $r_{i2}$ are the automatically-assigned scores of $x_{i1}$ and $x_{i2}$, respectively. $N_{\rm AL}$ is the total number of AL pairs, and $\lambda _{\rm AL}$ is a hyperparameter.
For the CA data, the loss function is defined as:
$y_{i1}$ and $y_{i2}$ are the $i$-th pair of the CA pairs. $N_{\rm CA}$ is the total number of CA pairs. $\lambda _{\rm CA}$ and $\mu $ are hyperparameters. The first term makes the scores of the two events closer while the second term prevents the scores from shrinking to zero.
The loss function for the CO data is defined analogously:
The difference is that the first term makes the scores of the two events distant from each other.
Experiments
Experiments ::: Dataset
Experiments ::: Dataset ::: AL, CA, and CO
As a raw corpus, we used a Japanese web corpus that was compiled through the procedures proposed by BIBREF13. To extract event pairs tagged with discourse relations, we used the Japanese dependency parser KNP and in-house postprocessing scripts BIBREF14. KNP used hand-written rules to segment each sentence into what we conventionally called clauses (mostly consecutive text chunks), each of which contained one main predicate. KNP also identified the discourse relations of event pairs if explicit discourse connectives BIBREF4 such as “ので” (because) and “のに” (in spite of) were present. We treated Cause/Reason (原因・理由) and Condition (条件) in the original tagset BIBREF15 as Cause and Concession (逆接) as Concession, respectively. Here is an example of event pair extraction.
. 重大な失敗を犯したので、仕事をクビになった。
Because [I] made a serious mistake, [I] got fired.
From this sentence, we extracted the event pair of “重大な失敗を犯す” ([I] make a serious mistake) and “仕事をクビになる” ([I] get fired), and tagged it with Cause.
We constructed our seed lexicon consisting of 15 positive words and 15 negative words, as shown in Section SECREF27. From the corpus of about 100 million sentences, we obtained 1.4 million event pairs for AL, 41 million for CA, and 6 million for CO. We randomly selected subsets of AL event pairs such that positive and negative latter events were equal in size. We also sampled event pairs for each of CA and CO such that each was five times larger than AL. The results are shown in Table TABREF16.
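The balancing and subsampling step can be illustrated with the following sketch; the pair representation (a tuple whose last element is the latter event's score) and the variable names are assumptions for illustration only.

import random

def balance_al(al_pairs):
    # al_pairs: list of (former_event, latter_event, former_score, latter_score)
    positive = [p for p in al_pairs if p[3] > 0]
    negative = [p for p in al_pairs if p[3] < 0]
    n = min(len(positive), len(negative))
    return random.sample(positive, n) + random.sample(negative, n)

def subsample(pairs, target_size):
    return random.sample(pairs, min(target_size, len(pairs)))

# al = balance_al(al_pairs)             # equal numbers of positive and negative latter events
# ca = subsample(ca_pairs, 5 * len(al))
# co = subsample(co_pairs, 5 * len(al))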
Experiments ::: Dataset ::: ACP (ACP Corpus)
We used the latest version of the ACP Corpus BIBREF12 for evaluation. It was used for (semi-)supervised training as well. Extracted from Japanese websites using HTML layouts and linguistic patterns, the dataset covered various genres. For example, the following two sentences were labeled positive and negative, respectively:
. 作業が楽だ。
The work is easy.
. 駐車場がない。
There is no parking lot.
Although the ACP corpus was originally constructed in the context of sentiment analysis, we found that it could roughly be regarded as a collection of affective events. We parsed each sentence and extracted the last clause in it. The train/dev/test split of the data is shown in Table TABREF19.
The objective function for supervised training is:
where $v_i$ is the $i$-th event, $R_i$ is the reference score of $v_i$, and $N_{\rm ACP}$ is the number of the events of the ACP Corpus.
To optimize the hyperparameters, we used the dev set of the ACP Corpus. For the evaluation, we used the test set of the ACP Corpus. The model output was classified as positive if $p(x) > 0$ and negative if $p(x) \le 0$.
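The decision rule and the resulting accuracy computation are straightforward; the small sketch below assumes gold labels given as the strings "positive" and "negative", which is an assumed encoding.

def classify(score):
    # positive if p(x) > 0, negative otherwise
    return "positive" if score > 0 else "negative"

def accuracy(pred_scores, gold_labels):
    hits = sum(classify(s) == g for s, g in zip(pred_scores, gold_labels))
    return hits / len(gold_labels)

# accuracy([0.8, -0.3, 0.1], ["positive", "negative", "negative"])  # -> 0.666...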
Experiments ::: Model Configurations
As for ${\rm Encoder}$, we compared two types of neural networks: BiGRU and BERT. GRU BIBREF16 is a recurrent neural network sequence encoder. BiGRU reads an input sequence forward and backward and the output is the concatenation of the final forward and backward hidden states.
BERT BIBREF17 is a pre-trained multi-layer bidirectional Transformer BIBREF18 encoder. Its output is the final hidden state corresponding to the special classification tag ([CLS]). For the details of ${\rm Encoder}$, see Sections SECREF30.
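For illustration, extracting the [CLS] representation with the Hugging Face transformers library could look like the sketch below. The checkpoint name is only a publicly available stand-in, not the Juman++/BPE-based Japanese model used in the paper, so tokenization details differ.

import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"   # stand-in checkpoint, not the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)

def encode_cls(sentence):
    batch = tokenizer(sentence, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        output = bert(**batch)
    return output.last_hidden_state[:, 0]   # final hidden state of the [CLS] token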
We trained the model with the following four combinations of the datasets: AL, AL+CA+CO (two proposed models), ACP (supervised), and ACP+AL+CA+CO (semi-supervised). The corresponding objective functions were: $\mathcal {L}_{\rm AL}$, $\mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$, $\mathcal {L}_{\rm ACP}$, and $\mathcal {L}_{\rm ACP} + \mathcal {L}_{\rm AL} + \mathcal {L}_{\rm CA} + \mathcal {L}_{\rm CO}$.
Experiments ::: Results and Discussion
Table TABREF23 shows accuracy. As the Random baseline suggests, positive and negative labels were distributed evenly. The Random+Seed baseline made use of the seed lexicon and output the corresponding label (or the reverse of it for negation) if the event's predicate is in the seed lexicon. We can see that the seed lexicon itself had practically no impact on prediction.
The models in the top block performed considerably better than the random baselines. The performance gaps with their (semi-)supervised counterparts, shown in the middle block, were less than 7%. This demonstrates the effectiveness of discourse relation-based label propagation.
Comparing the model variants, we obtained the highest score with the BiGRU encoder trained with the AL+CA+CO dataset. BERT was competitive, but its performance went down if CA and CO were used in addition to AL. We conjecture that BERT was more sensitive to the noise found more frequently in CA and CO.
Contrary to our expectations, supervised models (ACP) outperformed semi-supervised models (ACP+AL+CA+CO). This suggests that the training set of 0.6 million events is sufficiently large for training the models. For comparison, we trained the models with a subset (6,000 events) of the ACP dataset. As the results shown in Table TABREF24 demonstrate, our method is effective when labeled data are small.
The result of hyperparameter optimization for the BiGRU encoder was as follows:
As the CA and CO pairs were equal in size (Table TABREF16), $\lambda _{\rm CA}$ and $\lambda _{\rm CO}$ were comparable values. $\lambda _{\rm CA}$ was about one-third of $\lambda _{\rm CO}$, and this indicated that the CA pairs were noisier than the CO pairs. A major type of CA pairs that violates our assumption was in the form of “$\textit {problem}_{\text{negative}}$ causes $\textit {solution}_{\text{positive}}$”:
. (悪いところがある, よくなるように努力する)
(there is a bad point, [I] try to improve [it])
The polarities of the two events were reversed in spite of the Cause relation, and this lowered the value of $\lambda _{\rm CA}$.
Some examples of model outputs are shown in Table TABREF26. The first two examples suggest that our model successfully learned negation without explicit supervision. Similarly, the next two examples differ only in voice but the model correctly recognized that they had opposite polarities. The last two examples share the predicate “落とす" (drop) and only the objects are different. The second event “肩を落とす" (lit. drop one's shoulders) is an idiom that expresses a disappointed feeling. The examples demonstrate that our model correctly learned non-compositional expressions.
Conclusion
In this paper, we proposed to use discourse relations to effectively propagate polarities of affective events from seeds. Experiments show that, even with a minimal amount of supervision, the proposed method performed well.
Although event pairs linked by discourse analysis are shown to be useful, they nevertheless contain noise. Adding linguistically motivated filtering rules would help improve the performance.
Acknowledgments
We thank Nobuhiro Kaji for providing the ACP Corpus and Hirokazu Kiyomaru and Yudai Kishimoto for their help in extracting event pairs. This work was partially supported by Yahoo! Japan Corporation.
Appendices ::: Seed Lexicon ::: Positive Words
喜ぶ (rejoice), 嬉しい (be glad), 楽しい (be pleasant), 幸せ (be happy), 感動 (be impressed), 興奮 (be excited), 懐かしい (feel nostalgic), 好き (like), 尊敬 (respect), 安心 (be relieved), 感心 (admire), 落ち着く (be calm), 満足 (be satisfied), 癒される (be healed), and スッキリ (be refreshed).
Appendices ::: Seed Lexicon ::: Negative Words
怒る (get angry), 悲しい (be sad), 寂しい (be lonely), 怖い (be scared), 不安 (feel anxious), 恥ずかしい (be embarrassed), 嫌 (hate), 落ち込む (feel down), 退屈 (be bored), 絶望 (feel hopeless), 辛い (have a hard time), 困る (have trouble), 憂鬱 (be depressed), 心配 (be worried), and 情けない (be sorry).
Appendices ::: Settings of Encoder ::: BiGRU
The dimension of the embedding layer was 256. The embedding layer was initialized with the word embeddings pretrained using the Web corpus. The input sentences were segmented into words by the morphological analyzer Juman++. The vocabulary size was 100,000. The number of hidden layers was 2. The dimension of hidden units was 256. The optimizer was Momentum SGD BIBREF21. The mini-batch size was 1024. We ran 100 epochs and selected the snapshot that achieved the highest score for the dev set.
Appendices ::: Settings of Encoder ::: BERT
We used a Japanese BERT model pretrained with Japanese Wikipedia. The input sentences were segmented into words by Juman++, and words were broken into subwords by applying BPE BIBREF20. The vocabulary size was 32,000. The maximum length of an input sequence was 128. The number of hidden layers was 12. The dimension of hidden units was 768. The number of self-attention heads was 12. The optimizer was Adam BIBREF19. The mini-batch size was 32. We ran 1 epoch. | 100 million sentences |
3a9d391d25cde8af3334ac62d478b36b30079d74 | 3a9d391d25cde8af3334ac62d478b36b30079d74_0 | Q: Does the paper report macro F1?
Text:
Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$
$^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics
$^{2}$NLLG, Department of Computer Science, Technische Universität Darmstadt
$^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart
{thomas.haider, w.m}@ae.mpg.de, eger@aiphes.tu-darmstadt.de
{roman.klinger, evgeny.kim}@ims.uni-stuttgart.de
Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\kappa =.70$, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion.
Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label
Introduction
Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far.
To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations.
For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception.
We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia.
In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. Previous work considers words BIBREF8, BIBREF9, sentences BIBREF10, BIBREF11, utterances BIBREF12, sentence triples BIBREF13, or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation.
The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods, with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. In particular, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given the time and cost overhead associated with an in-house annotation process (which usually involves training and close supervision of the annotators).
We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion.
Related Work ::: Poetry in Natural Language Processing
Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production.
Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words `Joy', `Anger', `Fear' and `Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories `Love', `Songs to Lisi', `Satire' and `Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic `emotional text forms' based on word unigrams.
Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48.
In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level).
Related Work ::: Emotion Annotation
Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36.
We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs.
Related Work ::: Emotion Classification
The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text.
In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 applied models from the classes of CNN, BiLSTM, and LSTM and compare them to linear classifiers (SVM and MaxEnt), where the BiLSTM shows best results with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42.
Data Collection
For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750).
Data Collection ::: German
The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43.
Data Collection ::: English
The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg.
Expert Annotation
In the following, we explain how we compiled and annotated three data subsets: (1) 48 German poems with gold annotation. These were originally annotated by three annotators. The labels were then aggregated with majority voting and based on discussions among the annotators. Finally, they were curated to only include one gold annotation. (2) The remaining 110 German poems, which are used to compute the agreement in Table TABREF20. (3) 64 English poems, which contain the raw annotations from two annotators.
We report the genesis of our annotation guidelines including the emotion classes. With the intention to provide a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation, while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and the compilation of a gold standard. The final overall cost of producing this expert annotated dataset amounts to approximately 3,500.
Expert Annotation ::: Workflow
The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, which we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified.
In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instruct the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect on the emotional complexity of poetry, we allow a maximum of two labels per line while avoiding heavy label fluctuations by encouraging annotators to reflect on their feelings to avoid `empty' annotations. Rather, they were advised to use fewer labels and more consistent annotation. This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations.
All subsequent batches (all except the first three) were only annotated by two out of the three initial annotators, coincidentally those two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”). This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\kappa $ in the first three batches to around 0.6–0.8 $\kappa $ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation.
Expert Annotation ::: Emotion Labels
We opt for measuring the reader perspective rather than the text surface or author's intent. To closer define and support conceptualizing our labels, we use particular `items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement with basic emotions BIBREF46, BIBREF47, where possible.
We started with a larger set of labels to then delete and substitute (tone down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflate labels if they show considerable confusion with each other. These iterative improvements particularly affected Confusion, Boredom and Other that were very infrequently annotated and had little agreement among annotators ($\kappa <.2$). For German, we also removed Nostalgia ($\kappa =.218$) after gold standard creation, but after consideration, added it back for English, then achieving agreement. Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus.
Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process.
Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We include the class Anger here, as this was found to be too strong in intensity.
Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term.
Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a `merely pleasing emotion'. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. Furthermore, our pilot annotations revealed, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other, and they often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency.
Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh.
Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.
Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general `being touched / moved'.
Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item.
Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness.
Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation.
Expert Annotation ::: Agreement
Table TABREF20 shows the Cohen's $\kappa $ agreement scores between our two expert annotators for each emotion category $e$, computed as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\kappa $ statistic to the two binary vectors $v_i^e$. In addition to the averaged $\kappa $, we report in Table TABREF21 the micro-F1 values between the multi-label annotations of both expert annotators, as well as the micro-F1 scores of a random baseline and of the majority emotion baseline (which labels each line as Beauty/Joy).
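Concretely, the per-emotion agreement and the micro-F1 between the two annotators can be computed as in the following sketch with scikit-learn; the toy vectors and the three-emotion matrices are illustrative, not taken from the corpus.

from sklearn.metrics import cohen_kappa_score, f1_score

# Binary vectors v_0^e and v_1^e for one emotion e (one entry per line).
v0 = [1, 0, 0, 1, 1, 0]
v1 = [1, 0, 1, 1, 0, 0]
kappa_e = cohen_kappa_score(v0, v1)

# Micro-F1 over the full multi-label matrices (lines x emotions),
# treating annotator 0 as reference and annotator 1 as prediction.
A0 = [[1, 0, 0], [0, 1, 0], [1, 1, 0]]
A1 = [[1, 0, 0], [0, 0, 1], [1, 1, 0]]
micro_f1 = f1_score(A0, A1, average="micro")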
We find that Cohen's $\kappa $ agreement ranges from .84 for Uneasiness in the English data and .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61), and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, where the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: one might argue that the beauty of beings and situations is beautiful precisely because it is not enduring, and therefore cannot be divorced from the sadness of its vanishing BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.
Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels. However, most lines and stanzas prefer one or two labels. German poems seem more emotionally diverse where more poems have three labels than two labels, while the majority of English poems have only two labels. This is however attributable to the generally shorter English texts.
Crowdsourcing Annotation
After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations.
We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We choose English due to the higher availability of English language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of surrounding context.
Crowdsourcing Annotation ::: Data and Setup
For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation.
The annotators are asked to answer the following questions for each instance.
Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes.
Question 2 (multiple-choice): Which additional emotions does the stanza evoke?
The answers to both questions correspond to the emotion labels we defined for our annotation, as described in Section SECREF19. We add an additional answer choice “None” to Question 2 to allow annotators to state that a stanza does not evoke any additional emotions.
Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74.
Crowdsourcing Annotation ::: Results
In the following, we determine the best aggregation strategy for the 10 annotators with bootstrap resampling. For instance, one could assign the label of a specific emotion to an instance if just one annotator picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two disjoint sets of 5 annotators out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\times $59 comparisons, bootstrap resampling). For each of these repetitions, we compare the agreement of the two groups of 5 annotators. Each group is assigned an adjudicated set of emotions, where an emotion is accepted if at least one annotator picks it, if at least two annotators pick it, and so on, up to requiring that all five pick it.
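A sketch of this resampling procedure is given below; it is illustrative only, assumes the raw crowd labels are stored as one label set per worker and stanza, and pools the agreement over emotions, whereas Table TABREF27 reports per-emotion scores.

```python
import random
from sklearn.metrics import cohen_kappa_score

def adjudicate(votes, threshold):
    """votes: label sets of one group of 5 annotators for a single stanza."""
    counts = {}
    for labels in votes:
        for lab in labels:
            counts[lab] = counts.get(lab, 0) + 1
    return {lab for lab, c in counts.items() if c >= threshold}

def bootstrap_agreement(stanza_votes, emotions, threshold, reps=1000, seed=0):
    """stanza_votes: {stanza_id: list of 10 label sets, one per crowd worker}."""
    rng = random.Random(seed)
    kappas = []
    for _ in range(reps):
        a, b = [], []                                  # binary decisions per (stanza, emotion)
        for votes in stanza_votes.values():
            shuffled = rng.sample(votes, len(votes))   # random split into two groups of 5
            adj1 = adjudicate(shuffled[:5], threshold)
            adj2 = adjudicate(shuffled[5:], threshold)
            for e in emotions:
                a.append(int(e in adj1))
                b.append(int(e in adj2))
        kappas.append(cohen_kappa_score(a, b))
    return sum(kappas) / len(kappas)
```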
We show the results in Table TABREF27. The $\kappa $ scores show the average agreement between the two groups of five annotators, when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times).
We further show in the same table the average numbers of labels resulting from each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion). The drop in label counts as the threshold increases is comparably drastic, with on average 18 labels per class. Overall, the best average $\kappa $ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on the more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense).
We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18. Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness.
On average, each crowd annotator uses two emotion labels per stanza (56% of cases); the annotators use one label in only 36% of the cases, and three and four labels in 6% and 1% of the cases, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning the frequency distribution of emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the `best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. We now further compare crowds and experts in terms of whether crowds can replicate expert annotations also on the finer stanza level (rather than only on a distributional level).
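The distributional comparison amounts to a rank correlation over per-emotion counts; the counts in the following sketch are made-up placeholders, not our data.

```python
from scipy.stats import spearmanr

emotions = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality", "Suspense",
            "Awe/Sublime", "Humor", "Annoyance", "Nostalgia"]
# placeholder frequencies for illustration only
expert_counts = {"Beauty/Joy": 30, "Sadness": 28, "Uneasiness": 12, "Vitality": 10,
                 "Suspense": 8, "Awe/Sublime": 7, "Humor": 5, "Annoyance": 4,
                 "Nostalgia": 2}
crowd_counts = {"Beauty/Joy": 25, "Sadness": 35, "Uneasiness": 15, "Vitality": 6,
                "Suspense": 12, "Awe/Sublime": 9, "Humor": 6, "Annoyance": 5,
                "Nostalgia": 3}
rho, p = spearmanr([expert_counts[e] for e in emotions],
                   [crowd_counts[e] for e in emotions])
print(f"Spearman rho = {rho:.2f}")
```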
Crowdsourcing Annotation ::: Comparing Experts with Crowds
To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between the experts and crowd groups of increasing size. For each stanza instance $s$, we pick $N$ crowd workers, where $N\in \lbrace 4,6,8,10\rbrace $, then pick their majority emotion for $s$, and additionally pick their second-ranked majority emotion if at least $\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level and then apply the same selection strategy. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\kappa $ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd workers.
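A sketch of this selection step, assuming each worker's labels for a stanza are available as a set, is:

```python
from collections import Counter

def select_emotions(worker_labels, n):
    """worker_labels: list of label sets, one per crowd worker; uses the first n."""
    counts = Counter(lab for labels in worker_labels[:n] for lab in labels)
    ranked = counts.most_common()
    selected = [ranked[0][0]]                          # majority emotion
    if len(ranked) > 1 and ranked[1][1] >= n / 2 - 1:
        selected.append(ranked[1][0])                  # second-ranked, if chosen often enough
    return selected

# toy example with six workers
workers = [{"Sadness"}, {"Sadness", "Nostalgia"}, {"Beauty/Joy"},
           {"Sadness"}, {"Nostalgia"}, {"Sadness", "Uneasiness"}]
print(select_emotions(workers, n=6))   # ['Sadness', 'Nostalgia']
```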
In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is a roughly steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would lead to still better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for Sadness grows from $.47$ $\kappa $ with $N=4$ to $.65$ $\kappa $ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, and Humor ($\kappa $ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, and Nostalgia ($\kappa $ < 0.2).
By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators.
In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first glance, there appears to be a good signal in their aggregated annotations, which helps to approximate the expert annotations. The average $\kappa $ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70).
Modeling
To estimate the difficulty of automatically classifying our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia occurs only 16 times in the German data (the gold standard), and only as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. We therefore omit it, leaving us with eight proper labels.
We use the code and the pre-trained BERT models of FARM, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), and the german-dbmdz-uncased model (Dbmdz), and we additionally tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\textsc {Tuned}}$).
We split the randomized German dataset so that each label appears at least 10 times in the validation set (63 instances, 113 labels) and at least 10 times in the test set (56 instances, 108 labels), and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with cross-entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results.
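A minimal sketch of such a multi-label setup is shown below. It uses the Hugging Face transformers library rather than the FARM code we used; the checkpoint name, the placeholder training example, and the learning rate are assumptions, while the label set, number of epochs, and batch size follow the setup described above.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality",
          "Suspense", "Awe/Sublime", "Humor", "Annoyance"]   # Nostalgia omitted
CHECKPOINT = "dbmdz/bert-base-german-uncased"                # assumed checkpoint name

tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",   # BCE-with-logits loss over the 8 labels
)

def encode(stanzas, label_sets):
    enc = tok(stanzas, truncation=True, padding=True, return_tensors="pt")
    y = torch.tensor([[1.0 if l in s else 0.0 for l in LABELS] for s in label_sets])
    return list(zip(enc["input_ids"], enc["attention_mask"], y))

# placeholder training data, not from the corpus
train = encode(["Ein Beispielvers, noch ein Beispielvers."], [{"Sadness"}])
loader = DataLoader(train, batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(10):                          # 10 epochs, batch size 8
    for input_ids, attention_mask, labels in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```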
We find that the multilingual model cannot handle the infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, augmenting the dataset with the English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both the validation and the test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.
The BASE and BASE$_{\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of its restricted vocabulary (30k tokens); we found that tuning on poetry does not show obvious improvements. Lastly, models trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models).
Concluding Remarks
In this paper, we presented a dataset of German and English poetry annotated with readers' emotional responses. We argued that the basic emotions proposed by psychologists (such as Ekman and Plutchik), which are often used in emotion analysis from text, are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement (in terms of $\kappa $ score) on the final dataset.
The task of collecting reader-perceived emotion responses to poetry in a crowdsourcing setting is not straightforward. In contrast to the expert annotators, who were closely supervised and reflected upon the task, annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers and to develop an improved training strategy suited to the crowdsourcing environment.
The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods).
Further, though our modeling experiments are still rudimentary, we propose that this data set can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 or with the help of hierarchical sequence classification approaches.
Acknowledgements
A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback. For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1.
Appendix
We illustrate two examples of our German gold standard annotation, one poem each by Friedrich Hölderlin and Georg Trakl, as well as an English poem by Walt Whitman. Hölderlin's text stands out because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex, with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. The English annotation was unified by us due to space constraints. For the full annotation please see https://github.com/tnhaider/poetry-emotion/
Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)
Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)
Appendix ::: Walt Whitman: O Captain! My Captain! (1865)
8d8300d88283c73424c8f301ad9fdd733845eb47 | 8d8300d88283c73424c8f301ad9fdd733845eb47_0 | Q: How is the annotation experiment evaluated?
Text:
1.1em
:::
1.1.1em
::: :::
1.1.1.1em
Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$
$^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics
$^{2}$NLLG, Department of Computer Science, Technische Universitat Darmstadt
$^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart
{thomas.haider, w.m}@ae.mpg.de, eger@aiphes.tu-darmstadt.de
{roman.klinger, evgeny.kim}@ims.uni-stuttgart.de
Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\kappa =.70$, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion.
Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label
Introduction
Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far.
To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations.
For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception.
We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia.
In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. Previous work considers words BIBREF8, BIBREF9, sentences BIBREF10, BIBREF11, utterances BIBREF12, sentence triples BIBREF13, or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation.
The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods, with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. In particular, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given the time and cost overhead associated with an in-house annotation process (which usually involves training and close supervision of the annotators).
We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion.
Related Work ::: Poetry in Natural Language Processing
Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production.
Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words `Joy', `Anger', `Fear' and `Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories `Love', `Songs to Lisi', `Satire' and `Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic `emotional text forms' based on word unigrams.
Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48.
In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level).
Related Work ::: Emotion Annotation
Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36.
We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs.
Related Work ::: Emotion Classification
The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text.
In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 applied models from the classes of CNN, BiLSTM, and LSTM and compare them to linear classifiers (SVM and MaxEnt), where the BiLSTM shows best results with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42.
Data Collection
For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750).
Data Collection ::: German
The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43.
Data Collection ::: English
The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg.
Expert Annotation
In the following, we explain how we compiled and annotated three data subsets, namely, (1) 48 German poems with gold annotation: these were originally annotated by three annotators, the labels were aggregated with majority voting and discussions among the annotators, and the result was curated to include only one gold annotation; (2) the remaining 110 German poems, which are used to compute the agreement in Table TABREF20; and (3) 64 English poems. The latter two subsets contain the raw annotations from two annotators.
We report the genesis of our annotation guidelines, including the emotion classes. With the intention to provide a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and compiling a gold standard. The final overall cost of producing this expert-annotated dataset amounts to approximately 3,500.
Expert Annotation ::: Workflow
The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, which we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified.
In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instruct the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect the emotional complexity of poetry, we allow a maximum of two labels per line. To avoid heavy label fluctuation, we encourage annotators to reflect on their feelings rather than producing `empty' annotations, and advise them to use fewer labels and to annotate more consistently. This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations.
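For concreteness, the hierarchical, line-level multi-label scheme can be represented as in the following sketch. The class and field names are illustrative assumptions, not taken from the project's code base; only the two-labels-per-line constraint is from the text.

```python
# Sketch of a container for the hierarchical annotation scheme (illustrative only).
from dataclasses import dataclass, field
from typing import List

MAX_LABELS_PER_LINE = 2  # constraint described above

@dataclass
class Line:
    text: str
    emotions: List[str] = field(default_factory=list)

    def add_emotion(self, label: str) -> None:
        # enforce the at-most-two-labels-per-line rule
        if len(self.emotions) >= MAX_LABELS_PER_LINE:
            raise ValueError("at most two emotion labels per line")
        self.emotions.append(label)

@dataclass
class Stanza:
    lines: List[Line] = field(default_factory=list)

@dataclass
class Poem:
    title: str
    stanzas: List[Stanza] = field(default_factory=list)
```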
All subsequent batches (all except the first three) were only annotated by two out of the three initial annotators, coincidentally those two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”). This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\kappa $ in the first three batches to around 0.6–0.8 $\kappa $ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation.
Expert Annotation ::: Emotion Labels
We opt for measuring the reader perspective rather than the text surface or author's intent. To define our labels more precisely and to support their conceptualization, we use particular `items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring, who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement them with basic emotions BIBREF46, BIBREF47, where possible.
We started with a larger set of labels and then deleted or substituted (toned down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflated labels if they showed considerable confusion with each other. These iterative improvements particularly affected Confusion, Boredom and Other, which were very infrequently annotated and had little agreement among annotators ($\kappa <.2$). For German, we also removed Nostalgia ($\kappa =.218$) after gold standard creation but, after consideration, added it back for English, where agreement was then reached. Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus.
Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process.
Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We include the class Anger here, as this was found to be too strong in intensity.
Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term.
Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a `merely pleasing emotion'. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. Furthermore, our pilot annotations revealed that, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other and often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency.
Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh.
Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.
Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general `being touched / moved'.
Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item.
Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness.
Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation.
Expert Annotation ::: Agreement
Table TABREF20 shows the Cohen's $\kappa $ agreement scores between our two expert annotators for each emotion category $e$, computed as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\kappa $ statistic to the two binary vectors $v_i^e$. In addition to the averaged $\kappa $, Table TABREF21 reports micro-F1 values between the multi-label annotations of both expert annotators, as well as the micro-F1 scores of a random baseline and of the majority emotion baseline (which labels each line as Beauty/Joy).
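This computation can be reproduced with standard tooling. The following is a minimal sketch, assuming each annotator's annotation is given as a list over lines with one set of emotion labels per line; it is illustrative only and does not reproduce the exact evaluation code.

```python
# Illustrative sketch of the per-emotion kappa and micro-F1 computation.
from sklearn.metrics import cohen_kappa_score, f1_score
from sklearn.preprocessing import MultiLabelBinarizer

EMOTIONS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality", "Suspense",
            "Awe/Sublime", "Humor", "Annoyance", "Nostalgia"]

def per_emotion_kappa(anno_a, anno_b):
    """Build a binary vector per emotion (label present on a line or not), then Cohen's kappa."""
    scores = {}
    for e in EMOTIONS:
        v_a = [int(e in labels) for labels in anno_a]
        v_b = [int(e in labels) for labels in anno_b]
        # note: kappa is undefined if a vector is constant for both annotators
        scores[e] = cohen_kappa_score(v_a, v_b)
    return scores

def micro_f1(anno_a, anno_b):
    """Micro-F1 between the two multi-label annotations, treating annotator A as reference."""
    mlb = MultiLabelBinarizer(classes=EMOTIONS)
    y_a = mlb.fit_transform(anno_a)
    y_b = mlb.transform(anno_b)
    return f1_score(y_a, y_b, average="micro")
```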
We find that Cohen's $\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, and the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: one might argue that the beauty of beings and situations is only beautiful because it is not enduring, and therefore cannot be divorced from the sadness of its vanishing BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.
Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels. However, most lines and stanzas receive only one or two labels. German poems seem more emotionally diverse, with more poems carrying three labels than two, while the majority of English poems have only two labels. This is, however, attributable to the generally shorter English texts.
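The poem- and stanza-level counts reported here can be obtained by taking the union of the line-level label sets; the following is a sketch of such an aggregation, under the assumption that a poem is stored as a list of stanzas and a stanza as a list of per-line label sets.

```python
# Sketch of aggregating line-level labels to stanza and poem level (illustrative only).
def distinct_labels(label_sets):
    distinct = set()
    for labels in label_sets:
        distinct |= set(labels)
    return distinct

def label_counts(poem):
    """Number of distinct emotions per poem and per stanza."""
    stanza_counts = [len(distinct_labels(stanza)) for stanza in poem]
    poem_count = len(distinct_labels(line for stanza in poem for line in stanza))
    return poem_count, stanza_counts
```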
Crowdsourcing Annotation
After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations.
We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We choose English due to the higher availability of English language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of surrounding context.
Crowdsourcing Annotation ::: Data and Setup
For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation.
The annotators are asked to answer the following questions for each instance.
Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes.
Question 2 (multiple-choice): Which additional emotions does the stanza evoke?
The answers to both questions correspond to the emotion labels we defined to use in our annotation, as described in Section SECREF19. We add an additional answer choice “None” to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions.
Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74.
Crowdsourcing Annotation ::: Results
In the following, we determine the best aggregation strategy regarding the 10 annotators with bootstrap resampling. For instance, one could assign the label of a specific emotion to an instance if just one annotator picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\times $59 times, bootstrap resampling). For each of these repetitions, we compare the agreement of these two groups of 5 annotators. Each group is assigned an adjudicated emotion, which is accepted if at least one annotator picks it, at least two annotators pick it, and so on, up to all five picking it.
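A sketch of this bootstrap procedure is given below. It is a reconstruction from the description above rather than the original experiment code, and it assumes that `votes` holds, for each stanza, the ten annotators' label sets; repetitions in which a label is constant across all stanzas yield an undefined kappa and would need special handling.

```python
# Sketch of the bootstrap aggregation experiment (reconstruction, not the original code).
# votes[s][a] is the set of emotions chosen by annotator a for stanza s (10 annotators per stanza).
import random
from sklearn.metrics import cohen_kappa_score

def adjudicate(group_votes, emotion, threshold):
    """Accept `emotion` for a group if at least `threshold` of its annotators chose it."""
    return int(sum(emotion in v for v in group_votes) >= threshold)

def bootstrap_kappa(votes, emotion, threshold, repetitions=1000, seed=0):
    rng = random.Random(seed)
    kappas = []
    for _ in range(repetitions):
        labels_a, labels_b = [], []
        for stanza_votes in votes:
            # split the 10 annotators into two disjoint groups of 5
            annotators = list(range(len(stanza_votes)))
            rng.shuffle(annotators)
            group_a = [stanza_votes[i] for i in annotators[:5]]
            group_b = [stanza_votes[i] for i in annotators[5:10]]
            labels_a.append(adjudicate(group_a, emotion, threshold))
            labels_b.append(adjudicate(group_b, emotion, threshold))
        kappas.append(cohen_kappa_score(labels_a, labels_b))
    return sum(kappas) / len(kappas)
```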
We show the results in Table TABREF27. The $\kappa $ scores show the average agreement between the two groups of five annotators, when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times).
We further show in the same table the average numbers of labels from each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion). The drop in label counts is comparably drastic, with an average of 18 labels per class. Overall, the best average $\kappa $ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on the more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense).
We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18. Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness.
On average, each crowd annotator uses two emotion labels per stanza (56% of cases); in only 36% of the cases do the annotators use one label, and in 6% and 1% of the cases three and four labels, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning the frequency distribution of emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the `best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. Now, we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level (rather than only on a distributional level).
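Before moving on, note that this distributional comparison boils down to a rank correlation over the per-emotion label frequencies. A minimal sketch follows; the variable names are hypothetical and the frequency dictionaries are assumed to map emotion labels to counts.

```python
# Sketch: Spearman rank correlation between expert and crowd label frequencies (illustrative only).
from scipy.stats import spearmanr

def frequency_correlation(expert_freq, crowd_freq, emotions):
    expert = [expert_freq.get(e, 0) for e in emotions]
    crowd = [crowd_freq.get(e, 0) for e in emotions]
    rho, p_value = spearmanr(expert, crowd)
    return rho, p_value
```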
Crowdsourcing Annotation ::: Comparing Experts with Crowds
To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between experts and an increasing group size from the crowd. For each stanza instance $s$, we pick $N$ crowd workers, where $N\in \lbrace 4,6,8,10\rbrace $, then pick their majority emotion for $s$, and additionally pick their second ranked majority emotion if at least $\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level, then perform the same strategy for selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\kappa $ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd-workers.
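The selection strategy for a single stanza can be sketched as follows. This is a reconstruction from the description above, not the original code, and ties in the ranking are broken arbitrarily here.

```python
# Sketch of the majority-plus-runner-up selection described above (reconstruction).
# worker_votes is a list of label sets, one per crowd worker, for a single stanza.
from collections import Counter

def select_emotions(worker_votes, n):
    """Majority emotion of n workers, plus the runner-up if at least n/2 - 1 workers chose it."""
    counts = Counter(label for votes in worker_votes[:n] for label in votes)
    ranked = [label for label, _ in counts.most_common()]
    selected = ranked[:1]
    if len(ranked) > 1 and counts[ranked[1]] >= n / 2 - 1:
        selected.append(ranked[1])
    return selected
```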
In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for this emotion grows from $.47$ $\kappa $ with $N=4$ to $.65$ $\kappa $ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, Humor ($\kappa $ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, Nostalgia ($\kappa $ < 0.2).
By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators.
In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first view, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations to a certain degree. The average $\kappa $ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70).
Modeling
To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia is only available 16 times in the German data (the gold standard) as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. Therefore we omit it, leaving us with eight proper labels.
We use the code and the pre-trained BERT models of FARM, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), and the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\textsc {Tuned}}$).
We split the randomized German dataset so that each label occurs at least 10 times in the validation set (63 instances, 113 labels) and at least 10 times in the test set (56 instances, 108 labels), and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with cross-entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results.
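For orientation, the following is a minimal sketch of such a multi-label stanza classifier. It does not reproduce the FARM setup used for the experiments; it relies on Hugging Face transformers as an illustrative stand-in, the model identifier is an assumption for the Dbmdz model, and the learning rate is a typical default rather than a value reported here (epochs and batch size follow the text).

```python
# Minimal multi-label fine-tuning sketch (illustrative only; not the FARM setup used above).
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dbmdz/bert-base-german-uncased"   # assumed identifier for the Dbmdz model
LABELS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality",
          "Suspense", "Awe/Sublime", "Humor", "Annoyance"]   # Nostalgia omitted, as above

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS), problem_type="multi_label_classification")

def encode(stanzas, label_sets):
    """Tokenize stanzas and build multi-hot float targets."""
    enc = tokenizer(stanzas, truncation=True, padding=True, return_tensors="pt")
    targets = torch.tensor([[1.0 if l in s else 0.0 for l in LABELS] for s in label_sets])
    return list(zip(enc["input_ids"], enc["attention_mask"], targets))

def train(data, epochs=10, batch_size=8, lr=2e-5):
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
            out.loss.backward()   # binary cross-entropy with logits for multi-label targets
            optimizer.step()
            optimizer.zero_grad()
```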
We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.
The BASE and BASE$_{\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not yield obvious improvements. Lastly, we find that models trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models).
Concluding Remarks
In this paper, we presented a dataset of German and English poetry annotated with readers' emotional responses. We argued that basic emotions as proposed by psychologists (such as Ekman and Plutchik), which are often used in emotion analysis from text, are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement—in terms of $\kappa $ score—on the final dataset.
The task of collecting reader-perceived emotion responses to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers and to develop an improved training strategy suited to the crowdsourcing environment.
The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods).
Further, though our modeling experiments are still rudimentary, we propose that this dataset can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 or with the help of hierarchical sequence classification approaches.
Acknowledgements
A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback. For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1.
Appendix
We illustrate two examples of our German gold standard annotation, a poem each by Friedrich Hölderlin and Georg Trakl, and an English poem by Walt Whitman. Hölderlin's text stands out because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex, with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. We unified the English annotation for reasons of space. For the full annotation, please see https://github.com/tnhaider/poetry-emotion/
Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)
Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)
Appendix ::: Walt Whitman: O Captain! My Captain! (1865) | confusion matrices of labels between annotators |
48b12eb53e2d507343f19b8a667696a39b719807 | 48b12eb53e2d507343f19b8a667696a39b719807_0 | Q: What are the aesthetic emotions formalized?
Text:
Thomas Haider$^{1,3}$, Steffen Eger$^2$, Evgeny Kim$^3$, Roman Klinger$^3$, Winfried Menninghaus$^1$
$^{1}$Department of Language and Literature, Max Planck Institute for Empirical Aesthetics
$^{2}$NLLG, Department of Computer Science, Technische Universitat Darmstadt
$^{3}$Institut für Maschinelle Sprachverarbeitung, University of Stuttgart
{thomas.haider, w.m}@ae.mpg.de, eger@aiphes.tu-darmstadt.de
{roman.klinger, evgeny.kim}@ims.uni-stuttgart.de
Most approaches to emotion analysis regarding social media, literature, news, and other domains focus exclusively on basic emotion categories as defined by Ekman or Plutchik. However, art (such as literature) enables engagement in a broader range of more complex and subtle emotions that have been shown to also include mixed emotional responses. We consider emotions as they are elicited in the reader, rather than what is expressed in the text or intended by the author. Thus, we conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within context. We evaluate this novel setting in an annotation experiment both with carefully trained experts and via crowdsourcing. Our annotation with experts leads to an acceptable agreement of $\kappa =.70$, resulting in a consistent dataset for future large scale analysis. Finally, we conduct first emotion classification experiments based on BERT, showing that identifying aesthetic emotions is challenging in our data, with up to .52 F1-micro on the German subset. Data and resources are available at https://github.com/tnhaider/poetry-emotion.
Emotion, Aesthetic Emotions, Literature, Poetry, Annotation, Corpora, Emotion Recognition, Multi-Label
Introduction
Emotions are central to human experience, creativity and behavior. Models of affect and emotion, both in psychology and natural language processing, commonly operate on predefined categories, designated either by continuous scales of, e.g., Valence, Arousal and Dominance BIBREF0 or discrete emotion labels (which can also vary in intensity). Discrete sets of emotions often have been motivated by theories of basic emotions, as proposed by Ekman1992—Anger, Fear, Joy, Disgust, Surprise, Sadness—and Plutchik1991, who added Trust and Anticipation. These categories are likely to have evolved as they motivate behavior that is directly relevant for survival. However, art reception typically presupposes a situation of safety and therefore offers special opportunities to engage in a broader range of more complex and subtle emotions. These differences between real-life and art contexts have not been considered in natural language processing work so far.
To emotionally move readers is considered a prime goal of literature since Latin antiquity BIBREF1, BIBREF2, BIBREF3. Deeply moved readers shed tears or get chills and goosebumps even in lab settings BIBREF4. In cases like these, the emotional response actually implies an aesthetic evaluation: narratives that have the capacity to move readers are evaluated as good and powerful texts for this very reason. Similarly, feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” BIBREF2. Contrary to the negativity bias of classical emotion catalogues, emotion terms used for aesthetic evaluation purposes include far more positive than negative emotions. At the same time, many overall positive aesthetic emotions encompass negative or mixed emotional ingredients BIBREF2, e.g., feelings of suspense include both hopeful and fearful anticipations.
For these reasons, we argue that the analysis of literature (with a focus on poetry) should rely on specifically selected emotion items rather than on the narrow range of basic emotions only. Our selection is based on previous research on this issue in psychological studies on art reception and, specifically, on poetry. For instance, knoop2016mapping found that Beauty is a major factor in poetry reception.
We primarily adopt and adapt emotion terms that schindler2017measuring have identified as aesthetic emotions in their study on how to measure and categorize such particular affective states. Further, we consider the aspect that, when selecting specific emotion labels, the perspective of annotators plays a major role. Whether emotions are elicited in the reader, expressed in the text, or intended by the author largely changes the permissible labels. For example, feelings of Disgust or Love might be intended or expressed in the text, but the text might still fail to elicit corresponding feelings as these concepts presume a strong reaction in the reader. Our focus here was on the actual emotional experience of the readers rather than on hypothetical intentions of authors. We opted for this reader perspective based on previous research in NLP BIBREF5, BIBREF6 and work in empirical aesthetics BIBREF7, that specifically measured the reception of poetry. Our final set of emotion labels consists of Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia.
In addition to selecting an adapted set of emotions, the annotation of poetry brings further challenges, one of which is the choice of the appropriate unit of annotation. Previous work considers words BIBREF8, BIBREF9, sentences BIBREF10, BIBREF11, utterances BIBREF12, sentence triples BIBREF13, or paragraphs BIBREF14 as the units of annotation. For poetry, reasonable units follow the logical document structure of poems, i.e., verse (line), stanza, and, owing to its relative shortness, the complete text. The more coarse-grained the unit, the more difficult the annotation is likely to be, but the more it may also enable the annotation of emotions in context. We find that annotating fine-grained units (lines) that are hierarchically ordered within a larger context (stanza, poem) caters to the specific structure of poems, where emotions are regularly mixed and are more interpretable within the whole poem. Consequently, we allow the mixing of emotions already at line level through multi-label annotation.
The remainder of this paper includes (1) a report of the annotation process that takes these challenges into consideration, (2) a description of our annotated corpora, and (3) an implementation of baseline models for the novel task of aesthetic emotion annotation in poetry. In a first study, the annotators work on the annotations in a closely supervised fashion, carefully reading each verse, stanza, and poem. In a second study, the annotations are performed via crowdsourcing within relatively short time periods, with annotators not seeing the entire poem while reading the stanza. Using these two settings, we aim at obtaining a better understanding of the advantages and disadvantages of an expert vs. crowdsourcing setting in this novel annotation task. In particular, we are interested in estimating the potential of a crowdsourcing environment for the task of self-perceived emotion annotation in poetry, given the time and cost overhead associated with an in-house annotation process (which usually involves training and close supervision of the annotators).
We provide the final datasets of German and English language poems annotated with reader emotions on verse level at https://github.com/tnhaider/poetry-emotion.
Related Work ::: Poetry in Natural Language Processing
Natural language understanding research on poetry has investigated stylistic variation BIBREF15, BIBREF16, BIBREF17, with a focus on broadly accepted formal features such as meter BIBREF18, BIBREF19, BIBREF20 and rhyme BIBREF21, BIBREF22, as well as enjambement BIBREF23, BIBREF24 and metaphor BIBREF25, BIBREF26. Recent work has also explored the relationship of poetry and prose, mainly on a syntactic level BIBREF27, BIBREF28. Furthermore, poetry also lends itself well to semantic (change) analysis BIBREF29, BIBREF30, as linguistic invention BIBREF31, BIBREF32 and succinctness BIBREF33 are at the core of poetic production.
Corpus-based analysis of emotions in poetry has been considered, but there is no work on German, and little on English. kao2015computational analyze English poems with word associations from the Harvard Inquirer and LIWC, within the categories positive/negative outlook, positive/negative emotion and phys./psych. well-being. hou-frank-2015-analyzing examine the binary sentiment polarity of Chinese poems with a weighted personalized PageRank algorithm. barros2013automatic followed a tagging approach with a thesaurus to annotate words that are similar to the words `Joy', `Anger', `Fear' and `Sadness' (moreover translating these from English to Spanish). With these word lists, they distinguish the categories `Love', `Songs to Lisi', `Satire' and `Philosophical-Moral-Religious' in Quevedo's poetry. Similarly, alsharif2013emotion classify unique Arabic `emotional text forms' based on word unigrams.
Mohanty2018 create a corpus of 788 poems in the Indian Odia language, annotate it on text (poem) level with binary negative and positive sentiment, and are able to distinguish these with moderate success. Sreeja2019 construct a corpus of 736 Indian language poems and annotate the texts on Ekman's six categories + Love + Courage. They achieve a Fleiss Kappa of .48.
In contrast to our work, these studies focus on basic emotions and binary sentiment polarity only, rather than addressing aesthetic emotions. Moreover, they annotate on the level of complete poems (instead of fine-grained verse and stanza-level).
Related Work ::: Emotion Annotation
Emotion corpora have been created for different tasks and with different annotation strategies, with different units of analysis and different foci of emotion perspective (reader, writer, text). Examples include the ISEAR dataset BIBREF34 (document-level); emotion annotation in children stories BIBREF10 and news headlines BIBREF35 (sentence-level); and fine-grained emotion annotation in literature by Kim2018 (phrase- and word-level). We refer the interested reader to an overview paper on existing corpora BIBREF36.
We are only aware of a limited number of publications which look in more depth into the emotion perspective. buechel-hahn-2017-emobank report on an annotation study that focuses both on writer's and reader's emotions associated with English sentences. The results show that the reader perspective yields better inter-annotator agreement. Yang2009 also study the difference between writer and reader emotions, but not with a modeling perspective. The authors find that positive reader emotions tend to be linked to positive writer emotions in online blogs.
Related Work ::: Emotion Classification
The task of emotion classification has been tackled before using rule-based and machine learning approaches. Rule-based emotion classification typically relies on lexical resources of emotionally charged words BIBREF9, BIBREF37, BIBREF8 and offers a straightforward and transparent way to detect emotions in text.
In contrast to rule-based approaches, current models for emotion classification are often based on neural networks and commonly use word embeddings as features. Schuff2017 applied models from the classes of CNN, BiLSTM, and LSTM and compare them to linear classifiers (SVM and MaxEnt), where the BiLSTM shows best results with the most balanced precision and recall. AbdulMageed2017 claim the highest F$_1$ with gated recurrent unit networks BIBREF38 for Plutchik's emotion model. More recently, shared tasks on emotion analysis BIBREF39, BIBREF40 triggered a set of more advanced deep learning approaches, including BERT BIBREF41 and other transfer learning methods BIBREF42.
Data Collection
For our annotation and modeling studies, we build on top of two poetry corpora (in English and German), which we refer to as PO-EMO. This collection represents important contributions to the literary canon over the last 400 years. We make this resource available in TEI P5 XML and an easy-to-use tab separated format. Table TABREF9 shows a size overview of these data sets. Figure FIGREF8 shows the distribution of our data over time via density plots. Note that both corpora show a relative underrepresentation before the onset of the romantic period (around 1750).
Data Collection ::: German
The German corpus contains poems available from the website lyrik.antikoerperchen.de (ANTI-K), which provides a platform for students to upload essays about poems. The data is available in the Hypertext Markup Language, with clean line and stanza segmentation. ANTI-K also has extensive metadata, including author names, years of publication, numbers of sentences, poetic genres, and literary periods, that enable us to gauge the distribution of poems according to periods. The 158 poems we consider (731 stanzas) are dispersed over 51 authors and the New High German timeline (1575–1936 A.D.). This data has been annotated, besides emotions, for meter, rhythm, and rhyme in other studies BIBREF22, BIBREF43.
Data Collection ::: English
The English corpus contains 64 poems of popular English writers. It was partly collected from Project Gutenberg with the GutenTag tool, and, in addition, includes a number of hand selected poems from the modern period and represents a cross section of popular English poets. We took care to include a number of female authors, who would have been underrepresented in a uniform sample. Time stamps in the corpus are organized by the birth year of the author, as assigned in Project Gutenberg.
Expert Annotation
In the following, we explain how we compiled and annotated three data subsets, namely, (1) 48 German poems with gold annotation: these were originally annotated by three annotators, the labels were aggregated with majority voting and discussions among the annotators, and the result was curated to include only one gold annotation; (2) the remaining 110 German poems, which are used to compute the agreement in Table TABREF20; and (3) 64 English poems. The latter two subsets contain the raw annotations from two annotators.
We report the genesis of our annotation guidelines, including the emotion classes. With the intention to provide a language resource for the computational analysis of emotion in poetry, we aimed at maximizing the consistency of our annotation while doing justice to the diversity of poetry. We iteratively improved the guidelines and the annotation workflow by annotating in batches, cleaning the class set, and compiling a gold standard. The final overall cost of producing this expert-annotated dataset amounts to approximately 3,500.
Expert Annotation ::: Workflow
The annotation process was initially conducted by three female university students majoring in linguistics and/or literary studies, which we refer to as our “expert annotators”. We used the INCePTION platform for annotation BIBREF44. Starting with the German poems, we annotated in batches of about 16 (and later in some cases 32) poems. After each batch, we computed agreement statistics including heatmaps, and provided this feedback to the annotators. For the first three batches, the three annotators produced a gold standard using a majority vote for each line. Where this was inconclusive, they developed an adjudicated annotation based on discussion. Where necessary, we encouraged the annotators to aim for more consistency, as most of the frequent switching of emotions within a stanza could not be reconstructed or justified.
In poems, emotions are regularly mixed (already on line level) and are more interpretable within the whole poem. We therefore annotate lines hierarchically within the larger context of stanzas and the whole poem. Hence, we instruct the annotators to read a complete stanza or full poem, and then annotate each line in the context of its stanza. To reflect the emotional complexity of poetry, we allow a maximum of two labels per line. To avoid heavy label fluctuation, we encourage annotators to reflect on their feelings rather than producing `empty' annotations, and advise them to use fewer labels and to annotate more consistently. This additional constraint is necessary to avoid “wild”, non-reconstructable or non-justified annotations.
All subsequent batches (all except the first three) were only annotated by two out of the three initial annotators, coincidentally those two who had the lowest initial agreement with each other. We asked these two experts to use the generated gold standard (48 poems; majority votes of 3 annotators plus manual curation) as a reference (“if in doubt, annotate according to the gold standard”). This eliminated some systematic differences between them and markedly improved the agreement levels, roughly from 0.3–0.5 Cohen's $\kappa $ in the first three batches to around 0.6–0.8 $\kappa $ for all subsequent batches. This annotation procedure relaxes the reader perspective, as we encourage annotators (if in doubt) to annotate how they think the other annotators would annotate. However, we found that this formulation improves the usability of the data and leads to a more consistent annotation.
Expert Annotation ::: Emotion Labels
We opt for measuring the reader perspective rather than the text surface or author's intent. To define our labels more precisely and to support their conceptualization, we use particular `items', as they are used in psychological self-evaluations. These items consist of adjectives, verbs or short phrases. We build on top of schindler2017measuring, who proposed 43 items that were then grouped by a factor analysis based on self-evaluations of participants. The resulting factors are shown in Table TABREF17. We attempt to cover all identified factors and supplement them with basic emotions BIBREF46, BIBREF47, where possible.
We started with a larger set of labels and then deleted or substituted (toned down) labels during the initial annotation process to avoid infrequent classes and inconsistencies. Further, we conflated labels if they showed considerable confusion with each other. These iterative improvements particularly affected Confusion, Boredom and Other, which were very infrequently annotated and had little agreement among annotators ($\kappa <.2$). For German, we also removed Nostalgia ($\kappa =.218$) after gold standard creation but, after consideration, added it back for English, where agreement was then reached. Nostalgia is still available in the gold standard (then with a second label Beauty/Joy or Sadness to keep consistency). However, Confusion, Boredom and Other are not available in any sub-corpus.
Our final set consists of nine classes, i.e., (in order of frequency) Beauty/Joy, Sadness, Uneasiness, Vitality, Suspense, Awe/Sublime, Humor, Annoyance, and Nostalgia. In the following, we describe the labels and give further details on the aggregation process.
Annoyance (annoys me/angers me/felt frustrated): Annoyance implies feeling annoyed, frustrated or even angry while reading the line/stanza. We include the class Anger here, as this was found to be too strong in intensity.
Awe/Sublime (found it overwhelming/sense of greatness): Awe/Sublime implies being overwhelmed by the line/stanza, i.e., if one gets the impression of facing something sublime or if the line/stanza inspires one with awe (or that the expression itself is sublime). Such emotions are often associated with subjects like god, death, life, truth, etc. The term Sublime originated with kant2000critique as one of the first aesthetic emotion terms. Awe is a more common English term.
Beauty/Joy (found it beautiful/pleasing/makes me happy/joyful): kant2000critique already spoke of a “feeling of beauty”, and it should be noted that it is not a `merely pleasing emotion'. Therefore, in our pilot annotations, Beauty and Joy were separate labels. However, schindler2017measuring found that items for Beauty and Joy load into the same factors. Furthermore, our pilot annotations revealed that, while Beauty is the more dominant and frequent feeling, both labels regularly accompany each other and often get confused across annotators. Therefore, we add Joy to form an inclusive label Beauty/Joy that increases annotation consistency.
Humor (found it funny/amusing): Implies feeling amused by the line/stanza or if it makes one laugh.
Nostalgia (makes me nostalgic): Nostalgia is defined as a sentimental longing for things, persons or situations in the past. It often carries both positive and negative feelings. However, since this label is quite infrequent, and not available in all subsets of the data, we annotated it with an additional Beauty/Joy or Sadness label to ensure annotation consistency.
Sadness (makes me sad/touches me): If the line/stanza makes one feel sad. It also includes a more general `being touched / moved'.
Suspense (found it gripping/sparked my interest): Choose Suspense if the line/stanza keeps one in suspense (if the line/stanza excites one or triggers one's curiosity). We further removed Anticipation from Suspense/Anticipation, as Anticipation appeared to us as being a more cognitive prediction whereas Suspense is a far more straightforward emotion item.
Uneasiness (found it ugly/unsettling/disturbing / frightening/distasteful): This label covers situations when one feels discomfort about the line/stanza (if the line/stanza feels distasteful/ugly, unsettling/disturbing or frightens one). The labels Ugliness and Disgust were conflated into Uneasiness, as both are seldom felt in poetry (being inadequate/too strong/high in arousal), and typically lead to Uneasiness.
Vitality (found it invigorating/spurs me on/inspires me): This label is meant for a line/stanza that has an inciting, encouraging effect (if the line/stanza conveys a feeling of movement, energy and vitality which animates to action). Similar terms are Activation and Stimulation.
Expert Annotation ::: Agreement
Table TABREF20 shows the Cohen's $\kappa $ agreement scores between our two expert annotators for each emotion category $e$, computed as follows. We assign each instance (a line in a poem) a binary label indicating whether or not the annotator has annotated the emotion category $e$ in question. From this, we obtain vectors $v_i^e$, for annotators $i=0,1$, where each entry of $v_i^e$ holds the binary value for the corresponding line. We then apply the $\kappa $ statistic to the two binary vectors $v_i^e$. In addition to the averaged $\kappa $, Table TABREF21 reports micro-F1 values between the multi-label annotations of both expert annotators, as well as the micro-F1 scores of a random baseline and of the majority emotion baseline (which labels each line as Beauty/Joy).
We find that Cohen's $\kappa $ agreement ranges from .84 for Uneasiness in the English data, .81 for Humor and Nostalgia, down to German Suspense (.65), Awe/Sublime (.61) and Vitality for both languages (.50 English, .63 German). Both annotators have a similar emotion frequency profile, and the ranking is almost identical, especially for German. However, for English, Annotator 2 annotates more Vitality than Uneasiness. Figure FIGREF18 shows the confusion matrices of labels between annotators as heatmaps. Notably, Beauty/Joy and Sadness are confused across annotators more often than other labels. This is topical for poetry, and therefore not surprising: one might argue that the beauty of beings and situations is only beautiful because it is not enduring, and therefore cannot be divorced from the sadness of its vanishing BIBREF48. We also find considerable confusion of Sadness with Awe/Sublime and Vitality, while the latter is also regularly confused with Beauty/Joy.
Furthermore, as shown in Figure FIGREF23, we find that no single poem aggregates to more than six emotion labels, while no stanza aggregates to more than four emotion labels. However, most lines and stanzas receive only one or two labels. German poems seem more emotionally diverse, with more poems carrying three labels than two, while the majority of English poems have only two labels. This is, however, attributable to the generally shorter English texts.
Crowdsourcing Annotation
After concluding the expert annotation, we performed a focused crowdsourcing experiment, based on the final label set and items as they are listed in Table TABREF27 and Section SECREF19. With this experiment, we aim to understand whether it is possible to collect reliable judgements for aesthetic perception of poetry from a crowdsourcing platform. A second goal is to see whether we can replicate the expensive expert annotations with less costly crowd annotations.
We opted for a maximally simple annotation environment, where we asked participants to annotate English 4-line stanzas with self-perceived reader emotions. We chose English due to the higher availability of English language annotators on crowdsourcing platforms. Each annotator rates each stanza independently of the surrounding context.
Crowdsourcing Annotation ::: Data and Setup
For consistency and to simplify the task for the annotators, we opt for a trade-off between completeness and granularity of the annotation. Specifically, we subselect stanzas composed of four verses from the corpus of 64 hand selected English poems. The resulting selection of 59 stanzas is uploaded to Figure Eight for annotation.
The annotators are asked to answer the following questions for each instance.
Question 1 (single-choice): Read the following stanza and decide for yourself which emotions it evokes.
Question 2 (multiple-choice): Which additional emotions does the stanza evoke?
The answers to both questions correspond to the emotion labels we defined to use in our annotation, as described in Section SECREF19. We add an additional answer choice “None” to Question 2 to allow annotators to say that a stanza does not evoke any additional emotions.
Each instance is annotated by ten people. We restrict the task geographically to the United Kingdom and Ireland and set the internal parameters on Figure Eight to only include the highest quality annotators to join the task. We pay 0.09 per instance. The final cost of the crowdsourcing experiment is 74.
Crowdsourcing Annotation ::: Results
In the following, we determine the best aggregation strategy for the 10 annotators with bootstrap resampling. For instance, one could assign the label of a specific emotion to an instance if just one annotator picks it, or one could assign the label only if all annotators agree on this emotion. To evaluate this, we repeatedly pick two sets of 5 annotators each out of the 10 annotators for each of the 59 stanzas, 1000 times overall (i.e., 1000$\times $59 times, bootstrap resampling). For each of these repetitions, we compare the agreement of these two groups of 5 annotators. Each group is assigned an adjudicated emotion, which is accepted if at least one annotator picks it, at least two annotators pick it, and so on, up to requiring that all five pick it.
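A minimal sketch of this bootstrap procedure is given below, assuming the ten workers' label sets are stored per stanza; the function and variable names are ours, and the handling of degenerate cases is a simplification for illustration.

```python
import random
from sklearn.metrics import cohen_kappa_score

def adjudicate(label_sets, emotion, threshold):
    """1 if at least `threshold` annotators in the group picked `emotion`."""
    return int(sum(emotion in labels for labels in label_sets) >= threshold)

def bootstrap_agreement(stanza_annotations, emotion, threshold,
                        repetitions=1000, seed=0):
    """stanza_annotations: dict stanza_id -> list of 10 label sets (one per worker)."""
    rng = random.Random(seed)
    kappas = []
    for _ in range(repetitions):
        group_a, group_b = [], []
        for label_sets in stanza_annotations.values():
            shuffled = rng.sample(label_sets, len(label_sets))  # random 5/5 split
            group_a.append(adjudicate(shuffled[:5], emotion, threshold))
            group_b.append(adjudicate(shuffled[5:], emotion, threshold))
        kappas.append(cohen_kappa_score(group_a, group_b))
    return sum(kappas) / len(kappas)
```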
We show the results in Table TABREF27. The $\kappa $ scores show the average agreement between the two groups of five annotators, when the adjudicated class is picked based on the particular threshold of annotators with the same label choice. We see that some emotions tend to have higher agreement scores than others, namely Annoyance (.66), Sadness (up to .52), and Awe/Sublime, Beauty/Joy, Humor (all .46). The maximum agreement is reached mostly with a threshold of 2 (4 times) or 3 (3 times).
We further show in the same table the average numbers of labels for each strategy. Obviously, a lower threshold leads to higher numbers (corresponding to a disjunction of annotations for each emotion). The drop in label counts is comparatively drastic, with on average 18 labels per class. Overall, the best average $\kappa $ agreement (.32) is less than half of what we saw for the expert annotators (roughly .70). Crowds especially disagree on the more intricate emotion labels (Uneasiness, Vitality, Nostalgia, Suspense).
We visualize how often two emotions are used to label an instance in a confusion table in Figure FIGREF18. Sadness is used most often to annotate a stanza, and it is often confused with Suspense, Uneasiness, and Nostalgia. Further, Beauty/Joy partially overlaps with Awe/Sublime, Nostalgia, and Sadness.
On average, each crowd annotator uses two emotion labels per stanza (56% of cases); in only 36% of the cases do the annotators use one label, and in 6% and 1% of the cases three and four labels, respectively. This contrasts with the expert annotators, who use one label in about 70% of the cases and two labels in 30% of the cases for the same 59 four-liners. Concerning the frequency distribution of emotion labels, both experts and crowds name Sadness and Beauty/Joy as the most frequent emotions (for the `best' threshold of 3) and Nostalgia as one of the least frequent emotions. The Spearman rank correlation between experts and crowds is about 0.55 with respect to the label frequency distribution, indicating that crowds could replace experts to a moderate degree when it comes to extracting, e.g., emotion distributions for an author or time period. Now, we further compare crowds and experts in terms of whether crowds could replicate expert annotations also on a finer stanza level (rather than only on a distributional level).
Crowdsourcing Annotation ::: Comparing Experts with Crowds
To gauge the quality of the crowd annotations in comparison with our experts, we calculate agreement on the emotions between experts and an increasing group size from the crowd. For each stanza instance $s$, we pick $N$ crowd workers, where $N\in \lbrace 4,6,8,10\rbrace $, then pick their majority emotion for $s$, and additionally pick their second ranked majority emotion if at least $\frac{N}{2}-1$ workers have chosen it. For the experts, we aggregate their emotion labels on stanza level, then perform the same strategy for selection of emotion labels. Thus, for $s$, both crowds and experts have 1 or 2 emotions. For each emotion, we then compute Cohen's $\kappa $ as before. Note that, compared to our previous experiments in Section SECREF26 with a threshold, each stanza now receives an emotion annotation (exactly one or two emotion labels), both by the experts and the crowd-workers.
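The per-stanza selection of one or two emotions from $N$ workers can be sketched as follows; the data layout is assumed for illustration, and ties between equally frequent emotions are broken arbitrarily here.

```python
from collections import Counter

def crowd_emotions(worker_labels, n_workers):
    """worker_labels: list of label sets from the N crowd workers for one stanza.
    Returns the majority emotion, plus the second-ranked one if at least
    N/2 - 1 workers chose it."""
    ranked = Counter(l for labels in worker_labels for l in labels).most_common()
    if not ranked:
        return []
    selected = [ranked[0][0]]                                  # majority emotion
    if len(ranked) > 1 and ranked[1][1] >= n_workers / 2 - 1:
        selected.append(ranked[1][0])                          # second-ranked emotion
    return selected
```

Cohen's $\kappa $ per emotion is then computed between these crowd selections and the analogously aggregated expert labels.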
In Figure FIGREF30, we plot agreement between experts and crowds on stanza level as we vary the number $N$ of crowd workers involved. On average, there is roughly a steady linear increase in agreement as $N$ grows, which may indicate that $N=20$ or $N=30$ would still lead to better agreement. Concerning individual emotions, Nostalgia is the emotion with the least agreement, as opposed to Sadness (in our sample of 59 four-liners): the agreement for this emotion grows from $.47$ $\kappa $ with $N=4$ to $.65$ $\kappa $ with $N=10$. Sadness is also the most frequent emotion, both according to experts and crowds. Other emotions for which a reasonable agreement is achieved are Annoyance, Awe/Sublime, Beauty/Joy, Humor ($\kappa $ > 0.2). Emotions with little agreement are Vitality, Uneasiness, Suspense, Nostalgia ($\kappa $ < 0.2).
By and large, we note from Figure FIGREF18 that expert annotation is more restrictive, with experts agreeing more often on particular emotion labels (seen in the darker diagonal). The results of the crowdsourcing experiment, on the other hand, are a mixed bag as evidenced by a much sparser distribution of emotion labels. However, we note that these differences can be caused by 1) the disparate training procedure for the experts and crowds, and 2) the lack of opportunities for close supervision and on-going training of the crowds, as opposed to the in-house expert annotators.
In general, however, we find that substituting experts with crowds is possible to a certain degree. Even though the crowds' labels look inconsistent at first view, there appears to be a good signal in their aggregated annotations, helping to approximate expert annotations to a certain degree. The average $\kappa $ agreement (with the experts) we get from $N=10$ crowd workers (0.24) is still considerably below the agreement among the experts (0.70).
Modeling
To estimate the difficulty of automatic classification of our data set, we perform multi-label document classification (of stanzas) with BERT BIBREF41. For this experiment we aggregate all labels for a stanza and sort them by frequency, both for the gold standard and the raw expert annotations. As can be seen in Figure FIGREF23, a stanza bears a minimum of one and a maximum of four emotions. Unfortunately, the label Nostalgia is only available 16 times in the German data (the gold standard) as a second label (as discussed in Section SECREF19). None of our models was able to learn this label for German. Therefore we omit it, leaving us with eight proper labels.
We use the code and the pre-trained BERT models of FARM, provided by deepset.ai. We test the multilingual-uncased model (Multiling), the german-base-cased model (Base), the german-dbmdz-uncased model (Dbmdz), and we tune the Base model on 80k stanzas of the German Poetry Corpus DLK BIBREF30 for 2 epochs, both on token (masked words) and sequence (next line) prediction (Base$_{\textsc {Tuned}}$).
We split the randomized German dataset so that each label is at least 10 times in the validation set (63 instances, 113 labels), and at least 10 times in the test set (56 instances, 108 labels) and leave the rest for training (617 instances, 946 labels). We train BERT for 10 epochs (with a batch size of 8), optimize with entropy loss, and report F1-micro on the test set. See Table TABREF36 for the results.
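As a rough illustration of this setup, the sketch below fine-tunes a German BERT model for multi-label stanza classification with the Hugging Face transformers library rather than FARM; the model identifier, the BCE-with-logits objective implied by the multi-label problem type, and the data handling are assumptions on our part, while the epoch count and batch size follow the text.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Beauty/Joy", "Sadness", "Uneasiness", "Vitality", "Suspense",
          "Awe/Sublime", "Humor", "Annoyance"]           # Nostalgia omitted for German

MODEL_NAME = "dbmdz/bert-base-german-uncased"            # stand-in identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",           # BCE-with-logits loss
)

def encode(stanzas, label_sets):
    enc = tokenizer(stanzas, truncation=True, padding=True, return_tensors="pt")
    targets = torch.tensor([[float(l in labels) for l in LABELS]
                            for labels in label_sets])
    return TensorDataset(enc["input_ids"], enc["attention_mask"], targets)

def train(stanzas, label_sets, epochs=10, batch_size=8, lr=2e-5):
    loader = DataLoader(encode(stanzas, label_sets), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, targets in loader:
            loss = model(input_ids=input_ids, attention_mask=attention_mask,
                         labels=targets).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```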
We find that the multilingual model cannot handle infrequent categories, i.e., Awe/Sublime, Suspense and Humor. However, increasing the dataset with English data improves the results, suggesting that the classification would largely benefit from more annotated data. The best model overall is DBMDZ (.520), showing a balanced response on both validation and test set. See Table TABREF37 for a breakdown of all emotions as predicted by this model. Precision is mostly higher than recall. The labels Awe/Sublime, Suspense and Humor are harder to predict than the other labels.
The BASE and BASE$_{\textsc {TUNED}}$ models perform slightly worse than DBMDZ. The effect of tuning the BASE model is questionable, probably because of the restricted vocabulary (30k). We found that tuning on poetry does not show obvious improvements. Lastly, we find that models trained on lines (instead of stanzas) do not achieve the same F1 (~.42 for the German models).
Concluding Remarks
In this paper, we presented a dataset of German and English poetry annotated with reader response to reading poetry. We argued that basic emotions as proposed by psychologists (such as Ekman and Plutchik) that are often used in emotion analysis from text are of little use for the annotation of poetry reception. We instead conceptualized aesthetic emotion labels and showed that a closely supervised annotation task results in substantial agreement—in terms of $\kappa $ score—on the final dataset.
The task of collecting reader-perceived emotion response to poetry in a crowdsourcing setting is not straightforward. In contrast to expert annotators, who were closely supervised and reflected upon the task, the annotators on crowdsourcing platforms are difficult to control and may lack the necessary background knowledge to perform the task at hand. However, using a larger number of crowd annotators may lead to finding an aggregation strategy with a better trade-off between quality and quantity of adjudicated labels. For future work, we thus propose to repeat the experiment with a larger number of crowdworkers and to develop an improved training strategy suited to the crowdsourcing environment.
The dataset presented in this paper can be of use for different application scenarios, including multi-label emotion classification, style-conditioned poetry generation, investigating the influence of rhythm/prosodic features on emotion, or analysis of authors, genres and diachronic variation (e.g., how emotions are represented differently in certain periods).
Further, though our modeling experiments are still rudimentary, we propose that this dataset can be used to investigate intra-poem relations, either through multi-task learning BIBREF49 or with the help of hierarchical sequence classification approaches.
Acknowledgements
A special thanks goes to Gesine Fuhrmann, who created the guidelines and tirelessly documented the annotation progress. Also thanks to Annika Palm and Debby Trzeciak who annotated and gave lively feedback. For help with the conceptualization of labels we thank Ines Schindler. This research has been partially conducted within the CRETA center (http://www.creta.uni-stuttgart.de/) which is funded by the German Ministry for Education and Research (BMBF) and partially funded by the German Research Council (DFG), projects SEAT (Structured Multi-Domain Emotion Analysis from Text, KL 2869/1-1). This work has also been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universität Darmstadt under grant No. GRK 1994/1.
Appendix
We illustrate three examples of our gold standard annotation: two German poems, one each by Friedrich Hölderlin and Georg Trakl, and an English poem by Walt Whitman. Hölderlin's text stands out because the mood changes starkly from the first stanza to the second, from Beauty/Joy to Sadness. Trakl's text is a bit more complex, with bits of Nostalgia and, most importantly, a mixture of Uneasiness with Awe/Sublime. Whitman's poem is an example of Vitality and its mixing with Sadness. We unified the English annotation for space constraints. For the full annotation please see https://github.com/tnhaider/poetry-emotion/
Appendix ::: Friedrich Hölderlin: Hälfte des Lebens (1804)
Appendix ::: Georg Trakl: In den Nachmittag geflüstert (1912)
Appendix ::: Walt Whitman: O Captain! My Captain! (1865) | feelings of suspense experienced in narratives not only respond to the trajectory of the plot's content, but are also directly predictive of aesthetic liking (or disliking). Emotions that exhibit this dual capacity have been defined as “aesthetic emotions” |
003f884d3893532f8c302431c9f70be6f64d9be8 | 003f884d3893532f8c302431c9f70be6f64d9be8_0 | Q: Do they report results only on English data?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use $c$ to denote one community within a set $\mathcal {C}$ of communities, and $t$ to denote one time period within the entire history $T_c$ of $c$. We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, $c_t$. Given a word $w$ used within a particular community $c$ at time $t$, we define two word-level measures:
Specificity. We quantify the specificity $S(w, c)$ of $w$ to $c$ by calculating the PMI of $w$ and $c$, relative to $\mathcal {C}$: $S(w, c) = \log \frac{f_c(w)}{f_{\mathcal {C}}(w)},$
where $f_c(w)$ is $w$'s frequency in $c$ and $f_{\mathcal {C}}(w)$ its frequency across the entire set of communities. $w$ is specific to $c$ if it occurs more frequently in $c$ than in the entire set $\mathcal {C}$, hence distinguishing this community from the rest. A word $w$ whose occurrence is decoupled from $c$, and thus has $S(w, c)$ close to 0, is said to be generic.
We compute values of $S(w, c)$ for each time period $t$ in $T_c$; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility $V(w, c_t)$ of $w$ to $c_t$ as the PMI of $w$ and $c_t$ relative to $c$, the entire history of $c$: $V(w, c_t) = \log \frac{f_{c_t}(w)}{f_c(w)}.$
A word $w$ is volatile at time $t$ in $c$ if it occurs more frequently at $t$ than in the entire history of $c$, behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has $V(w, c_t)$ close to 0, is said to be stable.
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance $u$ in $c$, $S(u, c)$, as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
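The word-level and utterance-level measures can be computed directly from per-collection word counts, as in the sketch below; the count dictionaries and function names are illustrative rather than the authors' implementation.

```python
import math

def relative_freq(counts):
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def specificity(word, counts_c, counts_all):
    """PMI of `word` and community c, relative to the full set of communities."""
    return math.log(relative_freq(counts_c)[word] / relative_freq(counts_all)[word])

def volatility(word, counts_ct, counts_c_history):
    """PMI of `word` and community-month c_t, relative to c's entire history."""
    return math.log(relative_freq(counts_ct)[word] /
                    relative_freq(counts_c_history)[word])

def utterance_specificity(tokens, counts_c, counts_all):
    """Average specificity of the words in one comment."""
    scores = [specificity(w, counts_c, counts_all) for w in tokens if w in counts_c]
    return sum(scores) / len(scores) if scores else 0.0
```

Community-level distinctiveness and dynamicity then follow by averaging these utterance-level scores over all comments in a community-month.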
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community $c$ as the average specificity of all utterances in $c$. We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community $c$ as the average volatility of all utterances in $c$. We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted $\overline{S}(c)$ and $\overline{V}(c)$.
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities, for a total of 4,872 community-months.
Estimating linguistic measures. We estimate word frequencies $f_{c_t}(w)$, and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each $c_t$, to score comments and communities.
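A compact sketch of these two estimation controls—counting each word at most once per user and keeping only the most frequent words—might look as follows; the data layout is an assumption for illustration.

```python
from collections import Counter, defaultdict

def word_counts(comments):
    """comments: iterable of (user, tokens) pairs from top-level comments in one c_t.
    Each word is counted at most once per user."""
    seen = defaultdict(set)
    counts = Counter()
    for user, tokens in comments:
        for w in tokens:
            if w not in seen[user]:
                seen[user].add(w)
                counts[w] += 1
    return counts

def frequent_words(counts, percentile=95):
    """Keep only the top 5th percentile of words by frequency."""
    ranked = sorted(counts.values())
    cutoff = ranked[int(len(ranked) * percentile / 100)]
    return {w for w, n in counts.items() if n >= cutoff}
```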
Typology output on Reddit. The distribution of $\overline{S}(c)$ and $\overline{V}(c)$ across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's $\rho $ = 0.70, $p <$ 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's $\rho $ = 0.33, $p <$ 0.001; Figure FIGREF11 .A, right).
Monthly retention is formally defined as the proportion of users who contribute in month $t$ and then return to contribute again in month $t+1$. Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
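Computing monthly retention for one community reduces to a set intersection per pair of consecutive months, as in the sketch below (the cluster-resampled confidence intervals are omitted); the input format is assumed for illustration.

```python
def monthly_retention(contributors_by_month):
    """contributors_by_month: dict month -> set of users who commented that month
    in one community. Returns dict month -> share of users returning next month."""
    months = sorted(contributors_by_month)
    retention = {}
    for this_month, next_month in zip(months, months[1:]):
        current = contributors_by_month[this_month]
        if current:
            returning = current & contributors_by_month[next_month]
            retention[this_month] = len(returning) / len(current)
    return retention
```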
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
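The cross-validation setup can be sketched with scikit-learn as follows; the feature construction and the exact way the error metrics were aggregated are not fully specified above, so treat this as an illustrative reconstruction rather than the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_community_out(X, y, communities):
    """X: numpy feature matrix (one row per community), y: average monthly retention,
    communities: community id per row. Pools the held-out predictions before scoring."""
    preds = np.empty_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=communities):
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return mean_squared_error(y, preds), r2_score(y, preds)
```

Here X would hold either the identity-based features (average distinctiveness and dynamicity), the activity-based baseline features (log-transformed size and mean contributions per user), or their concatenation.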
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's $\rho $ = 0.41, $p <$ 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's $\rho $ = 0.03, $p$ = 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point of time. Using these language models we can capture how linguistically close a particular utterance $d$ is to the community by measuring the cross-entropy of this utterance relative to the SLM: $H(d, \textrm {SLM}_{c_t}) = -\frac{1}{|d|}\sum _{b \in d} \log P_{\textrm {SLM}_{c_t}}(b)$,
where $P_{\textrm {SLM}_{c_t}}(b)$ is the probability assigned to bigram $b$ from comment $d$ in community-month $c_t$. We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month $c_t$ as the relative difference of the cross-entropy of comments by users active in $c_t$ with that of singleton comments by outsiders—i.e., users who only ever commented once in $c_t$, but who are still active in Reddit in general: $\textrm {gap}(c_t) = \frac{\mathbb {E}_{d \sim \mathcal {S}_{c_t}}[H(d)] - \mathbb {E}_{d \sim \mathcal {A}_{c_t}}[H(d)]}{\mathbb {E}_{d \sim \mathcal {S}_{c_t}}[H(d)] + \mathbb {E}_{d \sim \mathcal {A}_{c_t}}[H(d)]}.$
Here $\mathcal {S}_{c_t}$ denotes the distribution over singleton comments, $\mathcal {A}_{c_t}$ denotes the distribution over comments from users active in $c_t$, and $\mathbb {E}[H(\cdot )]$ the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
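A simplified sketch of the snapshot language model and the gap computation is shown below; the add-one smoothing, the per-bigram length normalization, and the symmetric normalization of the relative difference are our assumptions, chosen only to make the sketch self-contained.

```python
import math
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

class SnapshotLM:
    """Bigram snapshot language model with add-one smoothing (an assumption here)."""
    def __init__(self, spans):
        self.counts = Counter(b for span in spans for b in bigrams(span))
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts) + 1

    def prob(self, bigram):
        return (self.counts[bigram] + 1) / (self.total + self.vocab)

    def cross_entropy(self, tokens):
        grams = bigrams(tokens)
        if not grams:
            return 0.0
        return -sum(math.log(self.prob(b)) for b in grams) / len(grams)

def acculturation_gap(slm, active_comments, outsider_comments):
    """Relative difference between outsiders' and active users' cross-entropies."""
    h_active = sum(slm.cross_entropy(c) for c in active_comments) / len(active_comments)
    h_out = sum(slm.cross_entropy(c) for c in outsider_comments) / len(outsider_comments)
    return (h_out - h_active) / (h_out + h_active)
```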
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community $c$, we define the specificity gap $\Delta _S(c)$ as the relative difference between the average specificity of comments authored by active members and by outsiders, where these measures are macroaveraged over users. Large, positive values of $\Delta _S(c)$ then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap $\Delta _V(c)$ as the relative difference in volatilities of active member and outsider comments. Large, positive values of $\Delta _V(c)$ characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
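The two gaps can be computed with the same helper, applied once to specificity scores and once to volatility scores; the macroaveraging and the normalization of the relative difference below are illustrative assumptions.

```python
def macroaverage(scores_by_user):
    """Average each user's comment scores first, then average over users."""
    per_user = [sum(s) / len(s) for s in scores_by_user.values() if s]
    return sum(per_user) / len(per_user)

def affinity_gap(active_scores_by_user, outsider_scores_by_user):
    """Relative difference between active members' and outsiders' average scores.
    With specificity scores this yields the specificity gap; with volatility
    scores, the volatility gap."""
    a = macroaverage(active_scores_by_user)
    o = macroaverage(outsider_scores_by_user)
    return (a - o) / (abs(a) + abs(o))
```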
We find that in 94% of communities, $\Delta _S(c) > 0$, indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ($\Delta _S = $ 0.33) compared to funny, a large hub where users share humorous content ($\Delta _S = $ 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ($\Delta _V = $ 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ($\Delta _V > $ 0). However, communities like funny ($\Delta _V = $ -0.16), where active users contribute relatively stable comments compared to outsiders ($\Delta _V < $ 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's $\rho $ = 0.34, $p <$ 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's $\rho $ = 0.53, $p <$ 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean $\Delta _V = $ 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean $\Delta _V = $ -0.047, Mann-Whitney U test $p <$ 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | No |
003f884d3893532f8c302431c9f70be6f64d9be8 | 003f884d3893532f8c302431c9f70be6f64d9be8_1 | Q: Do they report results only on English data?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use $c$ to denote one community within a set $\mathcal{C}$ of communities, and $t$ to denote one time period within the entire history $T_c$ of $c$. We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, $c_t$. Given a word $w$ used within a particular community $c$ at time $t$, we define two word-level measures:
Specificity. We quantify the specificity $S(w, c)$ of $w$ to $c$ by calculating the PMI of $w$ and $c$, relative to $\mathcal{C}$: $S(w, c) = \log \frac{P(w \mid c)}{P(w \mid \mathcal{C})}$,

where $P(w \mid c)$ is $w$'s frequency in $c$. $w$ is specific to $c$ if it occurs more frequently in $c$ than in the entire set $\mathcal{C}$, hence distinguishing this community from the rest. A word $w$ whose occurrence is decoupled from $c$, and thus has $S(w, c)$ close to 0, is said to be generic.

We compute values of $S(w, c)$ for each time period $t$ in $T_c$, i.e., $S(w, c_t)$; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility $V(w, c_t)$ of $w$ to $c_t$ as the PMI of $w$ and $c_t$ relative to $T_c$, the entire history of $c$: $V(w, c_t) = \log \frac{P(w \mid c_t)}{P(w \mid T_c)}$.

A word $w$ is volatile at time $t$ in $c$ if it occurs more frequently at $c_t$ than in the entire history $T_c$, behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has $V(w, c_t)$ close to 0, is said to be stable.

Extending to utterances. Using our word-level primitives, we define the specificity of an utterance $d$ in $c_t$, $S(d, c_t)$, as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
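To make these primitives concrete, below is a minimal sketch (in Python, not the authors' code) of how word-level specificity and volatility could be computed from raw counts and then averaged into utterance-level scores; the function names, the count-dictionary inputs, and the unsmoothed PMI estimates are illustrative assumptions.

```python
import math
from collections import Counter
from typing import Dict, List

def word_measures(counts_ct: Counter, counts_all: Counter, counts_hist: Counter) -> Dict[str, Dict[str, float]]:
    """Specificity S(w, c_t) and volatility V(w, c_t) for every word seen in c_t.

    counts_ct   -- word counts in community c during time period t
    counts_all  -- word counts pooled over the whole set of communities C
    counts_hist -- word counts pooled over c's entire history T_c
    """
    n_ct, n_all, n_hist = sum(counts_ct.values()), sum(counts_all.values()), sum(counts_hist.values())
    measures = {}
    for w, f in counts_ct.items():
        p_ct = f / n_ct                        # P(w | c_t)
        p_all = counts_all[w] / n_all          # P(w | C); nonzero because c_t is part of C
        p_hist = counts_hist[w] / n_hist       # P(w | T_c); nonzero because c_t is part of T_c
        measures[w] = {
            "specificity": math.log(p_ct / p_all),   # PMI of w and c_t relative to C
            "volatility": math.log(p_ct / p_hist),   # PMI of w and c_t relative to T_c
        }
    return measures

def utterance_measures(tokens: List[str], measures: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Average the word-level measures over the words of one utterance (unscored words skipped)."""
    scored = [measures[w] for w in tokens if w in measures]
    if not scored:
        return {"specificity": 0.0, "volatility": 0.0}
    return {key: sum(m[key] for m in scored) / len(scored) for key in ("specificity", "volatility")}
```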
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community $c_t$ as the average specificity of all utterances in $c_t$. We refer to a community with a less distinctive identity as being generic.

Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community $c_t$ as the average volatility of all utterances in $c_t$. We refer to a community whose language is relatively consistent throughout time as being stable.

In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted $\overline{\mathrm{Dist}}(c)$ and $\overline{\mathrm{Dyn}}(c)$.
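Continuing the sketch above, the community-level axes could then be obtained by averaging the utterance-level scores within each community-month and then across the community's history; the dict-keyed-by-month data layout assumed here is hypothetical.

```python
from statistics import mean
from typing import Dict, List

def community_axes(utterance_scores: Dict[str, List[dict]]) -> Dict[str, float]:
    """utterance_scores maps a community-month key (e.g. '2013-05') to the list of
    per-utterance {'specificity': ..., 'volatility': ...} dicts for that month.
    Scores are averaged within each month, then over the community's months."""
    monthly = [
        (mean(u["specificity"] for u in us), mean(u["volatility"] for u in us))
        for us in utterance_scores.values() if us
    ]
    return {
        "distinctiveness": mean(d for d, _ in monthly),
        "dynamicity": mean(v for _, v in monthly),
    }
```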
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ($c$), for a total of 4,872 community-months ($c_t$).
Estimating linguistic measures. We estimate word frequencies $P(w \mid c_t)$, and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
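A minimal sketch of this controlled counting procedure might look as follows; the tuple-based input format and naive whitespace tokenization are simplifying assumptions, not the authors' implementation.

```python
from collections import Counter
from typing import Iterable, Tuple

def controlled_counts(comments: Iterable[Tuple[str, str, bool]]) -> Counter:
    """Word counts for one community-month under the controls described above.

    comments yields (user_id, text, is_top_level) tuples. Only top-level comments
    are used, and each unique word is counted at most once per user, so a handful
    of highly active users cannot dominate the frequency estimates."""
    counted = set()   # (user_id, word) pairs already credited
    counts = Counter()
    for user_id, text, is_top_level in comments:
        if not is_top_level:
            continue
        for word in set(text.lower().split()):  # naive whitespace tokenization
            if (user_id, word) not in counted:
                counted.add((user_id, word))
                counts[word] += 1
    return counts
```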
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each $c_t$, to score comments and communities.
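For illustration, the percentile-based filter could be implemented roughly as below; keeping words at or above the 95th frequency percentile corresponds to the top 5% of words, and the NumPy-based cutoff is an assumed convention rather than a detail from the paper.

```python
import numpy as np
from collections import Counter

def frequent_vocabulary(counts: Counter, percentile: float = 95.0) -> set:
    """Words whose frequency within this community-month is at or above the given
    percentile, i.e. the top 5% of words by frequency; the long tail of rare words,
    whose PMI estimates would be unreliable, is discarded."""
    threshold = np.percentile(np.array(list(counts.values())), percentile)
    return {w for w, f in counts.items() if f >= threshold}
```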
Typology output on Reddit. The distribution of $\overline{\mathrm{Dist}}(c)$ and $\overline{\mathrm{Dyn}}(c)$ across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular, user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short- and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's $\rho = 0.70$, $p < 0.001$, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's $\rho = 0.33$, $p < 0.001$; Figure FIGREF11 .A, right).
Monthly retention is formally defined as the proportion of users who contribute in month $t$ and then return to contribute again in month $t+1$. Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
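As a rough sketch, monthly retention and the cluster-resampled confidence intervals could be computed along the lines below; the data structures and the choice of 1,000 bootstrap resamples are assumptions for illustration.

```python
import numpy as np
from typing import Dict, List, Set

def monthly_retention(users_by_month: Dict[str, Set[str]]) -> List[float]:
    """For each consecutive pair of months (keys assumed to sort chronologically),
    the proportion of month-t contributors who also contribute in month t+1."""
    months = sorted(users_by_month)
    rates = []
    for prev, nxt in zip(months, months[1:]):
        contributors = users_by_month[prev]
        if contributors:
            rates.append(len(contributors & users_by_month[nxt]) / len(contributors))
    return rates

def cluster_bootstrap_ci(rates_by_subreddit: Dict[str, List[float]], n_boot: int = 1000, seed: int = 0):
    """95% bootstrapped confidence interval for mean monthly retention, resampling
    whole subreddits (clusters) rather than individual community-months."""
    rng = np.random.default_rng(seed)
    names = list(rates_by_subreddit)
    boot_means = []
    for _ in range(n_boot):
        sample = rng.choice(names, size=len(names), replace=True)
        pooled = [r for s in sample for r in rates_by_subreddit[s]]
        boot_means.append(np.mean(pooled))
    return np.percentile(boot_means, [2.5, 97.5])
```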
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
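A hedged sketch of the leave-one-community-out comparison follows, using scikit-learn's RandomForestRegressor with 100 trees; the feature matrices in the commented usage example (distinctiveness/dynamicity versus log-transformed size and activity) are placeholders for however the community-level features are actually assembled.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def loco_mse(features: np.ndarray, targets: np.ndarray) -> float:
    """Leave-one-community-out cross validation: hold out each community in turn,
    fit a 100-tree random forest (otherwise default hyperparameters) on the rest,
    and return the average mean squared error on the held-out communities."""
    errors = []
    for i in range(len(targets)):
        train = np.arange(len(targets)) != i
        model = RandomForestRegressor(n_estimators=100)
        model.fit(features[train], targets[train])
        pred = model.predict(features[~train])
        errors.append(mean_squared_error(targets[~train], pred))
    return float(np.mean(errors))

# Hypothetical usage, assuming per-community arrays have already been assembled:
# X_identity = np.column_stack([avg_distinctiveness, avg_dynamicity])
# X_activity = np.log(np.column_stack([n_contributing_users, mean_contribs_per_user]))
# X_combined = np.column_stack([X_identity, X_activity])
# for X in (X_identity, X_activity, X_combined):
#     print(loco_mse(X, avg_monthly_retention))
```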
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's $\rho = 0.41$, $p < 0.001$, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's $\rho = 0.03$, $p = 0.77$; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
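The tenure computation itself is straightforward; a minimal sketch, assuming a per-user record of first and last active months, is given below.

```python
from typing import Dict, Tuple

def average_tenure_months(first_last_by_user: Dict[str, Tuple[int, int]]) -> float:
    """Average number of months between each user's first and last comment in one
    community, for users in the May 2013 slice; with activity data ending in May
    2015 the observable tenure is implicitly capped at 24 months."""
    spans = [last - first for first, last in first_last_by_user.values()]
    return sum(spans) / len(spans) if spans else 0.0
```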
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point of time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM:

$H(d, c_t) = -\frac{1}{|d|} \sum_{b_i \in d} \log P_{\mathrm{SLM}_{c_t}}(b_i),$
where $P_{\mathrm{SLM}_{c_t}}(b_i)$ is the probability assigned to bigram $b_i$ from comment $d$ in community-month $c_t$. We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
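The following sketch shows one way to fit such a snapshot language model and score comments against it; the add-alpha smoothing of the bigram probabilities is an assumption made here so the sketch is runnable, not a detail taken from the paper.

```python
import math
from collections import Counter
from typing import Callable, List, Sequence

def build_slm(spans: Sequence[Sequence[str]], alpha: float = 0.01) -> Callable[[str, str], float]:
    """Fit a bigram snapshot language model over the sampled 10-word spans.
    Add-alpha smoothing gives unseen bigrams nonzero probability mass."""
    bigrams, contexts, vocab = Counter(), Counter(), set()
    for span in spans:
        for a, b in zip(span, span[1:]):
            bigrams[(a, b)] += 1
            contexts[a] += 1
            vocab.update((a, b))
    v = max(len(vocab), 1)
    def prob(a: str, b: str) -> float:
        return (bigrams[(a, b)] + alpha) / (contexts[a] + alpha * v)
    return prob

def cross_entropy(tokens: List[str], slm_prob: Callable[[str, str], float]) -> float:
    """Per-bigram cross-entropy H(d, c_t) of one comment relative to the snapshot LM."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return float("inf")
    return -sum(math.log(slm_prob(a, b)) for a, b in pairs) / len(pairs)
```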
We compute a basic measure of the acculturation gap for a community-month $c_t$ as the relative difference of the cross-entropy of comments by users active in $c_t$ with that of singleton comments by outsiders—i.e., users who only ever commented once in $c$, but who are still active in Reddit in general:

$\mathrm{gap}(c_t) = \frac{\mathbb{E}_{d \sim \mathcal{D}^{s}_{c_t}}\left[H(d, c_t)\right] - \mathbb{E}_{d \sim \mathcal{D}^{a}_{c_t}}\left[H(d, c_t)\right]}{\mathbb{E}_{d \sim \mathcal{D}^{s}_{c_t}}\left[H(d, c_t)\right] + \mathbb{E}_{d \sim \mathcal{D}^{a}_{c_t}}\left[H(d, c_t)\right]},$
where $\mathcal{D}^{s}_{c_t}$ denotes the distribution over singleton comments, $\mathcal{D}^{a}_{c_t}$ denotes the distribution over comments from users active in $c_t$, and $\mathbb{E}\left[H(d, c_t)\right]$ the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
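Given the per-SLM cross-entropies, the gap could then be aggregated as below, matching the relative-difference form written above; averaging over the 100 bootstrap-sampled SLMs is how this sketch interprets the procedure.

```python
import numpy as np
from typing import List, Sequence, Tuple

def acculturation_gap(active_entropies: Sequence[float], outsider_entropies: Sequence[float]) -> float:
    """Relative difference between the mean cross-entropy of singleton (outsider)
    comments and of active-user comments, in the form of the equation above."""
    e_out, e_act = np.mean(outsider_entropies), np.mean(active_entropies)
    return float((e_out - e_act) / (e_out + e_act))

def community_month_gap(slm_samples: List[Tuple[Sequence[float], Sequence[float]]]) -> float:
    """Average the gap over the bootstrap-sampled SLMs (100 per community-month),
    where each sample provides cross-entropies for 50 active-user and 50 outsider comments."""
    return float(np.mean([acculturation_gap(a, o) for a, o in slm_samples]))
```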
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community $c$, we define the specificity gap $\Delta S_c$ as the relative difference between the average specificity of comments authored by active members and by outsiders, where these measures are macroaveraged over users. Large, positive values of $\Delta S_c$ then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap $\Delta V_c$ as the relative difference in volatilities of active member and outsider comments. Large, positive values of $\Delta V_c$ characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
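A sketch of how these gaps could be computed from per-comment specificity or volatility scores is given below; the macroaveraging step follows the description above, while the exact normalization used for the 'relative difference' is an illustrative assumption.

```python
import numpy as np
from typing import Sequence

def macroaverage(scores_by_user: Sequence[Sequence[float]]) -> float:
    """Average comment-level scores within each user, then across users, so that
    prolific commenters are not overweighted."""
    return float(np.mean([np.mean(s) for s in scores_by_user]))

def content_affinity_gap(active_by_user: Sequence[Sequence[float]],
                         outsider_by_user: Sequence[Sequence[float]]) -> float:
    """Gap between active members and outsiders for one community, applied to either
    per-comment specificity (yielding the specificity gap) or volatility (the
    volatility gap). The normalization below is one illustrative reading of
    'relative difference'; the sign is positive when active members score higher."""
    a, o = macroaverage(active_by_user), macroaverage(outsider_by_user)
    return (a - o) / (abs(a) + abs(o))
```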
We find that in 94% of communities, $\Delta S_c > 0$, indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ($\Delta S_c = 0.33$) compared to funny, a large hub where users share humorous content ($\Delta S_c = 0.011$).
The nature of the volatility gap is comparatively more varied. In Homebrewing ($\Delta V_c = 0.16$), as in 68% of communities, active users tend to write more volatile comments than outsiders ($\Delta V_c > 0$). However, communities like funny ($\Delta V_c = -0.16$), where active users contribute relatively stable comments compared to outsiders ($\Delta V_c < 0$), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's $\rho = 0.34$, $p < 0.001$).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's $\rho = 0.53$, $p < 0.001$). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean $\Delta V_c = 0.098$), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean $\Delta V_c = -0.047$, Mann-Whitney U test $p < 0.001$). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | Unanswerable |
bb97537a0a7c8f12a3f65eba73cefa6abcd2f2b2 | bb97537a0a7c8f12a3f65eba73cefa6abcd2f2b2_0 | Q: How do the various social phenomena examined manifest in different types of communities?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:
Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 , INLINEFORM6
where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.
We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : INLINEFORM7
A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examing the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 .
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).
Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based off of ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.
Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al danescu-niculescu-mizilno2013 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point of time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0
where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
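The following sketch illustrates one way such a snapshot language model and the per-comment cross-entropy could be computed; the add-one smoothing and the `spans` input format are illustrative assumptions rather than the exact estimator used here.

```python
# Sketch: a bigram "snapshot language model" and cross-entropy of a comment against it.
import math
from collections import Counter

def build_slm(spans):
    """spans: list of token lists, e.g. 10-word spans sampled from active users' comments."""
    counts = Counter()
    for tokens in spans:
        counts.update(zip(tokens, tokens[1:]))
    total = sum(counts.values())
    vocab = len(counts) + 1

    def prob(bigram):
        # add-one smoothing (an assumption) so unseen bigrams get nonzero probability
        return (counts[bigram] + 1) / (total + vocab)
    return prob

def cross_entropy(comment_tokens, prob):
    """Negative mean log-probability of the comment's bigrams under the SLM."""
    bigrams = list(zip(comment_tokens, comment_tokens[1:]))
    return -sum(math.log(prob(b)) for b in bigrams) / max(len(bigrams), 1)
```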
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: $\mathrm{gap} = \frac{\mathbb{E}_{S}[H(s)] - \mathbb{E}_{A}[H(a)]}{\mathbb{E}_{S}[H(s)] + \mathbb{E}_{A}[H(a)]}$
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
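A simplified sketch of the gap computation is given below; `H_active` and `H_outsider` stand for the per-comment cross-entropies described above, the symmetric relative-difference normalization is one plausible reading of the definition, and the resampling here is over comments only rather than over the 100 bootstrapped SLMs.

```python
# Sketch: acculturation gap for one community-month from per-comment cross-entropies.
import numpy as np

def acculturation_gap(H_active, H_outsider):
    ea, es = np.mean(H_active), np.mean(H_outsider)
    return (es - ea) / (es + ea)  # positive when outsiders are less fluent in the community's language

def bootstrap_gap(H_active, H_outsider, n_boot=100, seed=0):
    """Bootstrap mean and spread of the gap by resampling comments with replacement."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_boot):
        a = rng.choice(H_active, size=len(H_active), replace=True)
        s = rng.choice(H_outsider, size=len(H_outsider), replace=True)
        gaps.append(acculturation_gap(a, s))
    return float(np.mean(gaps)), float(np.std(gaps))
```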
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members and by outsiders, where these measures are macro-averaged over users. Large, positive values of INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
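Both gaps can be sketched with the same helper, assuming hypothetical dictionaries that map each user to the mean specificity (or volatility) of their comments; the symmetric normalization shown is again only one plausible reading of "relative difference".

```python
# Sketch: specificity or volatility gap as a relative difference of macro-averaged user scores.
def relative_gap(active_scores, outsider_scores):
    """active_scores / outsider_scores: {user: mean comment score} for one community."""
    active_mean = sum(active_scores.values()) / len(active_scores)      # macro-average over users
    outsider_mean = sum(outsider_scores.values()) / len(outsider_scores)
    return (active_mean - outsider_mean) / (active_mean + outsider_mean)
```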
We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
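The statistical comparisons in this paragraph can be sketched as follows, with `distinctiveness` and `volatility_gap` standing in for hypothetical aligned numpy arrays holding one value per community.

```python
# Sketch: correlation and top-vs-bottom-third comparison of volatility gaps.
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

def gap_vs_distinctiveness(distinctiveness, volatility_gap):
    rho, p = spearmanr(distinctiveness, volatility_gap)
    k = len(distinctiveness) // 3
    order = np.argsort(distinctiveness)
    top_third = volatility_gap[order[-k:]]      # most distinctive communities
    bottom_third = volatility_gap[order[:k]]    # most generic communities
    u_stat, u_p = mannwhitneyu(top_third, bottom_third, alternative="two-sided")
    return rho, p, top_third.mean(), bottom_third.mean(), u_stat, u_p
```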
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies, each focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | Dynamic communities have substantially higher rates of monthly user retention than more stable communities. More distinctive communities exhibit moderately higher monthly retention rates than more generic communities. There is also a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community - a short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content.
|
eea089baedc0ce80731c8fdcb064b82f584f483a | eea089baedc0ce80731c8fdcb064b82f584f483a_0 | Q: What patterns do they observe about how user engagement varies with the characteristics of a community?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:
Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 : $\mathrm{spec}(w, c) = \log \frac{P(w \mid c)}{P(w \mid \mathcal{C})}$
where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.
We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : $\mathrm{vol}(w, c_t) = \log \frac{P(w \mid c_t)}{P(w \mid c)}$
A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.
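A minimal sketch of these two word-level measures, assuming dict-like `counts` objects over the relevant text slices (one community-month, the community's entire history, and all communities pooled); the function names are illustrative.

```python
# Sketch: word-level specificity and volatility as log-ratios of normalized frequencies.
import math

def freq(counts, word):
    """Normalized frequency of `word` in the text slice summarized by `counts`."""
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

def specificity(word, counts_community, counts_all_communities):
    # log-ratio of the word's frequency in this community vs. in all communities;
    # assumes the word occurs in both (the frequency filtering described later ensures this)
    return math.log(freq(counts_community, word) / freq(counts_all_communities, word))

def volatility(word, counts_community_month, counts_community_history):
    # log-ratio of the word's frequency in this month vs. in the community's entire history
    return math.log(freq(counts_community_month, word) / freq(counts_community_history, word))
```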
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 .
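The averaging steps can be sketched as follows, where `word_score` is a specificity or volatility scorer as above and the data structures are illustrative.

```python
# Sketch: lifting word-level scores to utterances and communities by simple averaging.
def utterance_score(tokens, word_score):
    """Average word-level score (specificity or volatility) over the words of one utterance."""
    scores = [word_score(w) for w in tokens]
    return sum(scores) / len(scores) if scores else 0.0

def community_measure(utterances, word_score):
    """Average utterance-level score over a community-month: distinctiveness with a
    specificity scorer, dynamicity with a volatility scorer."""
    per_utterance = [utterance_score(u, word_score) for u in utterances]
    return sum(per_utterance) / len(per_utterance) if per_utterance else 0.0
```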
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).
Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
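A sketch of the deduplicated counting step, assuming a hypothetical iterable of (user, tokens) pairs built from the top-level comments of one community-month:

```python
# Sketch: estimating word frequencies while counting each word at most once per user.
from collections import Counter

def user_deduplicated_counts(top_level_comments):
    """Count each word once per user, so a few prolific users cannot dominate the estimates."""
    seen = set()          # (user, word) pairs already counted
    counts = Counter()
    for user, tokens in top_level_comments:
        for word in tokens:
            if (user, word) not in seen:
                seen.add((user, word))
                counts[word] += 1
    return counts
```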
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g., as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.
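A sketch of this filtering step, with `counts` a hypothetical frequency table for one community-month:

```python
# Sketch: keep only the top 5% of words by frequency before scoring comments and communities.
import numpy as np

def frequent_vocabulary(counts, top_fraction=0.05):
    """Words in the top `top_fraction` of the frequency distribution for one community-month."""
    freqs = np.array(list(counts.values()))
    cutoff = np.percentile(freqs, 100 * (1 - top_fraction))  # e.g. the 95th percentile
    return {word for word, f in counts.items() if f >= cutoff}
```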
Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular, user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short- and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
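A sketch of the retention measure and the cluster-resampled bootstrap, with hypothetical data structures (`active_users` keyed by community and month index, and `values_by_community` holding each community's monthly retention values):

```python
# Sketch: monthly retention and a cluster-resampled (by community) bootstrap CI.
import numpy as np

def monthly_retention(active_users, community, month):
    """Share of users active in `month` who also contribute in `month + 1`."""
    current = active_users[(community, month)]
    following = active_users.get((community, month + 1), set())
    return len(current & following) / len(current) if current else float("nan")

def cluster_bootstrap_ci(values_by_community, n_boot=1000, seed=0):
    """95% CI for the mean, resampling whole communities (clusters) with replacement."""
    rng = np.random.default_rng(seed)
    names = list(values_by_community)
    means = []
    for _ in range(n_boot):
        picks = rng.choice(len(names), size=len(names), replace=True)
        pooled = [v for i in picks for v in values_by_community[names[i]]]
        means.append(np.mean(pooled))
    return np.percentile(means, [2.5, 97.5])
```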
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the tenure of the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can measure how linguistically close a particular utterance is to the community by computing the cross-entropy of this utterance relative to the SLM: $H(c) = -\frac{1}{|c|} \sum_{b \in c} \log P_{\mathrm{SLM}}(b)$
where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: $\mathrm{gap} = \frac{\mathbb{E}_{S}[H(s)] - \mathbb{E}_{A}[H(a)]}{\mathbb{E}_{S}[H(s)] + \mathbb{E}_{A}[H(a)]}$
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members and by outsiders, where these measures are macro-averaged over users. Large, positive values of INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies, each focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members, within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers |
edb2d24d6d10af13931b3a47a6543bd469752f0c | edb2d24d6d10af13931b3a47a6543bd469752f0c_0 | Q: How did they select the 300 Reddit communities for comparison?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use $c$ to denote one community within a set $\mathcal{C}$ of communities, and $t$ to denote one time period within the entire history $T$ of $c$. We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, $c^t$. Given a word $w$ used within a particular community $c$ at time $t$, we define two word-level measures:
Specificity. We quantify the specificity $S(w, c)$ of $w$ to $c$ by calculating the PMI of $w$ and $c$, relative to $\mathcal{C}$: $S(w, c) = \log \frac{f_{c}(w)}{f_{\mathcal{C}}(w)}$,
where $f_{c}(w)$ is $w$'s frequency in $c$. $w$ is specific to $c$ if it occurs more frequently in $c$ than in the entire set $\mathcal{C}$, hence distinguishing this community from the rest. A word $w$ whose occurrence is decoupled from $c$, and thus has $S(w, c)$ close to 0, is said to be generic.
We compute values of $S(w, c^t)$ for each time period $t$ in $T$; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility $V(w, c^t)$ of $w$ to $c^t$ as the PMI of $w$ and $c^t$ relative to $T$, the entire history of $c$: $V(w, c^t) = \log \frac{f_{c^t}(w)}{f_{c}(w)}$, where $f_{c}(w)$ is $w$'s frequency over the entire history of $c$.
A word $w$ is volatile at time $t$ in $c$ if it occurs more frequently at $t$ than in the entire history $T$, behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has $V(w, c^t)$ close to 0, is said to be stable.
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance $d$ in $c^t$, $S(d, c^t)$, as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
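To make the definitions above concrete, the following is a minimal sketch of how these PMI-based scores could be computed from raw token lists; the use of relative frequencies as the frequency estimate, the handling of unseen words, and all function names are our own choices rather than details taken from the text.

```python
import math
from collections import Counter

def relative_freqs(tokens):
    """Relative frequency of each word in a list of tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def pmi(word, f_target, f_background):
    """Log ratio of a word's frequency in a target vs. a background distribution.
    Returns None when the word is unseen in either distribution (PMI undefined)."""
    if word not in f_target or word not in f_background:
        return None
    return math.log(f_target[word] / f_background[word])

def utterance_score(utterance, f_target, f_background):
    """Average word-level PMI over one utterance.
    Specificity: target = one community-month, background = all communities.
    Volatility:  target = one community-month, background = that community's full history."""
    scores = [pmi(w, f_target, f_background) for w in utterance]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else 0.0
```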
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community $c^t$ as the average specificity of all utterances in $c^t$. We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community $c^t$ as the average volatility of all utterances in $c^t$. We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time.
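Continuing the sketch above, the community-level measures are then plain averages of the utterance-level scores; the function names are again illustrative, not the authors'.

```python
def community_month_measure(utterances, f_target, f_background):
    """Average utterance-level score over all utterances in one community-month:
    distinctiveness when the background is all communities, dynamicity when the
    background is the community's own history."""
    scores = [utterance_score(u, f_target, f_background) for u in utterances]
    return sum(scores) / len(scores)

def average_over_time(monthly_values):
    """Average distinctiveness or dynamicity over a community's monthly history."""
    return sum(monthly_values) / len(monthly_values)
```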
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities, for a total of 4,872 community-months.
Estimating linguistic measures. We estimate word frequencies $f_{c^t}(w)$, and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
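A rough sketch of this frequency-estimation step is shown below; the comment fields (`author`, `tokens`, `is_top_level`) are hypothetical names for data one would extract from a Reddit dump, not fields defined in the text.

```python
from collections import Counter, defaultdict

def monthly_word_counts(comments):
    """Count words for one community-month: only top-level comments are used,
    and each unique word is counted at most once per user."""
    words_by_user = defaultdict(set)
    for comment in comments:
        if not comment["is_top_level"]:
            continue  # skip lower-level replies
        words_by_user[comment["author"]].update(comment["tokens"])
    counts = Counter()
    for words in words_by_user.values():
        counts.update(words)  # each word contributes once per user
    return counts
```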
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each $c^t$, to score comments and communities.
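The rare-word filter could be implemented as a simple percentile cutoff over the per-community-month counts; reading "top 5th percentile" as counts at or above the 95th percentile is our interpretation of the text.

```python
import numpy as np

def top_frequency_vocab(counts, pct=95.0):
    """Keep only the most frequent words (top 5% by frequency) in one community-month."""
    values = np.array(list(counts.values()))
    cutoff = np.percentile(values, pct)
    return {w for w, n in counts.items() if n >= cutoff}
```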
Typology output on Reddit. The distribution of average distinctiveness and dynamicity across Reddit communities is shown in Figure FIGREF3.B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9.
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's $\rho$ = 0.70, $p$ < 0.001, computed with community points averaged over months; Figure FIGREF11.A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's $\rho$ = 0.33, $p$ < 0.001; Figure FIGREF11.A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
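A minimal sketch of this retention computation, assuming per-month sets of contributing users are already available (the cluster-resampled bootstrap for the confidence intervals is omitted):

```python
def monthly_retention(users_by_month):
    """users_by_month: dict mapping a sortable month key (e.g. '2013-05') to the set
    of users who commented in that month. Returns, for each month, the share of its
    contributors who also contribute in the following month."""
    months = sorted(users_by_month)
    retention = {}
    for prev, nxt in zip(months, months[1:]):
        contributors = users_by_month[prev]
        if contributors:
            retention[prev] = len(contributors & users_by_month[nxt]) / len(contributors)
    return retention
```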
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
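The prediction experiment could be reproduced along the following lines. Only the choices stated in the text (random forests with 100 trees, otherwise default hyperparameters, leave-one-community-out evaluation) are taken from the paper; the feature construction, data loading, and function name are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneOut

def leave_one_community_out_mse(X, y):
    """X: one row of features per community (e.g. average distinctiveness and dynamicity,
    or log community size and log mean activity); y: average monthly retention."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = RandomForestRegressor(n_estimators=100)
        model.fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(errors))
```

With one row per community, leave-one-community-out reduces to ordinary leave-one-out cross-validation, which is why `LeaveOneOut` suffices here.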
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's $\rho$ = 0.41, $p$ < 0.001, computed over all community points; Figure FIGREF11.B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's $\rho$ = 0.03, $p$ = 0.77; Figure FIGREF11.B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
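Average tenure can be sketched as follows, assuming month indices for each user's first and last comment in the community have been precomputed; whether the span is inclusive of the first month is our choice, not specified in the text.

```python
def average_tenure_months(first_last_by_user):
    """first_last_by_user: dict mapping user -> (month index of first comment,
    month index of last comment) within one community."""
    spans = [last - first for first, last in first_last_by_user.values()]
    return sum(spans) / len(spans) if spans else 0.0
```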
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. (2013) and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: $H(d, c^t) = -\frac{1}{|d|}\sum_{b_i \in d} \log P_{c^t}(b_i)$,
where $P_{c^t}(b_i)$ is the probability assigned to bigram $b_i$ from comment $d$ in community-month $c^t$. We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
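The following sketch illustrates the general shape of this computation. The add-alpha smoothing of the bigram snapshot language model and the exact normalization used for the "relative difference" are our assumptions, not details given in the text.

```python
import math
from collections import Counter

def bigrams(tokens):
    return list(zip(tokens, tokens[1:]))

def train_slm(sampled_spans, alpha=0.01):
    """Bigram snapshot language model over the sampled 10-word spans,
    with simple add-alpha smoothing (our choice)."""
    counts = Counter(b for span in sampled_spans for b in bigrams(span))
    total = sum(counts.values())
    vocab = len(counts) + 1  # crude smoothing denominator (assumption)
    return lambda b: (counts[b] + alpha) / (total + alpha * vocab)

def cross_entropy(comment_tokens, slm):
    """Average negative log-probability of the comment's bigrams under the SLM."""
    bs = bigrams(comment_tokens)
    if not bs:
        return 0.0
    return -sum(math.log(slm(b)) for b in bs) / len(bs)

def acculturation_gap(outsider_comments, active_comments, slm):
    """Relative difference between outsider and active-user cross-entropies
    (normalizing by the sum is our assumption)."""
    h_out = sum(cross_entropy(c, slm) for c in outsider_comments) / len(outsider_comments)
    h_act = sum(cross_entropy(c, slm) for c in active_comments) / len(active_comments)
    return (h_out - h_act) / (h_out + h_act)
```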
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community $c$, we define the specificity gap as the relative difference between the average specificity of comments authored by active members and by outsiders, where these measures are macroaveraged over users. Large, positive specificity gaps then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap as the relative difference in volatilities of active member and outsider comments. Large, positive values of the volatility gap characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
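Both gaps can be sketched with the same helper, plugging in utterance-level specificity scores for the specificity gap and volatility scores for the volatility gap; the normalization of the relative difference is again our assumption.

```python
def content_affinity_gap(active_scores_by_user, outsider_scores_by_user):
    """Each argument maps a user to the list of that user's comment-level scores
    (specificity or volatility). Scores are macroaveraged: first per user, then
    across users, so highly active users do not dominate."""
    def macroaverage(scores_by_user):
        user_means = [sum(s) / len(s) for s in scores_by_user.values() if s]
        return sum(user_means) / len(user_means)

    active = macroaverage(active_scores_by_user)
    outsider = macroaverage(outsider_scores_by_user)
    return (active - outsider) / (active + outsider)  # assumed normalization
```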
We find that in 94% of communities the specificity gap is positive, indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced (a specificity gap of 0.33) compared to funny, a large hub where users share humorous content (0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing (volatility gap of 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders (a positive gap). However, communities like funny (-0.16), where active users contribute relatively stable comments compared to outsiders (a negative gap), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's $\rho$ = 0.34, $p$ < 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's $\rho$ = 0.53, $p$ < 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean volatility gap of 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean volatility gap of -0.047, Mann-Whitney U test $p$ < 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
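The reported statistics could be recomputed from per-community values along these lines; the array names are ours, and the inputs are assumed to be aligned NumPy arrays with one entry per community.

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

def gap_statistics(distinctiveness, spec_gap, vol_gap):
    rho_spec, p_spec = spearmanr(distinctiveness, spec_gap)
    rho_vol, p_vol = spearmanr(distinctiveness, vol_gap)
    # Compare volatility gaps of the top vs. bottom third of communities by distinctiveness.
    lo, hi = np.percentile(distinctiveness, [100 / 3, 200 / 3])
    top_third = vol_gap[distinctiveness >= hi]
    bottom_third = vol_gap[distinctiveness <= lo]
    _, p_mwu = mannwhitneyu(top_third, bottom_third, alternative="two-sided")
    return {"rho_specificity": rho_spec, "p_specificity": p_spec,
            "rho_volatility": rho_vol, "p_volatility": p_vol,
            "mean_vol_gap_top": float(top_third.mean()),
            "mean_vol_gap_bottom": float(bottom_third.mean()),
            "mannwhitney_p": p_mwu}
```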
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268, NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities where the bulk of the contributions are in a foreign language. |
edb2d24d6d10af13931b3a47a6543bd469752f0c | edb2d24d6d10af13931b3a47a6543bd469752f0c_1 | Q: How did the select the 300 Reddit communities for comparison?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use $c$ to denote one community within a set $\mathcal{C}$ of communities, and $t$ to denote one time period within the entire history $T$ of $c$. We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, $c^t$. Given a word $w$ used within a particular community $c$ at time $t$, we define two word-level measures:
Specificity. We quantify the specificity $S(w, c)$ of $w$ to $c$ by calculating the PMI of $w$ and $c$, relative to $\mathcal{C}$: $S(w, c) = \log \frac{f_{c}(w)}{f_{\mathcal{C}}(w)}$,
where $f_{c}(w)$ is $w$'s frequency in $c$. $w$ is specific to $c$ if it occurs more frequently in $c$ than in the entire set $\mathcal{C}$, hence distinguishing this community from the rest. A word $w$ whose occurrence is decoupled from $c$, and thus has $S(w, c)$ close to 0, is said to be generic.
We compute values of $S(w, c^t)$ for each time period $t$ in $T$; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility $V(w, c^t)$ of $w$ to $c^t$ as the PMI of $w$ and $c^t$ relative to $T$, the entire history of $c$: $V(w, c^t) = \log \frac{f_{c^t}(w)}{f_{c}(w)}$, where $f_{c}(w)$ is $w$'s frequency over the entire history of $c$.
A word $w$ is volatile at time $t$ in $c$ if it occurs more frequently at $t$ than in the entire history $T$, behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has $V(w, c^t)$ close to 0, is said to be stable.
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance $d$ in $c^t$, $S(d, c^t)$, as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community $c^t$ as the average specificity of all utterances in $c^t$. We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community $c^t$ as the average volatility of all utterances in $c^t$. We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time.
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities, for a total of 4,872 community-months.
Estimating linguistic measures. We estimate word frequencies $f_{c^t}(w)$, and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based on ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each $c^t$, to score comments and communities.
Typology output on Reddit. The distribution of average distinctiveness and dynamicity across Reddit communities is shown in Figure FIGREF3.B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9.
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's $\rho$ = 0.70, $p$ < 0.001, computed with community points averaged over months; Figure FIGREF11.A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's $\rho$ = 0.33, $p$ < 0.001; Figure FIGREF11.A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's $\rho$ = 0.41, $p$ < 0.001, computed over all community points; Figure FIGREF11.B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's $\rho$ = 0.03, $p$ = 0.77; Figure FIGREF11.B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0
where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
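The sketch below illustrates one possible implementation of an SLM and of the gap itself. The add-one smoothing and the specific form of the relative difference are assumptions made for illustration and may differ from the exact choices used in the analysis; in the full procedure this would be wrapped in the bootstrap over 100 SLMs per community-month described above.

from collections import Counter
from math import log2

def train_slm(spans):
    # Bigram snapshot language model estimated from the sampled 10-word spans.
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for span in spans:
        vocab.update(span)
        for a, b in zip(span, span[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    def prob(a, b):
        # Add-one smoothed P(b | a); the actual smoothing scheme is an assumption.
        return (bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab))
    return prob

def cross_entropy(comment, prob):
    # Per-bigram cross-entropy of a tokenized comment under the snapshot model.
    pairs = list(zip(comment, comment[1:]))
    return -sum(log2(prob(a, b)) for a, b in pairs) / len(pairs)

def acculturation_gap(prob, active_comments, outsider_comments):
    h_act = sum(cross_entropy(c, prob) for c in active_comments) / len(active_comments)
    h_out = sum(cross_entropy(c, prob) for c in outsider_comments) / len(outsider_comments)
    return (h_out - h_act) / ((h_out + h_act) / 2)  # one way to take a relative difference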
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
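As an illustration, the sketch below computes both gaps for a single community from per-comment specificity and volatility scores grouped by author. The macroaveraging follows the description above, while the exact normalization of the relative difference is an assumption.

def macro_average(scores_by_user):
    # Average each user's comment scores first, then average across users.
    per_user = [sum(s) / len(s) for s in scores_by_user.values()]
    return sum(per_user) / len(per_user)

def affinity_gap(active_scores_by_user, outsider_scores_by_user):
    a = macro_average(active_scores_by_user)    # active community members
    o = macro_average(outsider_scores_by_user)  # outsiders
    return (a - o) / ((abs(a) + abs(o)) / 2)    # positive when active users score higher

# The specificity gap uses per-comment specificity scores as input,
# and the volatility gap uses per-comment volatility scores.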
We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
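These checks map directly onto standard routines in scipy; the sketch below assumes parallel per-community arrays of the distinctiveness measure and the gaps defined above, which are hypothetical inputs here.

import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

def gap_statistics(distinctiveness, specificity_gap, volatility_gap):
    rho_spec, p_spec = spearmanr(distinctiveness, specificity_gap)
    rho_vol, p_vol = spearmanr(distinctiveness, volatility_gap)
    order = np.argsort(distinctiveness)
    k = len(order) // 3
    top_third = np.asarray(volatility_gap)[order[-k:]]    # most distinctive communities
    bottom_third = np.asarray(volatility_gap)[order[:k]]  # most generic communities
    u_stat, p_u = mannwhitneyu(top_third, bottom_third)
    return (rho_spec, p_spec), (rho_vol, p_vol), (u_stat, p_u)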
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | They collect subreddits from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate the measures, in at least 4 months of the subreddit's history. They compute their measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. |
938cf30c4f1d14fa182e82919e16072fdbcf2a82 | 938cf30c4f1d14fa182e82919e16072fdbcf2a82_0 | Q: How do the authors measure how temporally dynamic a community is?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:
Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 , INLINEFORM6
where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.
We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : INLINEFORM7
A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 .
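A compact sketch of these definitions is given below, assuming precomputed word counts where counts[c][t] is a Counter over the words used in community c during month t. Whether the background distribution for specificity is restricted to the same month or pooled over the whole dataset is left open in the description above; the pooled variant shown here is an assumption.

from math import log2

def specificity(w, c, t, counts):
    # PMI of word w and community c (at time t) relative to the entire set of communities.
    p_in_c = counts[c][t][w] / sum(counts[c][t].values())
    background_w = sum(counts[c2][t2][w] for c2 in counts for t2 in counts[c2])
    background_total = sum(sum(cnt.values()) for c2 in counts for cnt in counts[c2].values())
    return log2(p_in_c / (background_w / background_total))

def volatility(w, c, t, counts):
    # PMI of word w and month t relative to community c's entire history.
    p_at_t = counts[c][t][w] / sum(counts[c][t].values())
    history_w = sum(counts[c][t2][w] for t2 in counts[c])
    history_total = sum(sum(cnt.values()) for cnt in counts[c].values())
    return log2(p_at_t / (history_w / history_total))

def utterance_score(words, score_fn, c, t, counts):
    # Specificity (or volatility) of an utterance is the average over its words.
    return sum(score_fn(w, c, t, counts) for w in words) / len(words)

# Distinctiveness of a community-month is then the average utterance specificity over all
# utterances in that month; dynamicity is the average utterance volatility.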
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).
Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
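A minimal sketch of this controlled counting procedure is shown below, assuming the top-level comments of one community-month have already been filtered and tokenized (e.g., down to nouns); the data structures are hypothetical.

from collections import Counter

def controlled_word_counts(top_level_comments):
    # top_level_comments: iterable of (user, tokens) pairs for one community-month.
    words_per_user = {}
    for user, tokens in top_level_comments:
        words_per_user.setdefault(user, set()).update(tokens)  # count each word once per user
    counts = Counter()
    for words in words_per_user.values():
        counts.update(words)
    return counts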
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based off of ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.
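The corresponding filter is a short function over the counts above; implementing "top 5th percentile of words by frequency" as the top 5% of words by frequency rank is one natural reading of the description, and is an assumption here.

def frequent_vocab(counts, pct=0.05):
    # Keep only the most frequent words in a community-month for scoring comments.
    ranked = [w for w, _ in counts.most_common()]
    return set(ranked[: max(1, int(len(ranked) * pct))])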
Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular, user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short- and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's INLINEFORM0 = 0.70, INLINEFORM1 0.001, computed with community points averaged over months; Figure FIGREF11 .A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's INLINEFORM2 = 0.33, INLINEFORM3 0.001; Figure FIGREF11 .A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
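The sketch below implements the retention measure and the cluster bootstrap, assuming a mapping from (subreddit, month) to the set of contributing users; the number of bootstrap resamples is an illustrative choice rather than the one used in the analysis.

import random

def monthly_retention(active_users, subreddit, month):
    # Proportion of month-t contributors who also contribute in month t+1.
    now = active_users[(subreddit, month)]
    nxt = active_users.get((subreddit, month + 1), set())
    return len(now & nxt) / len(now)

def cluster_bootstrap_ci(points_by_subreddit, n_boot=1000):
    # 95% CI for the mean, resampling whole subreddits (clusters) with replacement.
    subs = list(points_by_subreddit)
    means = []
    for _ in range(n_boot):
        resampled = [p for s in random.choices(subs, k=len(subs)) for p in points_by_subreddit[s]]
        means.append(sum(resampled) / len(resampled))
    means.sort()
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]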
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's INLINEFORM0 = 0.41, INLINEFORM1 0.001, computed over all community points; Figure FIGREF11 .B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's INLINEFORM2 = 0.03, INLINEFORM3 0.77; Figure FIGREF11 .B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0
where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's INLINEFORM0 0.34, INLINEFORM1 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's INLINEFORM0 0.53, INLINEFORM1 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean INLINEFORM2 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean INLINEFORM3 -0.047, Mann-Whitney U test INLINEFORM4 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | the average volatility of all utterances |
93f4ad6568207c9bd10d712a52f8de25b3ebadd4 | 93f4ad6568207c9bd10d712a52f8de25b3ebadd4_0 | Q: How do the authors measure how distinctive a community is?
Text: Introduction
“If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.”
— Italo Calvino, Invisible Cities
A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within.
One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns?
To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space.
Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identity and user engagement, we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and the dynamics of its temporal evolution.
Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format.
Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features.
Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members.
More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities.
More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity.
A typology of community identity
A community's identity derives from its members' common interests and shared experiences BIBREF15 , BIBREF20 . In this work, we structure the multi-community landscape along these two key dimensions of community identity: how distinctive a community's interests are, and how dynamic the community is over time.
We now proceed to outline our quantitative typology, which maps communities along these two dimensions. We start by providing an intuition through inspecting a few example communities. We then introduce a generalizable language-based methodology and use it to map a large set of Reddit communities onto the landscape defined by our typology of community identity.
Overview and intuition
In order to illustrate the diversity within the multi-community space, and to provide an intuition for the underlying structure captured by the proposed typology, we first examine a few example communities and draw attention to some key social dynamics that occur within them.
We consider four communities from Reddit: in Seahawks, fans of the Seahawks football team gather to discuss games and players; in BabyBumps, expecting mothers trade advice and updates on their pregnancy; Cooking consists of recipe ideas and general discussion about cooking; while in pics, users share various images of random things (like eels and hornets). We note that these communities are topically contrasting and foster fairly disjoint user bases. Additionally, these communities exhibit varied patterns of user engagement. While Seahawks maintains a devoted set of users from month to month, pics is dominated by transient users who post a few times and then depart.
Discussions within these communities also span varied sets of interests. Some of these interests are more specific to the community than others: risotto, for example, is seldom a discussion point beyond Cooking. Additionally, some interests consistently recur, while others are specific to a particular time: kitchens are a consistent focus point for cooking, but mint is only in season during spring. Coupling specificity and consistency we find interests such as easter, which isn't particularly specific to BabyBumps but gains prominence in that community around Easter (see Figure FIGREF3 .A for further examples).
These specific interests provide a window into the nature of the communities' interests as a whole, and by extension their community identities. Overall, discussions in Cooking focus on topics which are highly distinctive and consistently recur (like risotto). In contrast, discussions in Seahawks are highly dynamic, rapidly shifting over time as new games occur and players are traded in and out. In the remainder of this section we formally introduce a methodology for mapping communities in this space defined by their distinctiveness and dynamicity (examples in Figure FIGREF3 .B).
Language-based formalization
Our approach follows the intuition that a distinctive community will use language that is particularly specific, or unique, to that community. Similarly, a dynamic community will use volatile language that rapidly changes across successive windows of time. To capture this intuition automatically, we start by defining word-level measures of specificity and volatility. We then extend these word-level primitives to characterize entire comments, and the community itself.
Our characterizations of words in a community are motivated by methodology from prior literature that compares the frequency of a word in a particular setting to its frequency in some background distribution, in order to identify instances of linguistic variation BIBREF21 , BIBREF19 . Our particular framework makes this comparison by way of pointwise mutual information (PMI).
In the following, we use INLINEFORM0 to denote one community within a set INLINEFORM1 of communities, and INLINEFORM2 to denote one time period within the entire history INLINEFORM3 of INLINEFORM4 . We account for temporal as well as inter-community variation by computing word-level measures for each time period of each community's history, INLINEFORM5 . Given a word INLINEFORM6 used within a particular community INLINEFORM7 at time INLINEFORM8 , we define two word-level measures:
Specificity. We quantify the specificity INLINEFORM0 of INLINEFORM1 to INLINEFORM2 by calculating the PMI of INLINEFORM3 and INLINEFORM4 , relative to INLINEFORM5 , INLINEFORM6
where INLINEFORM0 is INLINEFORM1 's frequency in INLINEFORM2 . INLINEFORM3 is specific to INLINEFORM4 if it occurs more frequently in INLINEFORM5 than in the entire set INLINEFORM6 , hence distinguishing this community from the rest. A word INLINEFORM7 whose occurrence is decoupled from INLINEFORM8 , and thus has INLINEFORM9 close to 0, is said to be generic.
We compute values of INLINEFORM0 for each time period INLINEFORM1 in INLINEFORM2 ; in the above description we drop the time-based subscripts for clarity.
Volatility. We quantify the volatility INLINEFORM0 of INLINEFORM1 to INLINEFORM2 as the PMI of INLINEFORM3 and INLINEFORM4 relative to INLINEFORM5 , the entire history of INLINEFORM6 : INLINEFORM7
A word INLINEFORM0 is volatile at time INLINEFORM1 in INLINEFORM2 if it occurs more frequently at INLINEFORM3 than in the entire history INLINEFORM4 , behaving as a fad within a small window of time. A word that occurs with similar frequency across time, and hence has INLINEFORM5 close to 0, is said to be stable.
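As a concrete illustration, the following Python sketch computes these word-level measures from raw frequency counts; the function names and the unsmoothed maximum-likelihood estimates are our own assumptions rather than the authors' actual implementation.

```python
import math

def pmi(count_slice, total_slice, count_background, total_background):
    """PMI of a word in a slice (e.g. community c at month t) relative to a
    background distribution (all communities for specificity, the whole
    history of c for volatility). Unseen words map to 0 (generic/stable)."""
    if count_slice == 0 or count_background == 0:
        return 0.0
    return math.log((count_slice / total_slice) /
                    (count_background / total_background))

def specificity(word, counts_ct, counts_all_communities):
    """counts_ct: word counts in community c at time t;
    counts_all_communities: counts pooled over all communities."""
    return pmi(counts_ct[word], sum(counts_ct.values()),
               counts_all_communities[word], sum(counts_all_communities.values()))

def volatility(word, counts_ct, counts_history):
    """counts_history: word counts pooled over the entire history of c."""
    return pmi(counts_ct[word], sum(counts_ct.values()),
               counts_history[word], sum(counts_history.values()))
```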
Extending to utterances. Using our word-level primitives, we define the specificity of an utterance INLINEFORM0 in INLINEFORM1 , INLINEFORM2 as the average specificity of each word in the utterance. The volatility of utterances is defined analogously.
Community-level measures
Having described these word-level measures, we now proceed to establish the primary axes of our typology:
Distinctiveness. A community with a very distinctive identity will tend to have distinctive interests, expressed through specialized language. Formally, we define the distinctiveness of a community INLINEFORM0 as the average specificity of all utterances in INLINEFORM1 . We refer to a community with a less distinctive identity as being generic.
Dynamicity. A highly dynamic community constantly shifts interests from one time window to another, and these temporal variations are reflected in its use of volatile language. Formally, we define the dynamicity of a community INLINEFORM0 as the average volatility of all utterances in INLINEFORM1 . We refer to a community whose language is relatively consistent throughout time as being stable.
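The aggregation from words to utterances and then to whole communities can be sketched as follows; treating words outside the scored vocabulary as contributing nothing is an assumption on our part.

```python
import numpy as np

def utterance_score(tokens, word_scores):
    """Average word-level specificity (or volatility) over one utterance;
    words without a score in this community-month are skipped."""
    vals = [word_scores[w] for w in tokens if w in word_scores]
    return float(np.mean(vals)) if vals else 0.0

def community_level(utterances, word_scores):
    """Distinctiveness (or, with volatility scores, dynamicity) of one
    community-month: the average score of all of its utterances."""
    return float(np.mean([utterance_score(u, word_scores) for u in utterances]))
```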
In our subsequent analyses, we focus mostly on examining the average distinctiveness and dynamicity of a community over time, denoted INLINEFORM0 and INLINEFORM1 .
Applying the typology to Reddit
We now explain how our typology can be applied to the particular setting of Reddit, and describe the overall behaviour of our linguistic axes in this context.
Dataset description. Reddit is a popular website where users form and participate in discussion-based communities called subreddits. Within these communities, users post content—such as images, URLs, or questions—which often spark vibrant lengthy discussions in thread-based comment sections.
The website contains many highly active subreddits with thousands of active subscribers. These communities span an extremely rich variety of topical interests, as represented by the examples described earlier. They also vary along a rich multitude of structural dimensions, such as the number of users, the amount of conversation and social interaction, and the social norms determining which types of content become popular. The diversity and scope of Reddit's multicommunity ecosystem make it an ideal landscape in which to closely examine the relation between varying community identities and social dynamics.
Our full dataset consists of all subreddits on Reddit from January 2013 to December 2014, for which there are at least 500 words in the vocabulary used to estimate our measures, in at least 4 months of the subreddit's history. We compute our measures over the comments written by users in a community in time windows of months, for each sufficiently active month, and manually remove communities where the bulk of the contributions are in a foreign language. This results in 283 communities ( INLINEFORM0 ), for a total of 4,872 community-months ( INLINEFORM1 ).
Estimating linguistic measures. We estimate word frequencies INLINEFORM0 , and by extension each downstream measure, in a carefully controlled manner in order to ensure we capture robust and meaningful linguistic behaviour. First, we only consider top-level comments which are initial responses to a post, as the content of lower-level responses might reflect conventions of dialogue more than a community's high-level interests. Next, in order to prevent a few highly active users from dominating our frequency estimates, we count each unique word once per user, ignoring successive uses of the same word by the same user. This ensures that our word-level characterizations are not skewed by a small subset of highly active contributors.
In our subsequent analyses, we will only look at these measures computed over the nouns used in comments. In principle, our framework can be applied to any choice of vocabulary. However, in the case of Reddit using nouns provides a convenient degree of interpretability. We can easily understand the implication of a community preferentially mentioning a noun such as gamer or feminist, but interpreting the overuse of verbs or function words such as take or of is less straightforward. Additionally, in focusing on nouns we adopt the view emphasized in modern “third wave” accounts of sociolinguistic variation, that stylistic variation is inseparable from topical content BIBREF23 . In the case of online communities, the choice of what people choose to talk about serves as a primary signal of social identity. That said, a typology based on more purely stylistic differences is an interesting avenue for future work.
Accounting for rare words. One complication when using measures such as PMI, which are based off of ratios of frequencies, is that estimates for very infrequent words could be overemphasized BIBREF24 . Words that only appear a few times in a community tend to score at the extreme ends of our measures (e.g. as highly specific or highly generic), obfuscating the impact of more frequent words in the community. To address this issue, we discard the long tail of infrequent words in our analyses, using only the top 5th percentile of words, by frequency within each INLINEFORM0 , to score comments and communities.
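A sketch of this estimation procedure is given below; the input format (tuples of user, top-level flag and noun tokens) and the exact percentile computation are assumptions, but the two controls described above (counting each word once per user and keeping only the most frequent words) are implemented directly.

```python
from collections import Counter
import numpy as np

def word_counts(comments):
    """comments: iterable of (user_id, is_top_level, noun_tokens).
    Only top-level comments are used, and each (user, word) pair is counted
    at most once, so a few prolific users cannot dominate the estimates."""
    seen, counts = set(), Counter()
    for user, is_top_level, tokens in comments:
        if not is_top_level:
            continue
        for w in tokens:
            if (user, w) not in seen:
                seen.add((user, w))
                counts[w] += 1
    return counts

def scored_vocabulary(counts, percentile=95):
    """Drop the long tail: keep only the top 5th percentile of words by frequency."""
    cutoff = np.percentile(list(counts.values()), percentile)
    return {w for w, c in counts.items() if c >= cutoff}
```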
Typology output on Reddit. The distribution of INLINEFORM0 and INLINEFORM1 across Reddit communities is shown in Figure FIGREF3 .B, along with examples of communities at the extremes of our typology. We find that interpretable groupings of communities emerge at various points within our axes. For instance, highly distinctive and dynamic communities tend to focus on rapidly-updating interests like sports teams and games, while generic and consistent communities tend to be large “link-sharing” hubs where users generally post content with no clear dominating themes. More examples of communities at the extremes of our typology are shown in Table TABREF9 .
We note that these groupings capture abstract properties of a community's content that go beyond its topic. For instance, our typology relates topically contrasting communities such as yugioh (which is about a popular trading card game) and Seahawks through the shared trait that their content is particularly distinctive. Additionally, the axes can clarify differences between topically similar communities: while startrek and thewalkingdead both focus on TV shows, startrek is less dynamic than the median community, while thewalkingdead is among the most dynamic communities, as the show was still airing during the years considered.
Community identity and user retention
We have seen that our typology produces qualitatively satisfying groupings of communities according to the nature of their collective identity. This section shows that there is an informative and highly predictive relationship between a community's position in this typology and its user engagement patterns. We find that communities with distinctive and dynamic identities have higher rates of user engagement, and further show that a community's position in our identity-based landscape holds important predictive information that is complementary to a strong activity baseline.
In particular user retention is one of the most crucial aspects of engagement and is critical to community maintenance BIBREF2 . We quantify how successful communities are at retaining users in terms of both short and long-term commitment. Our results indicate that rates of user retention vary drastically, yet systematically according to how distinctive and dynamic a community is (Figure FIGREF3 ).
We find a strong, explanatory relationship between the temporal consistency of a community's identity and rates of user engagement: dynamic communities that continually update and renew their discussion content tend to have far higher rates of user engagement. The relationship between distinctiveness and engagement is less universal, but still highly informative: niche communities tend to engender strong, focused interest from users at one particular point in time, though this does not necessarily translate into long-term retention.
Community-type and monthly retention
We find that dynamic communities, such as Seahawks or starcraft, have substantially higher rates of monthly user retention than more stable communities (Spearman's ρ = 0.70, p < 0.001, computed with community points averaged over months; Figure FIGREF11.A, left). Similarly, more distinctive communities, like Cooking and Naruto, exhibit moderately higher monthly retention rates than more generic communities (Spearman's ρ = 0.33, p < 0.001; Figure FIGREF11.A, right).
Monthly retention is formally defined as the proportion of users who contribute in month INLINEFORM0 and then return to contribute again in month INLINEFORM1 . Each monthly datapoint is treated as unique and the trends in Figure FIGREF11 show 95% bootstrapped confidence intervals, cluster-resampled at the level of subreddit BIBREF25 , to account for differences in the number of months each subreddit contributes to the data.
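The retention measure itself is straightforward to compute; a minimal sketch is shown below (the cluster-resampled bootstrap confidence intervals are omitted).

```python
def monthly_retention(contributors_by_month):
    """contributors_by_month: dict mapping a month (e.g. '2013-05') to the set
    of users who contributed in that month. Returns, for each month, the share
    of its contributors who also contribute in the following month."""
    months = sorted(contributors_by_month)
    retention = {}
    for cur, nxt in zip(months, months[1:]):
        users = contributors_by_month[cur]
        if users:
            retention[cur] = len(users & contributors_by_month[nxt]) / len(users)
    return retention
```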
Importantly, we find that in the task of predicting community-level user retention our identity-based typology holds additional predictive value on top of strong baseline features based on community-size (# contributing users) and activity levels (mean # contributions per user), which are commonly used for churn prediction BIBREF7 . We compared out-of-sample predictive performance via leave-one-community-out cross validation using random forest regressors with ensembles of size 100, and otherwise default hyperparameters BIBREF26 . A model predicting average monthly retention based on a community's average distinctiveness and dynamicity achieves an average mean squared error ( INLINEFORM0 ) of INLINEFORM1 and INLINEFORM2 , while an analogous model predicting based on a community's size and average activity level (both log-transformed) achieves INLINEFORM4 and INLINEFORM5 . The difference between the two models is not statistically significant ( INLINEFORM6 , Wilcoxon signed-rank test). However, combining features from both models results in a large and statistically significant improvement over each independent model ( INLINEFORM7 , INLINEFORM8 , INLINEFORM9 Bonferroni-corrected pairwise Wilcoxon tests). These results indicate that our typology can explain variance in community-level retention rates, and provides information beyond what is present in standard activity-based features.
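The evaluation protocol can be reproduced along the following lines with scikit-learn; feature construction and the significance tests are not shown, and the variable names are placeholders rather than the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_score

def loco_mse(features, retention):
    """Leave-one-community-out mean squared error for a 100-tree random
    forest with otherwise default hyperparameters."""
    model = RandomForestRegressor(n_estimators=100)
    scores = cross_val_score(model, features, retention, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    return -scores.mean()

# Typical comparison: identity features [avg distinctiveness, avg dynamicity],
# activity features [log #users, log mean #contributions per user], and their
# combination via np.hstack([identity_features, activity_features]).
```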
Community-type and user tenure
As with monthly retention, we find a strong positive relationship between a community's dynamicity and the average number of months that a user will stay in that community (Spearman's ρ = 0.41, p < 0.001, computed over all community points; Figure FIGREF11.B, left). This verifies that the short-term trend observed for monthly retention translates into longer-term engagement and suggests that long-term user retention might be strongly driven by the extent to which a community continually provides novel content. Interestingly, there is no significant relationship between distinctiveness and long-term engagement (Spearman's ρ = 0.03, p = 0.77; Figure FIGREF11.B, right). Thus, while highly distinctive communities like RandomActsOfMakeup may generate focused commitment from users over a short period of time, such communities are unlikely to retain long-term users unless they also have sufficiently dynamic content.
To measure user tenures we focused on one slice of data (May, 2013) and measured how many months a user spends in each community, on average—the average number of months between a user's first and last comment in each community. We have activity data up until May 2015, so the maximum tenure is 24 months in this set-up, which is exceptionally long relative to the average community member (throughout our entire data less than INLINEFORM0 of users have tenures of more than 24 months in any community).
Community identity and acculturation
The previous section shows that there is a strong connection between the nature of a community's identity and its basic user engagement patterns. In this section, we probe the relationship between a community's identity and how permeable, or accessible, it is to outsiders.
We measure this phenomenon using what we call the acculturation gap, which compares the extent to which engaged vs. non-engaged users employ community-specific language. While previous work has found this gap to be large and predictive of future user engagement in two beer-review communities BIBREF5 , we find that the size of the acculturation gap depends strongly on the nature of a community's identity, with the gap being most pronounced in stable, highly distinctive communities (Figure FIGREF13 ).
This finding has important implications for our understanding of online communities. Though many works have analyzed the dynamics of “linguistic belonging” in online communities BIBREF16 , BIBREF28 , BIBREF5 , BIBREF17 , our results suggest that the process of linguistically fitting in is highly contingent on the nature of a community's identity. At one extreme, in generic communities like pics or worldnews there is no distinctive, linguistic identity for users to adopt.
To measure the acculturation gap for a community, we follow Danescu-Niculescu-Mizil et al. BIBREF5 and build “snapshot language models” (SLMs) for each community, which capture the linguistic state of a community at one point in time. Using these language models we can capture how linguistically close a particular utterance is to the community by measuring the cross-entropy of this utterance relative to the SLM: DISPLAYFORM0
where INLINEFORM0 is the probability assigned to bigram INLINEFORM1 from comment INLINEFORM2 in community-month INLINEFORM3 . We build the SLMs by randomly sampling 200 active users—defined as users with at least 5 comments in the respective community and month. For each of these 200 active users we select 5 random 10-word spans from 5 unique comments. To ensure robustness and maximize data efficiency, we construct 100 SLMs for each community-month pair that has enough data, bootstrap-resampling from the set of active users.
We compute a basic measure of the acculturation gap for a community-month INLINEFORM0 as the relative difference of the cross-entropy of comments by users active in INLINEFORM1 with that of singleton comments by outsiders—i.e., users who only ever commented once in INLINEFORM2 , but who are still active in Reddit in general: DISPLAYFORM0
INLINEFORM0 denotes the distribution over singleton comments, INLINEFORM1 denotes the distribution over comments from users active in INLINEFORM2 , and INLINEFORM3 the expected values of the cross-entropy over these respective distributions. For each bootstrap-sampled SLM we compute the cross-entropy of 50 comments by active users (10 comments from 5 randomly sampled active users, who were not used to construct the SLM) and 50 comments from randomly-sampled outsiders.
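The sketch below makes this computation concrete. The add-one-smoothed bigram model and the choice to normalise the gap by the active users' mean cross-entropy are simplifying assumptions on our part; they stand in for whatever smoothing and normalisation the snapshot language models actually use.

```python
import math
from collections import Counter

class SnapshotLM:
    """A bigram snapshot language model built from sampled 10-word spans."""
    def __init__(self, spans):
        self.bigrams = Counter()
        for tokens in spans:
            self.bigrams.update(zip(tokens, tokens[1:]))
        self.total = sum(self.bigrams.values())
        self.vocab = len(self.bigrams) + 1

    def prob(self, bigram):
        # add-one smoothing (an assumption, not necessarily the paper's choice)
        return (self.bigrams[bigram] + 1) / (self.total + self.vocab)

    def cross_entropy(self, tokens):
        """Negative mean log-probability of the comment's bigrams."""
        pairs = list(zip(tokens, tokens[1:]))
        if not pairs:
            return 0.0
        return -sum(math.log(self.prob(b)) for b in pairs) / len(pairs)

def acculturation_gap(slm, active_comments, outsider_comments):
    """Relative difference in mean cross-entropy between outsiders' singleton
    comments and active users' comments; normalising by the active-user mean
    is an assumed operationalisation of 'relative difference'."""
    e_act = sum(slm.cross_entropy(c) for c in active_comments) / len(active_comments)
    e_out = sum(slm.cross_entropy(c) for c in outsider_comments) / len(outsider_comments)
    return (e_out - e_act) / e_act
```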
Figure FIGREF13 .A shows that the acculturation gap varies substantially with how distinctive and dynamic a community is. Highly distinctive communities have far higher acculturation gaps, while dynamicity exhibits a non-linear relationship: relatively stable communities have a higher linguistic `entry barrier', as do very dynamic ones. Thus, in communities like IAmA (a general Q&A forum) that are very generic, with content that is highly, but not extremely dynamic, outsiders are at no disadvantage in matching the community's language. In contrast, the acculturation gap is large in stable, distinctive communities like Cooking that have consistent community-specific language. The gap is also large in extremely dynamic communities like Seahawks, which perhaps require more attention or interest on the part of active users to keep up-to-date with recent trends in content.
These results show that phenomena like the acculturation gap, which were previously observed in individual communities BIBREF28 , BIBREF5 , cannot be easily generalized to a larger, heterogeneous set of communities. At the same time, we see that structuring the space of possible communities enables us to observe systematic patterns in how such phenomena vary.
Community identity and content affinity
Through the acculturation gap, we have shown that communities exhibit large yet systematic variations in their permeability to outsiders. We now turn to understanding the divide in commenting behaviour between outsiders and active community members at a finer granularity, by focusing on two particular ways in which such gaps might manifest among users: through different levels of engagement with specific content and with temporally volatile content.
Echoing previous results, we find that community type mediates the extent and nature of the divide in content affinity. While in distinctive communities active members have a higher affinity for both community-specific content and for highly volatile content, the opposite is true for generic communities, where it is the outsiders who engage more with volatile content.
We quantify these divides in content affinity by measuring differences in the language of the comments written by active users and outsiders. Concretely, for each community INLINEFORM0 , we define the specificity gap INLINEFORM1 as the relative difference between the average specificity of comments authored by active members, and by outsiders, where these measures are macroaveraged over users. Large, positive INLINEFORM2 then occur in communities where active users tend to engage with substantially more community-specific content than outsiders.
We analogously define the volatility gap INLINEFORM0 as the relative difference in volatilities of active member and outsider comments. Large, positive values of INLINEFORM1 characterize communities where active users tend to have more volatile interests than outsiders, while negative values indicate communities where active users tend to have more stable interests.
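Both gaps can be computed from per-comment scores as sketched below; the macro-averaging mirrors the description above, while the exact normalisation of the relative difference is an assumption.

```python
import numpy as np

def macro_average(scores_by_user):
    """Average each user's comment scores first, then average across users,
    so prolific users do not dominate."""
    return float(np.mean([np.mean(s) for s in scores_by_user.values()]))

def affinity_gap(active_scores_by_user, outsider_scores_by_user):
    """Relative difference between active members' and outsiders' macro-averaged
    comment specificity (or volatility); dividing by the magnitude of the
    outsider average is an assumed normalisation."""
    a = macro_average(active_scores_by_user)
    o = macro_average(outsider_scores_by_user)
    return (a - o) / abs(o)
```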
We find that in 94% of communities, INLINEFORM0 , indicating (somewhat unsurprisingly) that in almost all communities, active users tend to engage with more community-specific content than outsiders. However, the magnitude of this divide can vary greatly: for instance, in Homebrewing, which is dedicated to brewing beer, the divide is very pronounced ( INLINEFORM1 0.33) compared to funny, a large hub where users share humorous content ( INLINEFORM2 0.011).
The nature of the volatility gap is comparatively more varied. In Homebrewing ( INLINEFORM0 0.16), as in 68% of communities, active users tend to write more volatile comments than outsiders ( INLINEFORM1 0). However, communities like funny ( INLINEFORM2 -0.16), where active users contribute relatively stable comments compared to outsiders ( INLINEFORM3 0), are also well-represented on Reddit.
To understand whether these variations manifest systematically across communities, we examine the relationship between divides in content affinity and community type. In particular, following the intuition that active users have a relatively high affinity for a community's niche, we expect that the distinctiveness of a community will be a salient mediator of specificity and volatility gaps. Indeed, we find a strong correlation between a community's distinctiveness and its specificity gap (Spearman's ρ = 0.34, p < 0.001).
We also find a strong correlation between distinctiveness and community volatility gaps (Spearman's ρ = 0.53, p < 0.001). In particular, we see that among the most distinctive communities (i.e., the top third of communities by distinctiveness), active users tend to write more volatile comments than outsiders (mean volatility gap of 0.098), while across the most generic communities (i.e., the bottom third), active users tend to write more stable comments (mean volatility gap of -0.047, Mann-Whitney U test p < 0.001). The relative affinity of outsiders for volatile content in these communities indicates that temporally ephemeral content might serve as an entry point into such a community, without necessarily engaging users in the long term.
Further related work
Our language-based typology and analysis of user engagement draws on and contributes to several distinct research threads, in addition to the many foundational studies cited in the previous sections.
Multicommunity studies. Our investigation of user engagement in multicommunity settings follows prior literature which has examined differences in user and community dynamics across various online groups, such as email listservs. Such studies have primarily related variations in user behaviour to structural features such as group size and volume of content BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . In focusing on the linguistic content of communities, we extend this research by providing a content-based framework through which user engagement can be examined.
Reddit has been a particularly useful setting for studying multiple communities in prior work. Such studies have mostly focused on characterizing how individual users engage across a multi-community platform BIBREF34 , BIBREF35 , or on specific user engagement patterns such as loyalty to particular communities BIBREF22 . We complement these studies by seeking to understand how features of communities can mediate a broad array of user engagement patterns within them.
Typologies of online communities. Prior attempts to typologize online communities have primarily been qualitative and based on hand-designed categories, making them difficult to apply at scale. These typologies often hinge on having some well-defined function the community serves, such as supporting a business or non-profit cause BIBREF36 , which can be difficult or impossible to identify in massive, anonymous multi-community settings. Other typologies emphasize differences in communication platforms and other functional requirements BIBREF37 , BIBREF38 , which are important but preclude analyzing differences between communities within the same multi-community platform. Similarly, previous computational methods of characterizing multiple communities have relied on the presence of markers such as affixes in community names BIBREF35 , or platform-specific affordances such as evaluation mechanisms BIBREF39 .
Our typology is also distinguished from community detection techniques that rely on structural or functional categorizations BIBREF40 , BIBREF41 . While the focus of those studies is to identify and characterize sub-communities within a larger social network, our typology provides a characterization of pre-defined communities based on the nature of their identity.
Broader work on collective identity. Our focus on community identity dovetails with a long line of research on collective identity and user engagement, in both online and offline communities BIBREF42 , BIBREF1 , BIBREF2 . These studies focus on individual-level psychological manifestations of collective (or social) identity, and their relationship to user engagement BIBREF42 , BIBREF43 , BIBREF44 , BIBREF0 .
In contrast, we seek to characterize community identities at an aggregate level and in an interpretable manner, with the goal of systematically organizing the diverse space of online communities. Typologies of this kind are critical to these broader, social-psychological studies of collective identity: they allow researchers to systematically analyze how the psychological manifestations and implications of collective identity vary across diverse sets of communities.
Conclusion and future work
Our current understanding of engagement patterns in online communities is pieced together from glimpses offered by several disparate studies focusing on a few individual communities. This work calls attention to the need for a method to systematically reason about similarities and differences across communities. By proposing a way to structure the multi-community space, we find not only that radically contrasting engagement patterns emerge in different parts of this space, but also that this variation can be at least partly explained by the type of identity each community fosters.
Our choice in this work is to structure the multi-community space according to a typology based on community identity, as reflected in language use. We show that this effectively explains cross-community variation of three different user engagement measures—retention, acculturation and content affinity—and complements measures based on activity and size with additional interpretable information. For example, we find that in niche communities established members are more likely to engage with volatile content than outsiders, while the opposite is true in generic communities. Such insights can be useful for community maintainers seeking to understand engagement patterns in their own communities.
One main area of future research is to examine the temporal dynamics in the multi-community landscape. By averaging our measures of distinctiveness and dynamicity across time, our present study treated community identity as a static property. However, as communities experience internal changes and respond to external events, we can expect the nature of their identity to shift as well. For instance, the relative consistency of harrypotter may be disrupted by the release of a new novel, while Seahawks may foster different identities during and between football seasons. Conversely, a community's type may also mediate the impact of new events. Moving beyond a static view of community identity could enable us to better understand how temporal phenomena such as linguistic change manifest across different communities, and also provide a more nuanced view of user engagement—for instance, are communities more welcoming to newcomers at certain points in their lifecycle?
Another important avenue of future work is to explore other ways of mapping the landscape of online communities. For example, combining structural properties of communities BIBREF40 with topical information BIBREF35 and with our identity-based measures could further characterize and explain variations in user engagement patterns. Furthermore, extending the present analyses to even more diverse communities supported by different platforms (e.g., GitHub, StackExchange, Wikipedia) could enable the characterization of more complex behavioral patterns such as collaboration and altruism, which become salient in different multicommunity landscapes.
Acknowledgements
The authors thank Liye Fu, Jack Hessel, David Jurgens and Lillian Lee for their helpful comments. This research has been supported in part by a Discovery and Innovation Research Seed Award from the Office of the Vice Provost for Research at Cornell, NSF CNS-1010921, IIS-1149837, IIS-1514268 NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, DARPA NGS2, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen. | the average specificity of all utterances |
71a7153e12879defa186bfb6dbafe79c74265e10 | 71a7153e12879defa186bfb6dbafe79c74265e10_0 | Q: What data is the language model pretrained on?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), in which structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the cut is made during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because biomedical systems and biomedical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical research tasks.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings from the original text (e.g. the result of a laboratory test) and values inferred from part of the original text (e.g. a calculated tumor size). Researchers have to construct a different model for each task, which is already costly, and each of these models calls for a large amount of labeled data. Moreover, labeling the amount of data necessary to train a neural network requires expensive labor. To cope with this, researchers often turn to rule-based structuring methods, which tend to have a lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training datasets. Pipeline methods break down the entire process into several pieces, which improves performance and generality. However, as the pipeline depth grows, error propagation has a greater impact on performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most relevant text in the original paragraph. In some cases, this is indeed already the final answer (e.g., when extracting a sub-string). In other cases, several further steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a substantial improvement on the QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a downstream problem that is highly relevant to practical applications. Most existing studies are case-by-case; few of them are developed for the general-purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, which is more of an art than a science and incurs extensive trial-and-error experimentation. Fukuda et al. BIBREF0 identified protein names in biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves recall through a shortest-path edit distance algorithm. This kind of approach is interpretable and easy to modify. However, as the number of rules increases, adding new rules to an existing system can quickly turn into a rule-maintenance disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models can be applied to another task because of differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, relying mainly on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21 and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the depth of the pipeline grows, error propagation becomes more and more serious. Conversely, using fewer components to decrease the pipeline depth leads to poor performance. The upper bound of this kind of method is therefore mainly determined by its weakest component.
Related Work ::: Pre-trained Language Model
Recently, several works have focused on pre-trained language representation models that capture linguistic information from text and then utilize this information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a component shared across natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language models. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order, and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing pre-trained language models is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of these works demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result for query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps, such as entity name conversion and negative word recognition, are deployed to generate the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is a step common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, x_{i+2}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离" (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, reducing the amount of training data required.
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we expect that extracting what is common to these problems and unifying the output format will make the model more powerful than dedicated models, while also allowing the data from other tasks to supplement the training data for a specific clinical task.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain the one-hot CNER output tagging sequences for the query text ($I_{nq}$) and the paragraph text ($I_{nt}$) with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated into $I_n$. Meanwhile, the paragraph text $X$ and the query text $Q$ are organized and passed to the contextualized representation model, here the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of the answer-related text. We define this calculation as a classification of each word as the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation generates an encoded vector of both of them. Here we use the pre-trained BERT-base model BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. A hidden vector $V_s$ which contains both query and text information is then generated by the BERT-base model.
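As one possible way to obtain $V_s$ (the paper itself uses Keras/TensorFlow with BERT parameters pre-trained by Google on a Chinese general corpus), the HuggingFace transformers library builds exactly this input format; the checkpoint name and sequence length below are illustrative assumptions.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

query = "上切缘距离"                      # proximal resection margin
text = "距上切端6.0cm、下切端8.0cm。"      # excerpt of a pathology report

# Builds '[CLS] query [SEP] text [SEP]', the binary sentence-type ids and the
# attention mask (which plays the role of the zero-padding mask) in one call.
enc = tokenizer(query, text, return_tensors="pt",
                padding="max_length", truncation=True, max_length=128)
with torch.no_grad():
    V_s = bert(**enc).last_hidden_state   # contextualized representation, shape (1, 128, 768)
```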
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags, where each character of the original sentence is assigned a label following the tag scheme. In this paper we recognize the entities with the model from our previous work BIBREF12, but trained on another corpus which has 44 entity types, including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where “远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
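One possible vectorisation of the CNER output is sketched below; the exact column layout is not specified in the paper, so the ordering here is an assumption.

```python
import numpy as np

def bieos_one_hot(tags, entity_types):
    """Convert a BIEOS tag sequence (e.g. ['B-operation', 'I-operation',
    'E-operation', 'S-number', 'O', ...]) into a one-hot matrix of shape
    (sequence length, 4 * number of entity types + 1), with one shared 'O' column."""
    labels = ["O"] + [f"{p}-{t}" for t in entity_types for p in "BIES"]
    index = {label: i for i, label in enumerate(labels)}
    one_hot = np.zeros((len(tags), len(labels)), dtype=np.float32)
    for i, tag in enumerate(tags):
        one_hot[i, index.get(tag, 0)] = 1.0   # unknown tags fall back to 'O'
    return one_hot
```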
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, since they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is simply the concatenation of the two matrices along the feature dimension.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as $MultiHead(Q, K, V) = Concat(head_1, ..., head_h)W_o$ with $head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)$, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension.
$Attention$ denotes the traditional scaled dot-product attention, defined as $Attention(Q, K, V) = softmax(\frac{QK^T}{\sqrt{d_k}})V$,
where $d_k$ is the dimension of the hidden vectors.
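A minimal PyTorch sketch of the two integration variants is given below; the feature dimensions and the number of heads are illustrative, and routing $V_s$ as queries with $I_n$ as keys/values is one possible arrangement rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class Integration(nn.Module):
    """Integrate the contextualized representation V_s (batch, seq, d_bert)
    with the named entity information I_n (batch, seq, d_ner)."""
    def __init__(self, d_bert=768, d_ner=354, heads=16, mode="concat"):
        super().__init__()
        self.mode = mode
        if mode == "attention":
            # queries from V_s, keys/values from I_n; the output projection W_o
            # is applied internally by nn.MultiheadAttention
            self.attn = nn.MultiheadAttention(embed_dim=d_bert, num_heads=heads,
                                              kdim=d_ner, vdim=d_ner,
                                              batch_first=True)

    def forward(self, v_s, i_n):
        if self.mode == "concat":
            return torch.cat([v_s, i_n], dim=-1)   # (batch, seq, d_bert + d_ner)
        mixed, _ = self.attn(v_s, i_n, i_n)         # (batch, seq, d_bert)
        return mixed
```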
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end index of the answer-related text. We define this calculation as a classification of each word as the start or end word. We use a feed forward network (FFN) to compress the representation and calculate a score $H_f$ for each word, which reduces the dimension to $\left\langle l_s, 2\right\rangle $, where $l_s$ denotes the length of the sequence.
Then we permute the two dimensions for the softmax calculation, and compute the loss from the resulting start and end probability distributions.
Here, $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes the probability of each word being the end word. $y_s$ and $y_e$ denote the true start and end positions of the answer, respectively.
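The prediction layer and loss described above can be written as follows; treating the loss as the sum of cross-entropy terms for the start and end positions, and the hidden size of the FFN, are assumptions consistent with, but not explicitly stated in, the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanPredictor(nn.Module):
    """Scores every token as the start or end of the answer-related text."""
    def __init__(self, d_in, d_hidden=256):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 2))   # H_f: (batch, l_s, 2)

    def forward(self, h_i, y_start=None, y_end=None):
        h_f = self.ffn(h_i)
        start_logits, end_logits = h_f[..., 0], h_f[..., 1]   # the 'permute' step
        if y_start is None:
            return start_logits.argmax(-1), end_logits.argmax(-1)
        # cross-entropy over sequence positions for the true start/end indices
        return (F.cross_entropy(start_logits, y_start) +
                F.cross_entropy(end_logits, y_end))
```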
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained with a low learning rate to fetch fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with the new prediction layer to achieve better contextualized representation performance and to speed up training. We then deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
Experimental Studies
This section is devoted to experimentally evaluating our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric that measures the average overlap between the prediction and the ground truth answer.
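The two metrics can be computed as follows; evaluating the overlap at the character level (natural for Chinese) is our assumption.

```python
from collections import Counter

def em_score(prediction, ground_truths):
    """Exact match against any acceptable ground-truth answer."""
    return float(any(prediction == gt for gt in ground_truths))

def f1_score(prediction, ground_truth):
    """Average character overlap between prediction and ground truth."""
    common = Counter(prediction) & Counter(ground_truth)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(prediction)
    recall = overlap / len(ground_truth)
    return 2 * precision * recall / (precision + recall)
```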
Experimental Studies ::: Experimental Settings
To implement the deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38, whose parameters are the default settings except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited graphics memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a Chinese general corpus. Named entity recognition is applied to both the pathology report texts and the query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we can only compare with the BERT-Base model instead of BERT-Large. A prediction layer is attached to the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information for the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model did not outperform QANet by much in F$_1$-score (only 0.13%), it significantly outperformed it by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model; we experimentally compare these two integration methods. As named entity recognition is applied to both the pathology report text and the query text, there are two integration steps here: one integrates the two named entity information sequences, and the other integrates the contextualized representation with the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector size for each head.
From Table TABREF27, we can observe that applying concatenation in both periods achieved the best performance on both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both period one and period two could not reach convergence in our experiments, probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score. Applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been transformed through many layers, whereas the named entity information representation is very close to the input. Given the large number of parameters in multi-head attention, massive training is required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This can probably also explain why applying the multi-head attention method in both periods cannot converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size. However, tuning these hyper-parameters may have an impact on the result. Tuning the integration method and utilizing larger datasets may help to improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process only uses the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance on both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, while the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most of the other results also improved considerably. This again demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from parameters pre-trained on the mixed data; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, mixed-data pre-trained parameters improve performance significantly over models trained only on task-specific data. Except for tumor size, the results improve by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters bring a clear benefit to each specific task. Meanwhile, performance on the tasks not trained in the final stage also rises from around 0 to 60 or 70 percent, which indicates commonality between the different tasks that our proposed QA-CTS task makes learnable. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on that dataset.
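The final recipe, pre-training on the mixed data and then fine-tuning on each task-specific dataset, can be sketched as follows; `build_model`, `train`, and `evaluate` are placeholder helpers, not functions from the actual implementation.

```python
def pretrain_then_finetune(mixed_data, task_datasets, build_model, train, evaluate):
    """Pre-train once on the mixed data, then fine-tune one copy per specific task."""
    base_model = build_model()
    train(base_model, mixed_data)              # stage one: pre-training on mixed data
    base_weights = base_model.get_weights()    # Keras-style snapshot of the parameters

    scores = {}
    for task_name, task_data in task_datasets.items():
        model = build_model()
        model.set_weights(base_weights)        # start from the mixed-data parameters
        train(model, task_data["train"])       # stage two: task-specific fine-tuning
        scores[task_name] = evaluate(model, task_data["test"])
    return scores
```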
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and allows their datasets to be used jointly. We also propose a novel model that integrates named entity information into a pre-trained language model and adapts it to the QA-CTS task. First, the sequential named entity recognition results on the paragraph and query texts are integrated. A pre-trained language model then produces the contextualized representation of both texts. Finally, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by QA-CTS are also shown to improve performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on that dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | Chinese general corpus |
71a7153e12879defa186bfb6dbafe79c74265e10 | 71a7153e12879defa186bfb6dbafe79c74265e10_1 | Q: What data is the language model pretrained on? | Unanswerable |
85d1831c28d3c19c84472589a252e28e9884500f | 85d1831c28d3c19c84472589a252e28e9884500f_0 | Q: What baselines is the proposed model compared against? | BERT-Base, QANet |
85d1831c28d3c19c84472589a252e28e9884500f | 85d1831c28d3c19c84472589a252e28e9884500f_1 | Q: What baselines is the proposed model compared against?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models can be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the pipeline depth grows, error propagation becomes more and more serious. On the contrary, using fewer components to decrease the pipeline depth leads to poor performance. So the upper limit of this method depends mainly on its worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. Firstly, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are deployed to generate the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is a common step among all tasks. Traditional methods regard both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query UTF8gkai“上切缘距离"(proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. Such a definition unifies the output format of CTS tasks and therefore makes the training data shareable, which reduces the required amount of training data.
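To make the span-based formulation above concrete, the following minimal sketch shows one way a QA-CTS instance could be represented as a (query, paragraph, start, end) record, with the answer recovered as a substring of the original paragraph. The class and field names, and the English stand-in example, are illustrative assumptions rather than the authors' actual data format.

```python
from dataclasses import dataclass

@dataclass
class QACTSInstance:
    """One QA-CTS example: a query, a paragraph and the answer span (character indices)."""
    query: str       # e.g. the query term "proximal resection margin"
    paragraph: str   # free-text pathology report sentence(s)
    start: int       # index of the first character of the answer-related substring
    end: int         # index one past the last character of the substring

    def answer(self) -> str:
        # The unified QA-CTS output: a substring of the original paragraph text.
        return self.paragraph[self.start:self.end]

# Toy English stand-in for the pathology-report example discussed above.
example = QACTSInstance(
    query="proximal resection margin",
    paragraph="Distal gastrectomy specimen: 6.0cm from the proximal resection margin.",
    start=29,
    end=34,
)
print(example.answer())  # -> "6.0cm"
```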
Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$, which contains both query and text information, is generated by the BERT-base model.
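A minimal sketch of this input packing is shown below, using plain Python lists and character-level tokens; it assumes a fixed maximum length and the standard [CLS]/[SEP]/[PAD] special tokens, and omits the mapping from tokens to vocabulary ids that a real BERT implementation would perform.

```python
def build_bert_input(query_tokens, text_tokens, max_len=64):
    """Pack a query Q and paragraph X as '[CLS] Q [SEP] X [SEP]' with segment ids and a mask."""
    tokens = ["[CLS]"] + list(query_tokens) + ["[SEP]"] + list(text_tokens) + ["[SEP]"]
    # Sentence-type input: 0 for the query segment (incl. [CLS] and its [SEP]), 1 for the paragraph.
    segment_ids = [0] * (len(query_tokens) + 2) + [1] * (len(text_tokens) + 1)
    # Mask: 1 for real tokens, 0 for zero padding, so padded positions can be ignored.
    mask = [1] * len(tokens)
    while len(tokens) < max_len:
        tokens.append("[PAD]")
        segment_ids.append(0)
        mask.append(0)
    return tokens[:max_len], segment_ids[:max_len], mask[:max_len]

# Chinese BERT tokenizes character by character, so a string works as a token sequence here.
tokens, segments, mask = build_bert_input("上切缘距离", "距上切端6.0cm、下切端8.0cm")
```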
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively.
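The sketch below illustrates how such a BIEOS tag sequence can be turned into the one-hot matrices $I_{nt}$ and $I_{nq}$; the tag inventory here is a small hypothetical subset of the 44 entity types, chosen only for illustration.

```python
import numpy as np

# Hypothetical subset of the tag set: BIEOS prefixes combined with two entity types.
TAGS = ["O", "B-NUM", "I-NUM", "E-NUM", "B-UNIT", "E-UNIT"]
TAG2ID = {tag: i for i, tag in enumerate(TAGS)}

def one_hot_ner(tag_sequence):
    """Convert a per-character BIEOS tag sequence into a one-hot matrix of shape (len, n_tags)."""
    matrix = np.zeros((len(tag_sequence), len(TAGS)), dtype=np.float32)
    for position, tag in enumerate(tag_sequence):
        matrix[position, TAG2ID[tag]] = 1.0
    return matrix

# "11.5cm": the four characters of "11.5" form one number entity, "cm" one unit entity.
I_nt = one_hot_ner(["B-NUM", "I-NUM", "I-NUM", "E-NUM", "B-UNIT", "E-UNIT"])
print(I_nt.shape)  # (6, 6)
```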
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ is used to project the concatenated matrix back to the original dimension.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the length of hidden vector.
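As a rough illustration of the two integration options, the following NumPy sketch contrasts feature-dimension concatenation with a single attention head; the dimensions are made up for the example, and the paper's multi-head version additionally stacks $h$ heads and projects back with $W_o$.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def integrate_by_concatenation(V_s, I_n):
    # Option 1: concatenate the two sequence outputs along the feature dimension.
    return np.concatenate([V_s, I_n], axis=-1)

def integrate_by_attention(V_s, I_n, W_q, W_k, W_v):
    # Option 2 (one head shown): attend from the contextual vectors to the entity vectors.
    return scaled_dot_product_attention(V_s @ W_q, I_n @ W_k, I_n @ W_v)

rng = np.random.default_rng(0)
seq_len, d_bert, d_ner, d_head = 8, 768, 44, 64
V_s, I_n = rng.normal(size=(seq_len, d_bert)), rng.normal(size=(seq_len, d_ner))
print(integrate_by_concatenation(V_s, I_n).shape)            # (8, 812)
W_q, W_k, W_v = (rng.normal(size=(d_bert, d_head)),
                 rng.normal(size=(d_ner, d_head)),
                 rng.normal(size=(d_ner, d_head)))
print(integrate_by_attention(V_s, I_n, W_q, W_k, W_v).shape)  # (8, 64)
```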
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$ which makes the dimension to $\left\langle l_s, 2\right\rangle $ where $l_s$ denotes the length of sequence.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows,
where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the ground-truth start and end words, respectively.
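A minimal NumPy sketch of this start/end prediction loss is given below; it assumes $H_f$ has already been produced by the feed-forward network and uses plain negative log-likelihood for a single example, which is what the formulation above reduces to.

```python
import numpy as np

def span_loss(H_f, y_s, y_e):
    """Start/end classification loss for one example.

    H_f: array of shape (l_s, 2); column 0 scores each position as the start word,
    column 1 as the end word. y_s, y_e: gold start and end positions.
    """
    logits = H_f.T                                    # permute to shape (2, l_s)
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)         # softmax over sequence positions
    O_s, O_e = probs[0], probs[1]
    return -(np.log(O_s[y_s]) + np.log(O_e[y_e]))     # negative log-likelihood of the gold span

l_s = 10
H_f = np.random.randn(l_s, 2)
print(span_loss(H_f, y_s=3, y_e=5))
```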
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
The two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained at a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we fine-tune the BERT model with a new prediction layer first to achieve better contextualized representations and to speed up training. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
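The overall flow of this two-stage procedure can be sketched as below; the layer names, epoch counts and the stand-in `train` routine are illustrative placeholders, not the authors' Keras code.

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def train(layers, learning_rate, epochs):
    # Stand-in for the actual optimisation loop (Adam in the paper).
    active = [layer.name for layer in layers if layer.trainable]
    print(f"training {active} at lr={learning_rate} for {epochs} epochs")

# Stage 1: fine-tune BERT with a plain prediction layer to get good contextual representations.
bert_encoder = Layer("bert_encoder")
stage1_model = [bert_encoder, Layer("prediction_layer")]
train(stage1_model, learning_rate=5e-5, epochs=3)

# Stage 2: reuse the fine-tuned BERT weights, attach the named entity layers and retrain everything.
stage2_model = [bert_encoder, Layer("ner_integration"), Layer("prediction_layer")]
train(stage2_model, learning_rate=5e-5, epochs=3)
```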
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
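A character-level approximation of these two metrics is sketched below; the exact tokenisation and averaging used by the authors may differ, so this is only meant to convey how EM and overlap F$_1$ are computed for a single prediction.

```python
def exact_match(prediction, ground_truths):
    # EM: 1 if the prediction matches any ground-truth answer exactly, else 0.
    return float(any(prediction == truth for truth in ground_truths))

def overlap_f1(prediction, ground_truth):
    # Character-level overlap F1 between one prediction and one ground truth.
    pred_chars, true_chars = list(prediction), list(ground_truth)
    common = sum(min(pred_chars.count(c), true_chars.count(c)) for c in set(pred_chars))
    if common == 0:
        return 0.0
    precision = common / len(pred_chars)
    recall = common / len(true_chars)
    return 2 * precision * recall / (precision + recall)

print(exact_match("6.0cm", ["6.0cm"]))                 # 1.0
print(round(overlap_f1("6.0cm", "距上切端6.0cm"), 2))    # 0.71
```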
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
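For reference, the reported optimisation settings can be mirrored in TensorFlow/Keras roughly as follows; this only reproduces the stated hyper-parameters and assumes TensorFlow 2.x rather than the exact library versions used in the paper.

```python
import tensorflow as tf

# Adam with default betas/epsilon, except for the learning rate reported in the paper.
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

BATCH_SIZE = 4             # 3 or 4 in the paper, limited by GPU memory
MAX_SEQUENCE_LENGTH = 512  # illustrative assumption; not stated explicitly in the text
```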
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model achieved a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model outperformed QANet by only 0.13% in F$_1$-score, it significantly outperformed it by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into existing model, we experimentally compare these two integration methods. As named entity recognition has been applied on both pathology report text and query text, there will be two integration here. One is for two named entity information and the other is for contextualized representation and integrated named entity information. For multi-head attention BIBREF30, we set heads number $h = 16$ with 256-dimension hidden vector size for each head.
From Table TABREF27, we can observe that applying concatenation in both periods achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both periods could not reach convergence in our experiments. This is probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information vectors $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score. Applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been transformed through many layers, while the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data are required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This can probably also explain why applying the multi-head attention method in both periods cannot converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size. However, tuning these hyper-parameters may have an impact on the results. Tuning the integration method and utilizing larger datasets may help improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively. Meanwhile, the F$_1$-score for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process only uses the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This again demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task with pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over the model trained on task-specific data only. Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can bring a great benefit to a specific task. Meanwhile, the model performance on the tasks that are not trained in the final stage also improved from around 0 to 60 or 70 percent. This indicates that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best way is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representations of both paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been shown to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | QANet BIBREF39, BERT-Base BIBREF26 |
1959e0ebc21fafdf1dd20c6ea054161ba7446f61 | 1959e0ebc21fafdf1dd20c6ea054161ba7446f61_0 | Q: How is the clinical text structuring task defined?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the resection margin is cut during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because bio-medical systems and bio-medical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous down-stream clinical research tasks.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models can be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the pipeline depth grows, error propagation becomes more and more serious. On the contrary, using fewer components to decrease the pipeline depth leads to poor performance. So the upper limit of this method depends mainly on its worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. Firstly, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are deployed to generate the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is a common step among all tasks. Traditional methods regard both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query UTF8gkai“上切缘距离"(proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. Such a definition unifies the output format of CTS tasks and therefore makes the training data shareable, which reduces the required amount of training data.
Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$, which contains both query and text information, is generated by the BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively.
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ is used to project the concatenated matrix back to the original dimension.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the length of hidden vector.
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$ which makes the dimension to $\left\langle l_s, 2\right\rangle $ where $l_s$ denotes the length of sequence.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows,
where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the ground-truth start and end words, respectively.
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
The two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained at a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we fine-tune the BERT model with a new prediction layer first to achieve better contextualized representations and to speed up training. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model achieved a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model outperformed QANet by only 0.13% in F$_1$-score, it significantly outperformed it by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into existing model, we experimentally compare these two integration methods. As named entity recognition has been applied on both pathology report text and query text, there will be two integration here. One is for two named entity information and the other is for contextualized representation and integrated named entity information. For multi-head attention BIBREF30, we set heads number $h = 16$ with 256-dimension hidden vector size for each head.
From Table TABREF27, we can observe that applying concatenation in both periods achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both periods could not reach convergence in our experiments. This is probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information vectors $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score. Applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been transformed through many layers, while the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data are required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This can probably also explain why applying the multi-head attention method in both periods cannot converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size. However, tuning these hyper-parameters may have an impact on the results. Tuning the integration method and utilizing larger datasets may help improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively. Meanwhile, the F$_1$-score for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process only uses the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This again demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task with pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over the model trained on task-specific data only. Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can bring a great benefit to a specific task. Meanwhile, the model performance on the tasks that are not trained in the final stage also improved from around 0 to 60 or 70 percent. This indicates that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best way is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representations of both paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been shown to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained., Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. |
1959e0ebc21fafdf1dd20c6ea054161ba7446f61 | 1959e0ebc21fafdf1dd20c6ea054161ba7446f61_1 | Q: How is the clinical text structuring task defined?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the resection margin is cut during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because bio-medical systems and bio-medical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous down-stream clinical research tasks.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21 and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the depth of the pipeline grows, error propagation becomes more and more serious. On the contrary, using fewer components to decrease the pipeline depth leads to poor performance, so the upper limit of this kind of method depends mainly on its worst component.
Related Work ::: Pre-trained Language Model
Recently, some works have focused on pre-trained language representation models that capture language information from text and then utilize this information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a shared model for all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning a pre-trained language model. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale biomedical unannotated data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are deployed to generate the final answer. While the final answer varies from task to task, which truly causes the non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离" (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. With this definition, the output format of CTS tasks is unified and the training data therefore becomes shareable, which reduces the required amount of training data.
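To make the span-style formulation concrete, the following minimal Python sketch shows how a QA-CTS instance can be stored as a character span over the paragraph; the dictionary layout is an illustrative assumption, not the authors' released data format.

    # Illustrative sketch of a QA-CTS instance: the value is not generated freely
    # but located as a character span inside the original paragraph.
    paragraph = "远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm"
    query = "上切缘距离"                      # proximal resection margin
    answer = "6.0cm"

    start = paragraph.find(answer)           # character index where the answer begins
    end = start + len(answer)                # exclusive end index
    instance = {"question": query, "text": paragraph, "start": start, "end": end}

    assert paragraph[instance["start"]:instance["end"]] == answer
    print(instance["start"], instance["end"])   # 32 37 for this sentence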
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model and, meanwhile, allow a specific clinical task to use the data of other tasks to supplement its training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequences for the query text ($I_{nq}$) and the paragraph text ($I_{nt}$) with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and the query text $Q$ are organized and passed to the contextualized representation model, which here is the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of the answer-related text. We define this calculation problem as a classification of each word as the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation aims to generate an encoded vector of both of them. Here we use the pre-trained language model BERT-base BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each character in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$ which contains both query and text information is generated through the BERT-base model.
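As an illustration of this input packing, the snippet below reproduces the `[CLS] $Q$ [SEP] $X$ [SEP]' construction with the Hugging Face transformers API; this library choice is a substitution made here for readability, since the paper's implementation is in Keras/TensorFlow with Google's Chinese BERT-base checkpoint.

    # Sketch of the BERT input construction; the library choice is ours, not the paper's.
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
    model = BertModel.from_pretrained("bert-base-chinese")

    query = "上切缘距离"
    paragraph = "远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm"

    # input_ids encode "[CLS] Q [SEP] X [SEP]"; token_type_ids mark which sentence
    # each position belongs to; attention_mask removes the effect of zero padding.
    enc = tokenizer(query, paragraph, return_tensors="pt",
                    padding="max_length", truncation=True, max_length=128)

    with torch.no_grad():
        V_s = model(**enc).last_hidden_state   # (1, 128, 768) contextualized vectors

    print(V_s.shape)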
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance on the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following a tag scheme. In this paper we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is shown in Table TABREF2, where “远端胃切除" is tagged as an operation, `11.5' as a number word and `cm' as a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
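A minimal sketch of how a BIEOS tag sequence can be turned into the one-hot matrices $I_{nt}$ and $I_{nq}$ is given below; the toy tag set is chosen here for brevity, whereas the actual CNER model distinguishes 44 entity types.

    import numpy as np

    # Toy tag inventory; the real tagger uses 44 entity types with BIEOS prefixes.
    TAG_SET = ["O", "B-OP", "I-OP", "E-OP", "S-NUM", "S-UNIT"]
    TAG2ID = {tag: idx for idx, tag in enumerate(TAG_SET)}

    def tags_to_onehot(tags):
        """Convert a per-character tag sequence into a one-hot matrix."""
        onehot = np.zeros((len(tags), len(TAG_SET)), dtype=np.float32)
        for pos, tag in enumerate(tags):
            onehot[pos, TAG2ID[tag]] = 1.0
        return onehot

    # e.g. an operation entity spanning several characters, then a number and a unit
    tags = ["B-OP", "I-OP", "I-OP", "I-OP", "E-OP", "S-NUM", "S-UNIT"]
    I_nt = tags_to_onehot(tags)
    print(I_nt.shape)   # (7, 6)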
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, since they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is simply the concatenation of the two vectors.
While for the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as $MultiHead(Q, K, V) = Concat(head_1, ..., head_h)W_o$ with $head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)$, where $h$ is the number of heads and $W_o$ projects the concatenation back to the original dimension. $Attention$ denotes the traditional scaled dot-product attention, $Attention(Q, K, V) = softmax(QK^T / \sqrt{d_k})V$, where $d_k$ is the dimension of the hidden vector.
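The shape-level sketch below contrasts the two integration options; the tensor widths, the projection of the one-hot tags, and the query/key/value roles are assumptions made for illustration (the paper later sets $h = 16$ with a 256-dimension vector per head, while here the per-head size simply follows from the embedding dimension).

    import torch
    import torch.nn as nn

    seq_len, bert_dim, ner_dim = 128, 768, 88          # ner_dim is an assumed one-hot width
    V_s = torch.randn(1, seq_len, bert_dim)            # contextualized representation from BERT
    I_n = torch.randn(1, seq_len, ner_dim)             # integrated named entity information

    # Option 1: concatenation along the feature dimension.
    H_concat = torch.cat([V_s, I_n], dim=-1)           # (1, seq_len, 768 + 88)

    # Option 2: multi-head attention; the one-hot features are first projected to the
    # BERT width so that both sequences share a common dimension.
    proj = nn.Linear(ner_dim, bert_dim)
    attn = nn.MultiheadAttention(embed_dim=bert_dim, num_heads=16, batch_first=True)
    H_attn, _ = attn(query=V_s, key=proj(I_n), value=proj(I_n))   # (1, seq_len, 768)

    print(H_concat.shape, H_attn.shape)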
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end index of the answer-related text. We define this calculation problem as a classification of each word as the start or end word. A feed forward network (FFN) is used to compress the representation and calculate a score $H_f$ for each word, reducing the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the length of the sequence.
Then we permute the two dimensions for the softmax calculation. The loss function is defined as follows.
where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes the end. $y_s$ and $y_e$ denote the true start and end positions of the answer, respectively.
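For concreteness, the sketch below implements the prediction head with the loss taken as the sum of two cross-entropy terms over the start and end positions; this reading is consistent with the definitions of $O_s$, $O_e$, $y_s$ and $y_e$ above, but the displayed equation is not reproduced here, so the exact form (e.g. averaging) is an assumption.

    import torch
    import torch.nn as nn

    batch, seq_len, hidden = 2, 128, 768 + 88
    H_i = torch.randn(batch, seq_len, hidden)      # integrated representation
    ffn = nn.Linear(hidden, 2)                     # compress each position to two scores

    H_f = ffn(H_i)                                 # (batch, seq_len, 2)
    logits = H_f.permute(0, 2, 1)                  # (batch, 2, seq_len)
    start_logits, end_logits = logits[:, 0, :], logits[:, 1, :]

    y_s = torch.tensor([32, 5])                    # gold start indices (toy values)
    y_e = torch.tensor([36, 9])                    # gold end indices (toy values)

    # CrossEntropyLoss applies the softmax internally, matching O_s/O_e above.
    loss = nn.CrossEntropyLoss()(start_logits, y_s) + nn.CrossEntropyLoss()(end_logits, y_e)
    print(loss.item())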
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
The two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in such a model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained with a low learning rate to fetch fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with a new prediction layer to achieve a better contextualized representation and to speed up training. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
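The following skeleton illustrates this schedule as we read it; `SpanModel` and its layers are stand-ins rather than the authors' code, and the training loops themselves are elided.

    import torch.nn as nn

    class SpanModel(nn.Module):
        """Stand-in model: an encoder, an optional NER projection, and a start/end head."""
        def __init__(self, use_ner):
            super().__init__()
            self.encoder = nn.Linear(768, 768)                    # placeholder for BERT
            self.ner_proj = nn.Linear(88, 768) if use_ner else None
            self.head = nn.Linear(768, 2)                         # prediction layer

        def forward(self, v_s, i_n=None):
            h = self.encoder(v_s)
            if self.ner_proj is not None and i_n is not None:
                h = h + self.ner_proj(i_n)                        # toy integration
            return self.head(h)

    # Stage 1: fine-tune the encoder with only the new prediction layer attached.
    stage1 = SpanModel(use_ner=False)
    # ... train stage1 on the QA-CTS data ...
    encoder_weights = stage1.encoder.state_dict()

    # Stage 2: build the full model, warm-start the encoder, attach the NER layers, retrain.
    stage2 = SpanModel(use_ner=True)
    stage2.encoder.load_state_dict(encoder_weights)
    # ... retrain stage2 end-to-end with a low learning rate ...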
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
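Minimal reference implementations of the two metrics, as we understand them from the description above, are given below; treating each character of a Chinese span as a token for the overlap computation is an assumption made here.

    from collections import Counter

    def em_score(prediction, truths):
        """1.0 if the prediction exactly matches any ground-truth answer, else 0.0."""
        return float(any(prediction == t for t in truths))

    def f1_score(prediction, truth):
        """Token-overlap F1 between a predicted span and a ground-truth span."""
        pred_tokens, true_tokens = list(prediction), list(truth)
        common = Counter(pred_tokens) & Counter(true_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(true_tokens)
        return 2 * precision * recall / (precision + recall)

    print(em_score("6.0cm", ["6.0cm"]))                 # 1.0
    print(round(f1_score("距上切端6.0cm", "6.0cm"), 2))  # partial credit for the overlap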
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38, whose parameters are the same as the default settings except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training a BERT language model, we directly adopt parameters pre-trained by Google on a Chinese general corpus. Named entity recognition is applied to both pathology report texts and query texts.
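Written out for concreteness, the optimizer setting reduces to the following; the snippet uses PyTorch's Adam as a stand-in for the Keras optimizer described in the text, whose defaults are close but not necessarily identical.

    import torch

    params = [torch.nn.Parameter(torch.randn(10, 10))]   # placeholder model parameters
    optimizer = torch.optim.Adam(params, lr=5e-5)         # PyTorch defaults: betas=(0.9, 0.999), eps=1e-8
    batch_size = 3                                        # 3 or 4, limited by GPU memory
    print(optimizer.defaults["lr"], batch_size)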
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e., QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to the lack of computational resources, we can only compare with the BERT-Base model instead of BERT-Large. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information on the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model did not outperform QANet by much in F$_1$-score (only 0.13%), it significantly outperformed QANet by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and the two-stage training mechanism on our model, we apply an ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. The experimental results show that both named entity information and the two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we experimentally compare them. As named entity recognition is applied to both the pathology report text and the query text, there are two integration steps: one for the two named entity information sequences, and the other for the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation in both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both step one and step two could not reach convergence in our experiments, probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been transformed by many layers, but the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This can probably also explain why applying the multi-head attention method in both steps cannot converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may have an impact on the result. Tuning the integration method and utilizing larger datasets may help to improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model can be beneficial, we split our dataset by query type, train our proposed model with different datasets and report its performance on the different datasets. First, we investigate the performance of the model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the single-task models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. Improvements of 0.69% and 0.37% in EM-score were brought by the shared model for proximal and distal resection margin prediction, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process only uses the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best results in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, while the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. The other results also usually improved considerably. This demonstrates the usefulness of two-stage training and named entity information as well.
Lastly, we fine-tune the model for each task with the pre-trained parameters; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance compared with the model trained on task-specific data. Except for tumor size, the result was improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can bring a great benefit to a specific task. Meanwhile, the model performance on the other tasks which are not trained in the final stage also improved from around 0 to 60 or 70 percent. This indicates that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, pre-training the model on multiple datasets and then fine-tuning it on the specific dataset is the best strategy.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. Initially, sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representations of both paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are integrated together and fed into a feed forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been proved useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text. |
77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8 | 77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8_0 | Q: What are the specific tasks being unified?
| three types of questions, namely tumor size, proximal resection margin and distal resection margin |
77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8 | 77cf4379106463b6ebcb5eb8fa5bb25450fa5fb8_1 | Q: What are the specific tasks being unified?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are applied to generate the final answer. While the final answer varies from task to task, which is the real cause of the non-uniform output formats, finding the answer-related text is common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = \langle x_i, x_{i+1}, x_{i+2}, \ldots, x_j \rangle$ ($1 \le i < j \le n$) from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm” (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离” (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, in order to reduce the amount of training data required.
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model, and will meanwhile allow a specific clinical task to use the data of other tasks to supplement its training data.
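To make this unified span-style output concrete, the following is a minimal Python sketch of how a QA-CTS instance could be represented and how the answer string is recovered from character indices; the class and field names are our own illustration, not part of the original formulation.

    from dataclasses import dataclass

    @dataclass
    class QACTSInstance:
        """One QA-CTS example: a paragraph, a query term, and a character span."""
        paragraph: str   # clinical free text X
        query: str       # query term Q, e.g. "proximal resection margin"
        start: int       # index i of the first answer character in X
        end: int         # index j of the last answer character in X (inclusive)

        def answer_text(self) -> str:
            # The unified output is simply the substring X[i..j].
            return self.paragraph[self.start:self.end + 1]

    # The span is all that task-specific post-processing needs to start from.
    example = QACTSInstance(paragraph="... 6.0cm from the proximal resection margin ...",
                            query="proximal resection margin", start=4, end=8)
    print(example.answer_text())  # -> "6.0cm"

Because every task shares this representation, instances with different query types can be pooled into a single training set.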
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, the goal of contextualized representation is to generate encoded vectors for both of them. Here we use the pre-trained BERT-base language model BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentences, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$ that contains both query and text information is generated through the BERT-base model.
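As an illustration, the following sketch assembles such an input at the character level; the special tokens follow BERT's convention, while the fixed length and padding scheme are simplifying assumptions, and the mapping from tokens to vocabulary ids and embeddings is omitted.

    def build_bert_input(query, text, max_len=128):
        """Assemble tokens, sentence-type ids and attention mask for '[CLS] Q [SEP] X [SEP]'."""
        tokens = ["[CLS]"] + list(query) + ["[SEP]"] + list(text) + ["[SEP]"]  # character-level for Chinese
        type_ids = [0] * (len(query) + 2) + [1] * (len(text) + 1)  # 0 = query segment, 1 = paragraph segment
        mask = [1] * len(tokens)                                   # 1 = real token, 0 = zero padding
        tokens, type_ids, mask = tokens[:max_len], type_ids[:max_len], mask[:max_len]
        pad = max_len - len(tokens)
        return tokens + ["[PAD]"] * pad, type_ids + [0] * pad, mask + [0] * pad

    tokens, type_ids, mask = build_bert_input("上切缘距离", "远端胃切除标本:小弯长11.5cm……")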
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance on the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following a tag scheme. In this paper we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where “远端胃切除” is tagged as an operation, `11.5' is a number word and `cm' is a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the model dimension.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the dimension of the hidden vectors.
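For reference, the sketch below spells out the standard scaled dot-product attention and a multi-head combination in NumPy; the projection matrices are random placeholders rather than learned parameters, so it only illustrates the shape of the computation behind the transformation-based integration.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
        return softmax(scores) @ V

    def multi_head(x, y, h=16, d_head=256):
        """Attend from sequence x to sequence y with h heads, then project back with W_o."""
        heads = []
        for _ in range(h):
            Wq = np.random.randn(x.shape[-1], d_head)
            Wk = np.random.randn(y.shape[-1], d_head)
            Wv = np.random.randn(y.shape[-1], d_head)
            heads.append(attention(x @ Wq, y @ Wk, y @ Wv))
        W_o = np.random.randn(h * d_head, x.shape[-1])   # projects the concatenated heads back
        return np.concatenate(heads, axis=-1) @ W_o

Here x and y are assumed to be arrays of shape (batch, length, dim).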
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. We define this calculation as classifying, for each word, whether it is the start or end word. We use a feed-forward network (FFN) to compress the representation and compute a score matrix $H_f$ for each word, which reduces the dimension to $\left\langle l_s, 2\right\rangle $, where $l_s$ denotes the sequence length.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows.
where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the true start and end positions of the answer, respectively.
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained with a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with a new prediction layer to achieve better contextualized representation performance and to speed up the training process. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
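A schematic Keras-style sketch of this schedule is given below; build_model, the layer name "bert", the placeholder loss and the epoch counts are illustrative assumptions standing in for the actual model-construction code.

    import tensorflow as tf

    def two_stage_train(build_model, train_data, stage1_lr=5e-5, stage2_lr=1e-5):
        """Stage 1: fine-tune BERT plus the new prediction layer; stage 2: attach entity layers and retrain."""
        stage1 = build_model(with_entity_layers=False)          # BERT + prediction layer only
        stage1.compile(optimizer=tf.keras.optimizers.Adam(stage1_lr),
                       loss="sparse_categorical_crossentropy")  # placeholder loss
        stage1.fit(train_data, epochs=2)                        # learn a good contextualized representation

        stage2 = build_model(with_entity_layers=True)           # same backbone + named entity layers
        stage2.get_layer("bert").set_weights(stage1.get_layer("bert").get_weights())
        stage2.compile(optimizer=tf.keras.optimizers.Adam(stage2_lr),
                       loss="sparse_categorical_crossentropy")
        stage2.fit(train_data, epochs=2)                        # retrain the whole model
        return stage2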
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
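For clarity, a small character-level sketch of the two metrics is shown below, following the usual SQuAD-style definitions that the description above matches.

    def exact_match(prediction, ground_truths):
        """EM: 1 if the prediction matches any ground-truth answer exactly, else 0."""
        return float(any(prediction == gt for gt in ground_truths))

    def overlap_f1(prediction, ground_truth):
        """Character-overlap F1 between a prediction and a single ground-truth answer."""
        pred, gold = list(prediction), list(ground_truth)
        common = sum(min(pred.count(ch), gold.count(ch)) for ch in set(pred))
        if common == 0:
            return 0.0
        precision, recall = common / len(pred), common / len(gold)
        return 2 * precision * recall / (precision + recall)

    print(exact_match("6.0cm", ["6.0cm"]), overlap_f1("距上切端6.0cm", "6.0cm"))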
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with a state-of-the-art question answering model (i.e. QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to the lack of computational resources, we can only compare with the BERT-Base model instead of BERT-Large. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is set to pure concatenation (the named entity information of the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model outperformed QANet only slightly in F$_1$-score (by 0.13%), it significantly outperformed QANet by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we experimentally compare these two integration methods. As named entity recognition has been applied to both the pathology report text and the query text, there are two integration points: one for the two named entity information sequences, and the other for the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimensional hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation at both integration points achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both integration points could not reach convergence in our experiments. This is probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been refined through many layers, but the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This may also explain why applying multi-head attention at both integration points does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size. However, tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may help improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively. Meanwhile, the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process uses only the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This proves the usefulness of two-stage training and named entity information as well.
Lastly, we fine-tune the model for each task with pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over models trained only on task-specific data. Except for tumor size, the result was improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This proves that mixed-data pre-trained parameters can greatly benefit a specific task. Meanwhile, the model performance on other tasks that were not trained in the final stage was also improved from around 0 to 60 or 70 percent. This proves that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance for a specific dataset, pre-training the model on multiple datasets and then fine-tuning it on the specific dataset is the best strategy.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both the paragraph and query texts are integrated together. The contextualized representations of both the paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been proved useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | Unanswerable |
06095a4dee77e9a570837b35fc38e77228664f91 | 06095a4dee77e9a570837b35fc38e77228664f91_0 | Q: Is all text in this dataset a question, or are there unrelated sentences in between questions?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the cut is made during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because biomedical systems and biomedical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical research studies.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. the result of a laboratory test) and values inferred from part of the original text (e.g. calculated tumor size). Researchers have to construct a different model for each task, which is already costly, and each model calls for a large amount of labeled data. Moreover, labeling the amount of data necessary to train a neural network incurs expensive labor costs. To handle this, researchers turn to rule-based structuring methods, which often have lower labor costs.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text in the original paragraph. In some cases, this is indeed already the final answer (e.g., extracting a sub-string), while in other cases several further steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes their datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a great improvement on the QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, whose design is more of an art than a science and incurs extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e., normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, adding new rules to an existing system quickly becomes unmanageable.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model a specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it generalizes better across tasks. However, as the pipeline depth grows, error propagation becomes more and more serious. Conversely, using fewer components to decrease the pipeline depth leads to poor performance. The upper limit of this approach therefore depends mainly on its worst component.
Related Work ::: Pre-trained Language Model
Recently, some works have focused on pre-trained language representation models that capture language information from text and then utilize this information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a model shared by all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning a pre-trained language model. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a paragraph text represented as a character sequence $X = \langle x_1, x_2, \ldots, x_n \rangle$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result of the query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are applied to generate the final answer. While the final answer varies from task to task, which is the real cause of the non-uniform output formats, finding the answer-related text is common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = \langle x_i, x_{i+1}, x_{i+2}, \ldots, x_j \rangle$ ($1 \le i < j \le n$) from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm” (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离” (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, in order to reduce the amount of training data required.
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model, and will meanwhile allow a specific clinical task to use the data of other tasks to supplement its training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, the goal of contextualized representation is to generate encoded vectors for both of them. Here we use the pre-trained BERT-base language model BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentences, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$ that contains both query and text information is generated through the BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance on the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following a tag scheme. In this paper we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where “远端胃切除” is tagged as an operation, `11.5' is a number word and `cm' is a unit word. The named entity tag sequence is organized in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
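As an illustration, the sketch below converts a BIEOS tag sequence into the one-hot matrix used as $I_{nt}$ or $I_{nq}$; the concrete tag names (B-OP, B-NUM, B-UNIT, ...) are hypothetical stand-ins for the 44 entity types.

    import numpy as np

    def bieos_one_hot(tags, tag_vocab):
        """Turn a BIEOS tag sequence (one tag per character) into a one-hot matrix."""
        index = {tag: i for i, tag in enumerate(tag_vocab)}
        one_hot = np.zeros((len(tags), len(tag_vocab)), dtype=np.float32)
        for pos, tag in enumerate(tags):
            one_hot[pos, index[tag]] = 1.0
        return one_hot

    # Hypothetical tags for "远端胃切除" (operation), "11.5" (number) and "cm" (unit word).
    tags = ["B-OP", "I-OP", "I-OP", "I-OP", "E-OP",
            "B-NUM", "I-NUM", "I-NUM", "E-NUM", "B-UNIT", "E-UNIT"]
    vocab = sorted(set(tags)) + ["O"]
    I_nt = bieos_one_hot(tags, vocab)   # shape: (sequence length, number of tag types)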
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the model dimension.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the dimension of the hidden vectors.
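For the simpler concatenation option described above, a minimal sketch looks like this; the shapes are toy values, and reading $I_n = [I_{nt}; I_{nq}]$ as feature-wise concatenation (and ignoring padding and alignment details) is our assumption.

    import numpy as np

    def integrate_by_concatenation(V_s, I_nt, I_nq):
        """Concatenate the contextualized representation with the named-entity one-hot sequences."""
        I_n = np.concatenate([I_nt, I_nq], axis=-1)      # combine the two entity tag sequences
        return np.concatenate([V_s, I_n], axis=-1)       # integrated representation [V_s ; I_n]

    # Toy shapes: sequence length 128, BERT hidden size 768, 45 tag types per sequence.
    V_s, I_nt, I_nq = np.zeros((128, 768)), np.zeros((128, 45)), np.zeros((128, 45))
    H_i = integrate_by_concatenation(V_s, I_nt, I_nq)    # -> shape (128, 858)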
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. We define this calculation as classifying, for each word, whether it is the start or end word. We use a feed-forward network (FFN) to compress the representation and compute a score matrix $H_f$ for each word, which reduces the dimension to $\left\langle l_s, 2\right\rangle $, where $l_s$ denotes the sequence length.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows.
where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the true start and end positions of the answer, respectively.
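The prediction head can be sketched as follows in NumPy; the weight matrix is a placeholder, and averaging the start and end negative log-likelihoods is our assumption about the exact loss normalization.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def start_end_probs(H_i, W, b):
        """FFN scoring: compress H_i of shape (l_s, d) to H_f of shape (l_s, 2), then permute and softmax."""
        H_f = H_i @ W + b            # one start score and one end score per word
        O = softmax(H_f.T, axis=-1)  # permute -> (2, l_s); row 0 is O_s, row 1 is O_e
        return O[0], O[1]

    def span_loss(O_s, O_e, y_s, y_e):
        """Negative log-likelihood of the true start index y_s and end index y_e."""
        return -(np.log(O_s[y_s]) + np.log(O_e[y_e])) / 2.0

    H_i = np.random.randn(128, 858)                      # toy integrated representation
    W, b = np.random.randn(858, 2), np.zeros(2)
    O_s, O_e = start_end_probs(H_i, W, b)
    print(span_loss(O_s, O_e, y_s=30, y_e=35))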
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism has previously been applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen. Then the other is unfrozen and the entire model is trained with a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with a new prediction layer to achieve better contextualized representation performance and to speed up the training process. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
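A minimal sketch of the optimizer configuration described above is shown here (Keras API); the commented-out calls are placeholders, since the model definition and the number of epochs are not part of this description.

    import tensorflow as tf

    # Adam with default hyper-parameters except for the learning rate.
    optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
    batch_size = 4   # 3 or 4 in practice, limited by GPU memory
    # model.compile(optimizer=optimizer, loss=...)
    # model.fit(train_inputs, train_labels, batch_size=batch_size)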
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with a state-of-the-art question answering model (i.e. QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to the lack of computational resources, we can only compare with the BERT-Base model instead of BERT-Large. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is set to pure concatenation (the named entity information of the pathology report text and the query text is concatenated first, and then the contextualized representation and the concatenated named entity information are concatenated). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model outperformed QANet only slightly in F$_1$-score (by 0.13%), it significantly outperformed QANet by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we experimentally compare these two integration methods. As named entity recognition has been applied to both the pathology report text and the query text, there are two integration points: one for the two named entity information sequences, and the other for the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimensional hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation at both integration points achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both integration points could not reach convergence in our experiments. This is probably because it makes the model too complex to train. The difference between the other two methods is the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved a better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size. BERT's output has been refined through many layers, but the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters. However, our dataset is significantly smaller than what pre-trained BERT uses. This may also explain why applying multi-head attention at both integration points does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size. However, tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may help improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both were still above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively. Meanwhile, the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process uses only the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score. Meanwhile, the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This proves the usefulness of two-stage training and named entity information as well.
Lastly, we fine-tune the model for each task with pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over models trained only on task-specific data. Except for tumor size, the result was improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This proves that mixed-data pre-trained parameters can greatly benefit a specific task. Meanwhile, the model performance on other tasks that were not trained in the final stage was also improved from around 0 to 60 or 70 percent. This proves that there is commonality between different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance for a specific dataset, pre-training the model on multiple datasets and then fine-tuning it on the specific dataset is the best strategy.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both the paragraph and query texts are integrated together. The contextualized representations of both the paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been proved useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences |
19c9cfbc4f29104200393e848b7b9be41913a7ac | 19c9cfbc4f29104200393e848b7b9be41913a7ac_0 | Q: How many questions are in the dataset?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the cut is made during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because biomedical systems and biomedical research rely heavily on structured data but cannot obtain it directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical research studies.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. the result of a laboratory test) and values inferred from part of the original text (e.g. calculated tumor size). Researchers have to construct a different model for each task, which is already costly, and each model calls for a large amount of labeled data. Moreover, labeling the amount of data necessary to train a neural network incurs expensive labor costs. To handle this, researchers turn to rule-based structuring methods, which often have lower labor costs.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text in the original paragraph. In some cases, this is indeed already the final answer (e.g., extracting a sub-string), while in other cases several further steps are needed to obtain the final answer, such as entity name conversion and negative word recognition. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes their datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a great improvement on the QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, whose design is more of an art than a science and incurs extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e., normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, adding new rules to an existing system quickly becomes unmanageable.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model a specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it generalizes better across tasks. However, as the pipeline depth grows, error propagation becomes more and more serious. Conversely, using fewer components to decrease the pipeline depth leads to poor performance. The upper limit of this approach therefore depends mainly on its worst component.
Related Work ::: Pre-trained Language Model
Recently, some works have focused on pre-trained language representation models that capture language information from text and then utilize this information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a model shared by all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning a pre-trained language model. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a paragraph text represented as a character sequence $X = \langle x_1, x_2, \ldots, x_n \rangle$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result of the query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are applied to generate the final answer. While the final answer varies from task to task, which is the real cause of the non-uniform output formats, finding the answer-related text is common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = \langle x_i, x_{i+1}, x_{i+2}, \ldots, x_j \rangle$ ($1 \le i < j \le n$) from the paragraph text $X$. For example, given the sentence “远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm” (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query “上切缘距离” (proximal resection margin), the answer should be 6.0cm, which is located in the original text from index 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, in order to reduce the amount of training data required.
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we suppose that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model, and will meanwhile allow a specific clinical task to use the data of other tasks to supplement its training data.
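To illustrate how span-style supervision relates to the raw report, here is a small sketch that locates an answer string in a paragraph with a naive first-match heuristic; in the actual dataset the gold spans are annotated by clinicians rather than derived automatically.

    def locate_answer(paragraph: str, answer: str):
        """Return the (start, end) character indices of the first occurrence of `answer`."""
        start = paragraph.find(answer)
        if start == -1:
            return None                         # the answer does not appear verbatim
        return start, start + len(answer) - 1   # inclusive end index

    text = "远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm"
    print(locate_answer(text, "6.0cm"))         # character offsets of the answer span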
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, the goal of contextualized representation is to generate encoded vectors for both of them. Here we use the pre-trained BERT-base language model BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentences, each word in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$ that contains both query and text information is generated through the BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance in the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following the tag scheme. In this paper, we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus that has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is a unit word. The named entity tag sequence is encoded as one-hot vectors. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
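The one-hot encoding of a BIEOS tag sequence can be sketched as follows; the tag inventory shown is a small hypothetical subset of the 44 entity types, used only to illustrate how $I_{nt}$ and $I_{nq}$ are built.

```python
import numpy as np

# Hypothetical subset of the 44 entity types used by the CNER model.
tag_set = ["O", "B-operation", "I-operation", "E-operation", "S-number", "S-unit"]
tag2id = {tag: i for i, tag in enumerate(tag_set)}

def bieos_to_one_hot(tags):
    """Turn a BIEOS tag sequence into a (sequence_length, n_tags) one-hot matrix."""
    one_hot = np.zeros((len(tags), len(tag_set)), dtype=np.float32)
    for position, tag in enumerate(tags):
        one_hot[position, tag2id[tag]] = 1.0
    return one_hot

# Tags for "远端胃切除" (an operation) followed by "11.5" (a number) and "cm" (a unit word).
I_nt = bieos_to_one_hot(["B-operation", "I-operation", "I-operation",
                         "I-operation", "E-operation", "S-number", "S-unit"])
print(I_nt.shape)   # (7, 6)
```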
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, because they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension.
$Attention$ denotes the traditional scaled dot-product attention, $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(QK^{\top}/\sqrt{d_k}\right)V$, where $d_k$ is the dimension of the hidden vectors.
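The two integration options can be sketched in PyTorch as below; this is an assumption for illustration (the paper's implementation is in Keras), and the tensor sizes and the projection of the one-hot features to the model width are illustrative choices rather than the configuration reported in the experiments.

```python
import torch
import torch.nn as nn

batch, seq_len, d_bert, d_ner, heads = 1, 128, 768, 44, 16
V_s = torch.randn(batch, seq_len, d_bert)   # contextualized representation from BERT
I_n = torch.randn(batch, seq_len, d_ner)    # concatenated one-hot CNER tags [I_nt; I_nq]

# Option 1: concatenation along the feature dimension.
H_concat = torch.cat([V_s, I_n], dim=-1)    # (1, 128, 768 + 44)

# Option 2: multi-head attention; the NER features are first projected to the model
# width so that queries, keys and values share a dimension (an illustrative choice).
project = nn.Linear(d_ner, d_bert)
mha = nn.MultiheadAttention(embed_dim=d_bert, num_heads=heads, batch_first=True)
H_attn, _ = mha(query=V_s, key=project(I_n), value=project(I_n))   # (1, 128, 768)

print(H_concat.shape, H_attn.shape)
```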
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. As above, we define this calculation as a classification of each word as the start or end word. We use a feed-forward network (FFN) to compress $H_i$ and calculate the score of each word, $H_f$, which reduces the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the sequence length.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows.
where $O_s = \mathrm{softmax}(\mathrm{permute}(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = \mathrm{softmax}(\mathrm{permute}(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the true start-word and end-word labels, respectively.
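A minimal sketch of the prediction layer and loss is given below, assuming the start and end positions are trained with independent cross-entropy terms (the description implies this but does not state the exact combination); `cross_entropy` applies the softmax internally, matching $O_s$ and $O_e$ above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanPredictionHead(nn.Module):
    """Scores every token as a candidate start or end of the answer span."""
    def __init__(self, hidden_size):
        super().__init__()
        self.ffn = nn.Linear(hidden_size, 2)          # H_i -> H_f with last dimension 2

    def forward(self, H_i):                           # H_i: (batch, l_s, hidden)
        H_f = self.ffn(H_i)                           # (batch, l_s, 2)
        logits = H_f.permute(0, 2, 1)                 # (batch, 2, l_s)
        return logits[:, 0, :], logits[:, 1, :]       # start logits, end logits

def span_loss(start_logits, end_logits, y_s, y_e):
    # Sum of cross-entropy over the start and end position distributions.
    return F.cross_entropy(start_logits, y_s) + F.cross_entropy(end_logits, y_e)

head = SpanPredictionHead(hidden_size=812)            # e.g. 768 + 44 after concatenation
H_i = torch.randn(2, 128, 812)
y_s, y_e = torch.tensor([32, 5]), torch.tensor([37, 9])
print(span_loss(*head(H_i), y_s, y_e).item())
```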
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen; then the other is unfrozen and the entire model is trained with a low learning rate to fetch fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with the new prediction layer to achieve a better contextualized representation and to speed up the training process. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
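The mechanism can be sketched with the runnable setup below, which uses small stand-in modules in place of BERT and the real integration layers (an assumption for brevity) and omits the actual training loops.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins: in the real model the encoder is BERT and the extra
# module is the named-entity integration layer described above.
encoder = nn.Linear(16, 16)         # plays the role of the pre-trained BERT encoder
span_head = nn.Linear(16, 2)        # new start/end prediction layer
ner_layers = nn.Linear(16 + 4, 16)  # named-entity layers attached only in stage two

# Stage 1: fine-tune the encoder together with the new prediction layer only.
stage1_params = list(encoder.parameters()) + list(span_head.parameters())
stage1_optimizer = torch.optim.Adam(stage1_params, lr=5e-5)
# ... usual training loop over the QA-CTS data goes here ...
fine_tuned_encoder = {k: v.clone() for k, v in encoder.state_dict().items()}

# Stage 2: rebuild the full model, reload the fine-tuned encoder weights,
# attach the named-entity layers and retrain everything end to end.
encoder.load_state_dict(fine_tuned_encoder)
stage2_params = stage1_params + list(ner_layers.parameters())
stage2_optimizer = torch.optim.Adam(stage2_params, lr=5e-5)
# ... second training loop over the QA-CTS data goes here ...
```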
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in the tables are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
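A character-level reading of these metrics can be sketched as below; comparing overlap at the character level is an assumption made here because the answers are short Chinese/numeric strings, and the paper does not spell out the tokenization used for F$_1$.

```python
from collections import Counter

def em_score(prediction, ground_truths):
    """1.0 if the prediction exactly matches any reference answer, else 0.0."""
    return float(any(prediction == gt for gt in ground_truths))

def f1_score(prediction, ground_truth):
    """Harmonic mean of character-level precision and recall of the overlap."""
    common = Counter(prediction) & Counter(ground_truth)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(prediction)
    recall = overlap / len(ground_truth)
    return 2 * precision * recall / (precision + recall)

print(em_score("6.0cm", ["6.0cm"]))            # 1.0
print(round(f1_score("约6.0cm", "6.0cm"), 3))  # partial overlap scores below 1.0
```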
Experimental Studies ::: Experimental Settings
To implement the deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38 using its default settings except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited graphical memory. We select BERT-base as the pre-trained language model in this paper. Because of the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a Chinese general corpus. Named entity recognition is applied to both the pathology report texts and the query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance in question answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e., QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we can only compare with the BERT-Base model. A prediction layer is attached to the end of the original BERT-Base model, and we fine-tune it on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information of the pathology report text and of the query text is concatenated first, and the contextualized representation is then concatenated with this combined named entity information). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model brought a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model only slightly outperformed QANet in F$_1$-score (by 0.13%), it significantly outperformed it by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of the named entity information and the two-stage training mechanism on our model, we apply an ablation analysis to see the improvement brought by each of them, where $\times$ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also to a slight deterioration of 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. The experimental results show that both the named entity information and the two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. Since named entity recognition is applied to both the pathology report text and the query text, there are two integration points: one between the two named entity information sequences, and the other between the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation at both points achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both points did not converge in our experiments, probably because it makes the model too complex to train. The remaining two methods differ in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has been transformed by many layers, whereas the named entity representation is very close to the input. With the large number of parameters in multi-head attention, massive training data are required to find the optimal parameters, yet our dataset is significantly smaller than what pre-trained BERT uses. This probably also explains why applying multi-head attention at both points does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to limited computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may help to further improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model can be beneficial, we split our dataset by query type, train our proposed model on the different datasets and report its performance on each of them. Firstly, we investigate the performance of the model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the single-task models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both remained above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process only uses the task-specific dataset, not the mixed data. From Table TABREF31 we can observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, while the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. The other results also generally improved considerably. This again demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from the parameters pre-trained on the mixed data. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over models trained on task-specific data alone. Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters bring a great benefit to a specific task. Meanwhile, the model performance on the tasks that are not trained in the final stage also improved from around 0 to 60 or 70 percent. This shows that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best strategy is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes their different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both the paragraph and query texts are integrated together, while contextualized representations of both texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models in all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also been shown to be useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance for a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | 2,714 |
6743c1dd7764fc652cfe2ea29097ea09b5544bc3 | 6743c1dd7764fc652cfe2ea29097ea09b5544bc3_0 | Q: What are the tasks evaluated? | Unanswerable
14323046220b2aea8f15fba86819cbccc389ed8b | 14323046220b2aea8f15fba86819cbccc389ed8b_0 | Q: Are there privacy concerns with clinical data?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the resection margin is during surgery, or what a specific laboratory test result is. It is important to extract structured data from clinical text because bio-medical systems and bio-medical research rely heavily on structured data, which they cannot obtain directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for numerous downstream clinical research studies.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. the result of a laboratory test) and values inferred from part of the original text (e.g. a calculated tumor size). Researchers have to construct a different model for each of them, which is already costly, and each model then calls for a lot of labeled data. Moreover, labeling the amount of data necessary for training a neural network incurs expensive labor costs. To handle this, researchers turn to rule-based structuring methods, which often have lower labor costs.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training datasets. Pipeline methods break down the entire process into several pieces, which improves performance and generality. However, as the pipeline depth grows, error propagation has a greater impact on performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases this is indeed already the final answer (e.g., extracting a sub-string), while in other cases several further steps, such as entity name conversion and negative word recognition, are needed to obtain the final answer. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we show that the two-stage training mechanism brings a great improvement on the QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations of the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is an application-level problem that is highly related to practical use. Most existing studies are case-by-case; few of them are developed for the general-purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, whose design is more of an art than a science and incurs extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e., normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, supplementing an existing system with new rules turns into a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models can be applied to another task due to output format differences. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, relying mainly on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, a sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21 and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the pipeline depth grows, error propagation becomes more and more serious. Conversely, using fewer components to decrease the pipeline depth leads to poor performance, so the upper limit of this method depends mainly on the worst component.
Related Work ::: Pre-trained Language Model
Recently, some works have focused on pre-trained language representation models that capture language information from text and then utilize this information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27, which makes the language model a model shared across all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language models. Peters et al. BIBREF25 proposed ELMo, which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag, which causes the pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing pre-trained language models is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale unannotated biomedical data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both of them demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result of the query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. Firstly, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are deployed to generate the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is an action common to all tasks. Traditional methods treat the two steps as a whole. In this paper, we focus on finding the answer-related substring $X_s = <x_i, x_{i+1}, ..., x_j>$ $(1 \le i < j \le n)$ from the paragraph text $X$. For example, given the sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query UTF8gkai“上切缘距离" (proximal resection margin), the answer should be 6.0cm, which is located in the original text at character indices 32 to 37. This definition unifies the output format of CTS tasks and therefore makes the training data shareable, reducing the amount of training data required.
Since BERT BIBREF26 has already demonstrated the usefulness of shared models, we expect that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated model and, at the same time, allow the data of other tasks to supplement the training data of a specific clinical task.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS) task. As shown in Fig. FIGREF8, the paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequences for the query text ($I_{nq}$) and the paragraph text ($I_{nt}$) with the BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated into $I_n$. Meanwhile, the paragraph text $X$ and the query text $Q$ are organized and passed to the contextualized representation model, here the pre-trained language model BERT BIBREF26, to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated and fed into a feed-forward network to calculate the start and end indices of the answer-related text. We define this calculation as a classification of each word as the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, the contextualized representation step generates an encoded vector of both of them. Here we use the pre-trained language model BERT-base BIBREF26 to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each character in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, which is a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. Then a hidden vector $V_s$ that contains both query and text information is generated by the BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on a general corpus, its performance in the biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags: each character of the original sentence is tagged with a label following the tag scheme. In this paper, we recognize the entities with the model of our previous work BIBREF12, but trained on another corpus that has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is a unit word. The named entity tag sequence is encoded as one-hot vectors. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, because they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension.
$Attention$ denotes the traditional scaled dot-product attention, $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(QK^{\top}/\sqrt{d_k}\right)V$, where $d_k$ is the dimension of the hidden vectors.
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end indices of the answer-related text. As above, we define this calculation as a classification of each word as the start or end word. We use a feed-forward network (FFN) to compress $H_i$ and calculate the score of each word, $H_f$, which reduces the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the sequence length.
Then we permute the two dimensions for the softmax calculation. The loss function can be defined as follows.
where $O_s = \mathrm{softmax}(\mathrm{permute}(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = \mathrm{softmax}(\mathrm{permute}(H_f)_1)$ denotes that of the end word. $y_s$ and $y_e$ denote the true start-word and end-word labels, respectively.
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
A two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained first for coarse-grained features while the parameters of the other are frozen; then the other is unfrozen and the entire model is trained with a low learning rate to fetch fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune the BERT model with the new prediction layer to achieve a better contextualized representation and to speed up the training process. Then we deploy the proposed model, load the fine-tuned BERT weights, attach the named entity information layers and retrain the model.
Experimental Studies
In this section, we experimentally evaluate our proposed task and approach. The best results in each table are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians and cover three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance contains one or several sentences. Detailed statistics of the different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground truth answer.
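For clarity, a small sketch of the two metrics follows; using character-level overlap for the F$_1$-score is our assumption for Chinese text, since the tokenization is not spelled out above.

```python
import collections

def em_score(prediction, ground_truths):
    """Exact match: 1.0 if the prediction equals any ground-truth answer exactly."""
    return float(any(prediction == gt for gt in ground_truths))

def f1_score(prediction, ground_truth):
    """Character-level overlap F1 between a prediction and one ground-truth answer."""
    common = collections.Counter(prediction) & collections.Counter(ground_truth)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(prediction)
    recall = overlap / len(ground_truth)
    return 2 * precision * recall / (precision + recall)

print(em_score("6.0cm", ["6.0cm"]))                   # 1.0
print(round(f1_score("距上切端6.0cm", "6.0cm"), 3))    # partial overlap, below 1.0
```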
Experimental Studies ::: Experimental Settings
To implement the deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38 using the default settings except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited GPU memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training a BERT language model, we directly adopt the parameters pre-trained by Google on a general-domain Chinese corpus. Named entity recognition is applied to both the pathology report texts and the query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with the state-of-the-art question answering model QANet BIBREF39 and with BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to the lack of computational resources, we can only compare with the BERT-Base model. A prediction layer is attached at the end of the original BERT-Base model and we fine-tune it on our dataset. In this section, the named entity integration method is set to pure concatenation (the named entity information of the pathology report text and the query text is concatenated first, and then the contextualized representation is concatenated with the concatenated named entity information). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model led to a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. Although our model outperformed QANet by only 0.13% in F$_1$-score, it significantly outperformed it by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and the two-stage training mechanism on our model, we perform an ablation analysis to see the improvement brought by each of them, where $\times$ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. The experimental results show that both named entity information and the two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. As named entity recognition is applied to both the pathology report text and the query text, there are two integration points: one for the two named entity information sequences, and one for the contextualized representation and the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation at both points achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention at both point one and point two could not reach convergence in our experiments, probably because it makes the model too complex to train. The other two methods differ in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, whereas applying concatenation first only achieved 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has passed through many layers, whereas the named entity information representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters; however, our dataset is significantly smaller than what pre-trained BERT uses. This probably also explains why applying the multi-head attention method at both points cannot converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to the lack of computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may also help improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model are beneficial, we split our dataset by query type, train our proposed model on the different subsets, and evaluate its performance on each of them. First, we investigate the performance of the model without two-stage training and named entity information, as sketched below.
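The evaluation protocol can be summarized as follows; `train_model` and `evaluate` are hypothetical callables standing in for the training and scoring code, and the query-type field name is assumed.

```python
# Train on each per-query-type subset and on the mixed data,
# then evaluate every trained model on every subset.
QUERY_TYPES = ["tumor size", "proximal resection margin", "distal resection margin"]

def split_by_query_type(dataset):
    """Group examples by their query type; the 'query_type' key is an assumed field name."""
    return {qt: [ex for ex in dataset if ex["query_type"] == qt] for qt in QUERY_TYPES}

def cross_evaluate(dataset, train_model, evaluate):
    subsets = split_by_query_type(dataset)
    training_sets = dict(subsets, mixed=dataset)
    results = {}
    for name, train_split in training_sets.items():
        model = train_model(train_split)
        results[name] = {qt: evaluate(model, subsets[qt]) for qt in QUERY_TYPES}
    return results   # rows: training data used; columns: evaluation subset
```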
As indicated in Table TABREF30, the model trained on the mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both remained above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
We then investigate the performance of the model with two-stage training and named entity information. In this experiment, the first-stage pre-training uses only the task-specific dataset, not the mixed data. From Table TABREF31 we observe that the performance on proximal and distal resection margin reached its best values in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, and the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most of the other results also improved considerably. This again demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over training on task-specific data alone. Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters bring a great benefit to a specific task. Meanwhile, the model performance on the tasks that are not trained in the final stage also improved from around 0 to 60 or 70 percent. This indicates that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best strategy is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both the paragraph and query texts are integrated together, and contextualized representations of both texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also proved useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on that dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital), who have helped us greatly in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research” (No. 2018YFC0910500). | Unanswerable
08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4 | 08a5f8d36298b57f6a4fcb4b6ae5796dc5d944a4_0 | Q: How do they introduce domain-specific features into the pre-trained language model? | integrate clinical named entity information into pre-trained language model
975a4ac9773a4af551142c324b64a0858670d06e | 975a4ac9773a4af551142c324b64a0858670d06e_0 | Q: How big is the QA-CTS task dataset?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve CTS problem in two steps. Firstly, the answer-related text is pick out. And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer. While final answer varies from task to task, which truly causes non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both the steps as a whole. In this paper, we focus on finding the answer-related substring $Xs = <X_i, X_i+1, X_i+2, ... X_j> (1 <= i < j <= n)$ from paragraph text $X$. For example, given sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and query UTF8gkai“上切缘距离"(proximal resection margin), the answer should be 6.0cm which is located in original text from index 32 to 37. With such definition, it unifies the output format of CTS tasks and therefore make the training data shareable, in order to reduce the training data quantity requirement.
Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentence, each word in this input will be mapped to a pre-trained embedding $e_i$. To tell the model $Q$ and $X$ is two different sentence, a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to. Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively. Then a hidden vector $V_s$ which contains both query and text information is generated through BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively.
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
While for the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows where $h$ is the number of heads and $W_o$ is used to projects back the dimension of concatenated matrix.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the length of hidden vector.
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use integrated representation $H_i$ to predict the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word. We use a feed forward network (FFN) to compress and calculate the score of each word $H_f$ which makes the dimension to $\left\langle l_s, 2\right\rangle $ where $l_s$ denotes the length of sequence.
Then we permute the two dimensions for softmax calculation. The calculation process of loss function can be defined as followed.
where $O_s = softmax(permute(H_f)_0)$ denotes the probability score of each word to be the start word and similarly $O_e = softmax(permute(H_f)_1)$ denotes the end. $y_s$ and $y_e$ denotes the true answer of the output for start word and end word respectively.
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
Two-stage training mechanism is previously applied on bilinear model in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model. One is trained at first for coarse-graind features while freezing the parameter of the other. Then unfreeze the other one and train the entire model in a low learning rate for fetching fine-grained features.
Inspired by this and due to the large amount of parameters in BERT model, to speed up the training process, we fine tune the BERT model with new prediction layer first to achieve a better contextualized representation performance. Then we deploy the proposed model and load the fine tuned BERT weights, attach named entity information layers and retrain the model.
Experimental Studies
In this section, we devote to experimentally evaluating our proposed task and approach. The best results in tables are in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that match any one of the ground truth answers exactly. The F$_1$-score metric is a looser metric measures the average overlap between the prediction and ground truth answer.
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved the state-of-the-art performance of question-answering, in this section we compare our proposed model with state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26. As BERT has two versions: BERT-Base and BERT-Large, due to the lack of computational resource, we can only compare with BERT-Base model instead of BERT-Large. Prediction layer is attached at the end of the original BERT-Base model and we fine tune it on our dataset. In this section, the named entity integration method is chosen to pure concatenation (Concatenate the named entity information on pathology report text and query text first and then concatenate contextualized representation and concatenated named entity information). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance both in EM-score and F$_1$-score with EM-score of 91.84% and F$_1$-score of 93.75%. QANet outperformed BERT-Base with 3.56% score in F$_1$-score but underperformed it with 0.75% score in EM-score. Compared with BERT-Base, our model led to a 5.64% performance improvement in EM-score and 3.69% in F$_1$-score. Although our model didn't outperform much with QANet in F$_1$-score (only 0.13%), our model significantly outperformed it with 6.39% score in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. These results show that both named entity information and the two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. Since named entity recognition has been applied to both the pathology report text and the query text, two integration steps are involved: one integrates the two named entity information sequences, and the other integrates the contextualized representation with the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation in both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both step one and step two could not reach convergence in our experiments, probably because it makes the model too complex to train. The remaining two methods differ only in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first achieved only 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has passed through many layers, whereas the named entity representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters, but our dataset is significantly smaller than the corpus BERT was pre-trained on. This may also explain why applying multi-head attention in both steps does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to limited computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may further improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model can be beneficial, we split our dataset by query type, train our proposed model on the different datasets, and report its performance on each of them. First, we investigate the performance of the model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the single-task models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score but remained above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for these two tasks declined by 3.11% and 0.77%.
We then investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training stage uses only the task-specific dataset rather than the mixed data. From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best results in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, and the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most of the other results also improved considerably, which again confirms the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from parameters pre-trained on the mixed data; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over models trained only on task-specific data. Apart from tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can greatly benefit a specific task. Meanwhile, the model's performance on the tasks that were not trained in the final stage also improved from around 0 to 60 or 70 percent, which indicates that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best strategy is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and allows different datasets to be utilized together. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on the paragraph and query texts are integrated together. Contextualized representations of the paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by QA-CTS have also proven useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | 17,833 sentences, 826,987 characters and 2,714 question-answer pairs |
326e08a0f5753b90622902bd4a9c94849a24b773 | 326e08a0f5753b90622902bd4a9c94849a24b773_0 | Q: How big is dataset of pathology reports collected from Ruijing Hospital?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), in which structured patient medical data are obtained, such as whether the patient has specific symptoms or diseases, what the tumor size is, how far from the tumor the resection margin lies, or what a specific laboratory test result is. It is important to extract structured data from clinical text because biomedical systems and biomedical research rely heavily on structured data, which they cannot obtain directly. In addition, clinical text often contains abundant healthcare information. CTS is therefore able to provide large-scale structured data for numerous downstream clinical research tasks.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from the original paragraph text. In some cases this is indeed already the final answer (e.g., extracting a sub-string), while in other cases several further steps, such as entity name conversion and negative word recognition, are needed to obtain the final answer. Our QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes datasets shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review related work on clinical text structuring in Section SECREF2. Then, we present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations of the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely heavily on heuristics and handcrafted extraction rules, which is more of an art than a science and incurs extensive trial-and-error experimentation. Fukuda et al. BIBREF0 identified protein names from biological papers using dictionaries and several features of protein names. Wang et al. BIBREF1 developed linguistic rules (i.e., normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique that expands the bio-entity dictionary by combining different data sources and improves the recall rate through a shortest-path edit distance algorithm. This kind of approach features interpretability and easy modifiability. However, as the number of rules increases, adding new rules to an existing system can turn into a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amounts of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five outputs. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models can be reused for another task due to differences in output format. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attribute extraction, relying mainly on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components, such as noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitting, named entity linking BIBREF19, BIBREF20, BIBREF21, and relation extraction BIBREF22, BIBREF23. This kind of method focuses on language itself, so it can handle more general tasks. However, as the depth of the pipeline grows, error propagation becomes more and more serious; conversely, using fewer components to decrease the pipeline depth leads to poor performance. So the upper limit of this method depends mainly on the worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation for introducing a pre-trained language model is to address the shortage of labeled data and the polysemy problem. Although polysemy is not a common phenomenon in the biomedical domain, the shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT to large-scale biomedical unannotated data and achieved improvements on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT to multi-type named entity recognition and discovered new entities. Both demonstrate the usefulness of introducing pre-trained language models into the biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded as extracting or generating a key-value pair, where the key $Q$ is typically a query term such as proximal resection margin and the value $V$ is the result for query term $Q$ according to the paragraph text $X$.
Generally, researchers solve the CTS problem in two steps. First, the answer-related text is picked out. Then several steps such as entity name conversion and negative word recognition are applied to generate the final answer. While the final answer varies from task to task, which is what causes the non-uniform output formats, finding the answer-related text is an action common to all tasks. Traditional methods treat both steps as a whole. In this paper, we focus on finding the answer-related substring $Xs = <X_i, X_i+1, X_i+2, ... X_j> (1 <= i < j <= n)$ from the paragraph text $X$. For example, given the sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and the query UTF8gkai“上切缘距离" (proximal resection margin), the answer should be 6.0cm, located in the original text from index 32 to 37. With such a definition, the output format of CTS tasks is unified and the training data therefore become shareable, which reduces the amount of training data required.
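A small illustration of this span formulation is sketched below (not from the authors' code); the exact character offsets depend on the string encoding, so they are computed rather than hard-coded, and disambiguation between multiple numeric matches is left out.

```python
text = ("远端胃切除标本:小弯长11.5cm,大弯长17.0cm。"
        "距上切端6.0cm、下切端8.0cm")
answer = "6.0cm"   # gold value for the query "上切缘距离" (proximal resection margin)

# The QA-CTS target is the (start, end) character span of the answer-related text.
start = text.find(answer)          # a real annotation tool must disambiguate repeated values
end = start + len(answer) - 1
print(start, end, text[start:end + 1])
```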
Since BERT BIBREF26 has already demonstrated the usefulness of a shared model, we expect that extracting the commonality of this problem and unifying the output format will make the model more powerful than a dedicated one, while also allowing the data from other tasks to supplement the training data of a specific clinical task.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For a Chinese sentence, each character in this input is mapped to a pre-trained embedding $e_i$. To tell the model that $Q$ and $X$ are two different sentences, a sentence-type input is generated, i.e., a binary label sequence denoting which sentence each character in the input belongs to. A positional encoding and a mask matrix are also constructed automatically to bring in absolute position information and to eliminate the impact of zero padding, respectively. A hidden vector $V_s$ containing both query and text information is then generated by the BERT-base model.
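A minimal sketch of this input construction using the HuggingFace tokenizer for Google's Chinese BERT checkpoint (the authors work in Keras/TensorFlow, so this particular toolkit is an assumption on our part):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
query = "上切缘距离"
text = "距上切端6.0cm、下切端8.0cm"

enc = tokenizer(query, text, padding="max_length", max_length=64, truncation=True)
# input_ids      -> [CLS] Q [SEP] X [SEP] followed by zero padding
# token_type_ids -> 0 for the query segment, 1 for the paragraph segment (the sentence-type input)
# attention_mask -> 1 for real tokens, 0 for padding (the mask matrix)
print(enc["input_ids"][:12], enc["token_type_ids"][:12], enc["attention_mask"][:12])
```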
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task: a CNER model outputs a sequence of tags, with each character of the original sentence tagged with a label following the tag scheme. In this paper we recognize the entities with the model from our previous work BIBREF12, but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of a named entity information sequence is given in Table TABREF2, where UTF8gkai“远端胃切除" is tagged as an operation, `11.5' as a number word and `cm' as a unit word. The named entity tag sequence is encoded in one-hot form. We denote the sequences for the clinical sentence and the query term as $I_{nt}$ and $I_{nq}$, respectively.
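The sketch below shows how such a BIEOS tag sequence can be turned into one-hot vectors; only three of the 44 entity types are listed, and the tag names are illustrative rather than the authors' exact label set.

```python
import numpy as np

ENTITY_TYPES = ["operation", "number", "unit"]   # 3 of the 44 types, for illustration

def one_hot(tags):
    """Map per-character BIEOS tags like 'B-operation' or 'O' to one-hot rows."""
    labels = ["O"] + [f"{p}-{t}" for t in ENTITY_TYPES for p in "BIES"]
    index = {lab: i for i, lab in enumerate(labels)}
    mat = np.zeros((len(tags), len(labels)), dtype=np.float32)
    for i, tag in enumerate(tags):
        mat[i, index[tag]] = 1.0
    return mat

# "11.5cm": the digit characters form a number entity, 'c' and 'm' a unit entity
tags = ["B-number", "I-number", "I-number", "E-number", "B-unit", "E-unit"]
print(one_hot(tags).shape)   # (6, 13) -> characters x (1 + 4 positions * 3 types)
```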
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate the two named entity information vectors $I_{nt}$ and $I_{nq}$, or the hidden contextualized representation $V_s$ and the named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first is to concatenate them, since they are sequence outputs with a common dimension. The second is to transform them into a new hidden representation. For the concatenation method, the integrated representation is $H_i = [V_s; I_n]$.
For the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as $\mathrm{MultiHead}(Q,K,V) = [\mathrm{head}_1; \dots; \mathrm{head}_h] W_o$ with $\mathrm{head}_j = \mathrm{Attention}(Q W_j^Q, K W_j^K, V W_j^V)$, where $h$ is the number of heads and $W_o$ projects the concatenated matrix back to the original dimension.
$\mathrm{Attention}$ denotes the standard scaled dot-product attention, $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}(QK^T/\sqrt{d_k})\,V$,
where $d_k$ is the length of the hidden vector.
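A NumPy sketch of the two integration operations follows; the shapes and the 45-dimensional named-entity input are illustrative assumptions, and only the single-head scaled dot-product core is shown, which multi-head attention applies $h$ times on projected inputs.

```python
import numpy as np

def concat_integration(v_s, i_n):
    """Concatenation along the feature axis: H_i = [V_s ; I_n]."""
    return np.concatenate([v_s, i_n], axis=-1)

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V, the core used inside each attention head."""
    d_k = k.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

batch, seq_len, hidden, ner_dim = 2, 16, 768, 45
v_s = np.random.randn(batch, seq_len, hidden)     # contextualized representation
i_n = np.random.randn(batch, seq_len, ner_dim)    # integrated one-hot NE information
print(concat_integration(v_s, i_n).shape)                 # (2, 16, 813)
print(scaled_dot_product_attention(v_s, v_s, v_s).shape)  # (2, 16, 768)
```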
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end index of the answer-related text. We again define this calculation as a classification of each word as the start or end word. A feed forward network (FFN) compresses the representation and computes a score $H_f$ for each word, reducing the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the length of the sequence.
We then permute the two dimensions for the softmax calculation. The loss can be defined as $\mathcal{L} = \mathrm{CrossEntropy}(O_s, y_s) + \mathrm{CrossEntropy}(O_e, y_e)$,
where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes the probability of being the end word. $y_s$ and $y_e$ denote the ground-truth start and end words, respectively.
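A NumPy sketch of this prediction step is given below, with the FFN reduced to a single linear projection for brevity (the paper does not specify the FFN depth, so that choice is an assumption).

```python
import numpy as np

def predict_span(h_i, w, b):
    """Score each position as start/end, then take the argmax of the softmax scores."""
    h_f = h_i @ w + b                       # (seq_len, 2): per-word start/end scores
    logits = h_f.T                          # "permute": (2, seq_len)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    o_s, o_e = probs                        # distributions over positions
    return int(o_s.argmax()), int(o_e.argmax())

seq_len, hidden = 64, 813                   # 813 = 768 (BERT) + 45 (NE one-hot), illustrative
rng = np.random.default_rng(0)
h_i = rng.normal(size=(seq_len, hidden))
w = rng.normal(size=(hidden, 2)) * 0.01
b = np.zeros(2)
print(predict_span(h_i, w, b))              # predicted (start, end) indices
```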
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
The two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model: one is trained first for coarse-grained features while the parameters of the other are frozen; then the other is unfrozen and the entire model is trained at a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune BERT with a new prediction layer to obtain a better contextualized representation and to speed up training. We then build the proposed model, load the fine-tuned BERT weights, attach the named entity information layers, and retrain the model.
Experimental Studies
In this section, we experimentally evaluate the proposed task and approach. The best results in each table are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that exactly match any one of the ground-truth answers. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground-truth answer.
Experimental Studies ::: Experimental Settings
To implement the deep neural network models, we utilize the Keras library BIBREF36 with the TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained with the Adam optimization algorithm BIBREF38, whose parameters are kept at the default settings except for the learning rate, which is set to $5\times 10^{-5}$. The batch size is set to 3 or 4 due to limited GPU memory. We select BERT-base as the pre-trained language model in this paper. Because pre-training a BERT language model is costly, we directly adopt the parameters pre-trained by Google on a Chinese general corpus. Named entity recognition is applied to both the pathology report texts and the query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with a state-of-the-art question answering model (i.e., QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we compare only with BERT-Base. A prediction layer is attached to the end of the original BERT-Base model and the whole model is fine-tuned on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information for the pathology report text and the query text is concatenated first, and then concatenated with the contextualized representation). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model brought a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model exceeded QANet by only 0.13% in F$_1$-score, it significantly outperformed QANet by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, the two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without the two-stage training mechanism, named entity information led to an improvement of 1.28% in EM-score but also a slight deterioration of 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% improvement in EM-score and a 3.69% improvement in F$_1$-score. These results show that both named entity information and the two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. Since named entity recognition has been applied to both the pathology report text and the query text, two integration steps are involved: one integrates the two named entity information sequences, and the other integrates the contextualized representation with the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation in both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both step one and step two could not reach convergence in our experiments, probably because it makes the model too complex to train. The remaining two methods differ only in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first achieved only 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has passed through many layers, whereas the named entity representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters, but our dataset is significantly smaller than the corpus BERT was pre-trained on. This may also explain why applying multi-head attention in both steps does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to limited computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may further improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how the shared task and shared model can be beneficial, we split our dataset by query type, train our proposed model on the different datasets, and report its performance on each of them. First, we investigate the performance of the model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the single-task models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score but remained above 90%. The shared model brought improvements of 0.69% and 0.37% in EM-score for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for these two tasks declined by 3.11% and 0.77%.
We then investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training stage uses only the task-specific dataset rather than the mixed data. From Table TABREF31 we can observe that the performance on proximal and distal resection margin achieved the best results in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, and the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most of the other results also improved considerably, which again confirms the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from parameters pre-trained on the mixed data; Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves the model performance over models trained only on task-specific data. Apart from tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can greatly benefit a specific task. Meanwhile, the model's performance on the tasks that were not trained in the final stage also improved from around 0 to 60 or 70 percent, which indicates that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best strategy is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and allows different datasets to be utilized together. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on the paragraph and query texts are integrated together. Contextualized representations of the paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for the final prediction. Experimental results on a real-world dataset demonstrate that our proposed model compares favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by QA-CTS have also proven useful for improving the performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research" (No. 2018YFC0910500). | 17,833 sentences, 826,987 characters and 2,714 question-answer pairs |
bd78483a746fda4805a7678286f82d9621bc45cf | bd78483a746fda4805a7678286f82d9621bc45cf_0 | Q: What are strong baseline models in specific tasks?
Text: Introduction
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches.
However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost.
Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance.
To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows.
We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and make dataset shareable. We also propose an effective model to integrate clinical named entity information into pre-trained language model.
Experimental results show that QA-CTS task leads to significant improvement due to shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that two-stage training mechanism has a great improvement on QA-CTS task.
The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present question answer based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6.
Related Work ::: Clinical Text Structuring
Clinical text structuring is a final problem which is highly related to practical applications. Most of existing studies are case-by-case. Few of them are developed for the general purpose structuring task. These studies can be roughly divided into three categories: rule and dictionary based methods, task-specific end-to-end methods and pipeline methods.
Rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2 rely extremely on heuristics and handcrafted extraction rules which is more of an art than a science and incurring extensive trial-and-error experiments. Fukuda et al. BIBREF0 identified protein names from biological papers by dictionaries and several features of protein names. Wang et al. BIBREF1 developed some linguistic rules (i.e. normalised/expanded term matching and substring term matching) to map specific terminology to SNOMED CT. Song et al. BIBREF2 proposed a hybrid dictionary-based bio-entity extraction technique and expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. This kind of approach features its interpretability and easy modifiability. However, with the increase of the rule amount, supplementing new rules to existing system will turn to be a rule disaster.
Task-specific end-to-end methods BIBREF3, BIBREF4 use large amount of data to automatically model the specific task. Topaz et al. BIBREF3 constructed an automated wound information identification model with five output. Tan et al. BIBREF4 identified patients undergoing radical cystectomy for bladder cancer. Although they achieved good performance, none of their models could be used to another task due to output format difference. This makes building a new model for a new task a costly job.
Pipeline methods BIBREF7, BIBREF8, BIBREF9 break down the entire task into several basic natural language processing tasks. Bill et al. BIBREF7 focused on attributes extraction which mainly relied on dependency parsing and named entity recognition BIBREF10, BIBREF11, BIBREF12. Meanwhile, Fonferko et al. BIBREF9 used more components like noun phrase chunking BIBREF13, BIBREF14, BIBREF15, part-of-speech tagging BIBREF16, BIBREF17, BIBREF18, sentence splitter, named entity linking BIBREF19, BIBREF20, BIBREF21, relation extraction BIBREF22, BIBREF23. This kind of method focus on language itself, so it can handle tasks more general. However, as the depth of pipeline grows, it is obvious that error propagation will be more and more serious. In contrary, using less components to decrease the pipeline depth will lead to a poor performance. So the upper limit of this method depends mainly on the worst component.
Related Work ::: Pre-trained Language Model
Recently, some works focused on pre-trained language representation models to capture language information from text and then utilizing the information to improve the performance of specific natural language processing tasks BIBREF24, BIBREF25, BIBREF26, BIBREF27 which makes language model a shared model to all natural language processing tasks. Radford et al. BIBREF24 proposed a framework for fine-tuning pre-trained language model. Peters et al. BIBREF25 proposed ELMo which concatenates forward and backward language models in a shallow manner. Devlin et al. BIBREF26 used bidirectional Transformers to model deep interactions between the two directions. Yang et al. BIBREF27 replaced the fixed forward or backward factorization order with all possible permutations of the factorization order and avoided using the [MASK] tag which causes pretrain-finetune discrepancy that BERT is subject to.
The main motivation of introducing pre-trained language model is to solve the shortage of labeled data and polysemy problem. Although polysemy problem is not a common phenomenon in biomedical domain, shortage of labeled data is always a non-trivial problem. Lee et al. BIBREF28 applied BERT on large-scale biomedical unannotated data and achieved improvement on biomedical named entity recognition, relation extraction and question answering. Kim et al. BIBREF29 adapted BioBERT into multi-type named entity recognition and discovered new entities. Both of them demonstrates the usefulness of introducing pre-trained language model into biomedical domain.
Question Answering based Clinical Text Structuring
Given a sequence of paragraph text $X=<x_1, x_2, ..., x_n>$, clinical text structuring (CTS) can be regarded to extract or generate a key-value pair where key $Q$ is typically a query term such as proximal resection margin and value $V$ is a result of query term $Q$ according to the paragraph text $X$.
Generally, researchers solve CTS problem in two steps. Firstly, the answer-related text is pick out. And then several steps such as entity names conversion and negative words recognition are deployed to generate the final answer. While final answer varies from task to task, which truly causes non-uniform output formats, finding the answer-related text is a common action among all tasks. Traditional methods regard both the steps as a whole. In this paper, we focus on finding the answer-related substring $Xs = <X_i, X_i+1, X_i+2, ... X_j> (1 <= i < j <= n)$ from paragraph text $X$. For example, given sentence UTF8gkai“远端胃切除标本:小弯长11.5cm,大弯长17.0cm。距上切端6.0cm、下切端8.0cm" (Distal gastrectomy specimen: measuring 11.5cm in length along the lesser curvature, 17.0cm in length along the greater curvature; 6.0cm from the proximal resection margin, and 8.0cm from the distal resection margin) and query UTF8gkai“上切缘距离"(proximal resection margin), the answer should be 6.0cm which is located in original text from index 32 to 37. With such definition, it unifies the output format of CTS tasks and therefore make the training data shareable, in order to reduce the training data quantity requirement.
Since BERT BIBREF26 has already demonstrated the usefulness of shared model, we suppose extracting commonality of this problem and unifying the output format will make the model more powerful than dedicated model and meanwhile, for a specific clinical task, use the data for other tasks to supplement the training data.
The Proposed Model for QA-CTS Task
In this section, we present an effective model for the question answering based clinical text structuring (QA-CTS). As shown in Fig. FIGREF8, paragraph text $X$ is first passed to a clinical named entity recognition (CNER) model BIBREF12 to capture named entity information and obtain one-hot CNER output tagging sequence for query text $I_{nq}$ and paragraph text $I_{nt}$ with BIEOS (Begin, Inside, End, Outside, Single) tag scheme. $I_{nq}$ and $I_{nt}$ are then integrated together into $I_n$. Meanwhile, the paragraph text $X$ and query text $Q$ are organized and passed to contextualized representation model which is pre-trained language model BERT BIBREF26 here to obtain the contextualized representation vector $V_s$ of both text and query. Afterwards, $V_s$ and $I_n$ are integrated together and fed into a feed forward network to calculate the start and end index of answer-related text. Here we define this calculation problem as a classification for each word to be the start or end word.
The Proposed Model for QA-CTS Task ::: Contextualized Representation of Sentence Text and Query Text
For any clinical free-text paragraph $X$ and query $Q$, contextualized representation is to generate the encoded vector of both of them. Here we use pre-trained language model BERT-base BIBREF26 model to capture contextual information.
The text input is constructed as `[CLS] $Q$ [SEP] $X$ [SEP]'. For Chinese sentence, each word in this input will be mapped to a pre-trained embedding $e_i$. To tell the model $Q$ and $X$ is two different sentence, a sentence type input is generated which is a binary label sequence to denote what sentence each character in the input belongs to. Positional encoding and mask matrix is also constructed automatically to bring in absolute position information and eliminate the impact of zero padding respectively. Then a hidden vector $V_s$ which contains both query and text information is generated through BERT-base model.
The Proposed Model for QA-CTS Task ::: Clinical Named Entity Information
Since BERT is trained on general corpus, its performance on biomedical domain can be improved by introducing biomedical domain-specific features. In this paper, we introduce clinical named entity information into the model.
The CNER task aims to identify and classify important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese EHRs. It can be regarded as a sequence labeling task. A CNER model typically outputs a sequence of tags. Each character of the original sentence will be tagged a label following a tag scheme. In this paper we recognize the entities by the model of our previous work BIBREF12 but trained on another corpus which has 44 entity types including operations, numbers, unit words, examinations, symptoms, negative words, etc. An illustrative example of named entity information sequence is demonstrated in Table TABREF2. In Table TABREF2, UTF8gkai“远端胃切除" is tagged as an operation, `11.5' is a number word and `cm' is an unit word. The named entity tag sequence is organized in one-hot type. We denote the sequence for clinical sentence and query term as $I_{nt}$ and $I_{nq}$, respectively.
The Proposed Model for QA-CTS Task ::: Integration Method
There are two ways to integrate two named entity information vectors $I_{nt}$ and $I_{nq}$ or hidden contextualized representation $V_s$ and named entity information $I_n$, where $I_n = [I_{nt}; I_{nq}]$. The first one is to concatenate them together because they have sequence output with a common dimension. The second one is to transform them into a new hidden representation. For the concatenation method, the integrated representation is described as follows.
While for the transformation method, we use multi-head attention BIBREF30 to encode the two vectors. It can be defined as follows, where $h$ is the number of heads and $W_o$ is used to project back the dimension of the concatenated matrix.
$Attention$ denotes the traditional attention and it can be defined as follows.
where $d_k$ is the length of hidden vector.
The Proposed Model for QA-CTS Task ::: Final Prediction
The final step is to use the integrated representation $H_i$ to predict the start and end index of the answer-related text. We again define this calculation as a classification of each word as the start or end word. A feed forward network (FFN) compresses the representation and computes a score $H_f$ for each word, reducing the dimension to $\left\langle l_s, 2\right\rangle$, where $l_s$ denotes the length of the sequence.
We then permute the two dimensions for the softmax calculation. The loss can be defined as $\mathcal{L} = \mathrm{CrossEntropy}(O_s, y_s) + \mathrm{CrossEntropy}(O_e, y_e)$,
where $O_s = softmax(permute(H_f)_0)$ denotes the probability of each word being the start word and, similarly, $O_e = softmax(permute(H_f)_1)$ denotes the probability of being the end word. $y_s$ and $y_e$ denote the ground-truth start and end words, respectively.
The Proposed Model for QA-CTS Task ::: Two-Stage Training Mechanism
The two-stage training mechanism was previously applied to bilinear models in fine-grained visual recognition BIBREF31, BIBREF32, BIBREF33. Two CNNs are deployed in the model: one is trained first for coarse-grained features while the parameters of the other are frozen; then the other is unfrozen and the entire model is trained at a low learning rate to capture fine-grained features.
Inspired by this, and because of the large number of parameters in the BERT model, we first fine-tune BERT with a new prediction layer to obtain a better contextualized representation and to speed up training. We then build the proposed model, load the fine-tuned BERT weights, attach the named entity information layers, and retrain the model.
Experimental Studies
In this section, we experimentally evaluate the proposed task and approach. The best results in each table are shown in bold.
Experimental Studies ::: Dataset and Evaluation Metrics
Our dataset is annotated based on Chinese pathology reports provided by the Department of Gastrointestinal Surgery, Ruijin Hospital. It contains 17,833 sentences, 826,987 characters and 2,714 question-answer pairs. All question-answer pairs are annotated and reviewed by four clinicians with three types of questions, namely tumor size, proximal resection margin and distal resection margin. These annotated instances have been partitioned into 1,899 training instances (12,412 sentences) and 815 test instances (5,421 sentences). Each instance has one or several sentences. Detailed statistics of different types of entities are listed in Table TABREF20.
In the following experiments, two widely-used performance measures (i.e., EM-score BIBREF34 and (macro-averaged) F$_1$-score BIBREF35) are used to evaluate the methods. The Exact Match (EM-score) metric measures the percentage of predictions that exactly match any one of the ground-truth answers. The F$_1$-score is a looser metric that measures the average overlap between the prediction and the ground-truth answer.
Experimental Studies ::: Experimental Settings
To implement deep neural network models, we utilize the Keras library BIBREF36 with TensorFlow BIBREF37 backend. Each model is run on a single NVIDIA GeForce GTX 1080 Ti GPU. The models are trained by Adam optimization algorithm BIBREF38 whose parameters are the same as the default settings except for learning rate set to $5\times 10^{-5}$. Batch size is set to 3 or 4 due to the lack of graphical memory. We select BERT-base as the pre-trained language model in this paper. Due to the high cost of pre-training BERT language model, we directly adopt parameters pre-trained by Google in Chinese general corpus. The named entity recognition is applied on both pathology report texts and query texts.
Experimental Studies ::: Comparison with State-of-the-art Methods
Since BERT has already achieved state-of-the-art performance on question answering, in this section we compare our proposed model with a state-of-the-art question answering model (i.e., QANet BIBREF39) and BERT-Base BIBREF26. BERT has two versions, BERT-Base and BERT-Large; due to limited computational resources, we compare only with BERT-Base. A prediction layer is attached to the end of the original BERT-Base model and the whole model is fine-tuned on our dataset. In this section, the named entity integration method is pure concatenation (the named entity information for the pathology report text and the query text is concatenated first, and then concatenated with the contextualized representation). Comparative results are summarized in Table TABREF23.
Table TABREF23 indicates that our proposed model achieved the best performance in both EM-score and F$_1$-score, with an EM-score of 91.84% and an F$_1$-score of 93.75%. QANet outperformed BERT-Base by 3.56% in F$_1$-score but underperformed it by 0.75% in EM-score. Compared with BERT-Base, our model brought a 5.64% improvement in EM-score and 3.69% in F$_1$-score. Although our model exceeded QANet by only 0.13% in F$_1$-score, it significantly outperformed QANet by 6.39% in EM-score.
Experimental Studies ::: Ablation Analysis
To further investigate the effects of named entity information and two-stage training mechanism for our model, we apply ablation analysis to see the improvement brought by each of them, where $\times $ refers to removing that part from our model.
As demonstrated in Table TABREF25, with named entity information enabled, two-stage training mechanism improved the result by 4.36% in EM-score and 3.8% in F$_1$-score. Without two-stage training mechanism, named entity information led to an improvement by 1.28% in EM-score but it also led to a weak deterioration by 0.12% in F$_1$-score. With both of them enabled, our proposed model achieved a 5.64% score improvement in EM-score and a 3.69% score improvement in F$_1$-score. The experimental results show that both named entity information and two-stage training mechanism are helpful to our model.
Experimental Studies ::: Comparisons Between Two Integration Methods
There are two methods to integrate named entity information into the existing model, and we compare them experimentally. Since named entity recognition has been applied to both the pathology report text and the query text, two integration steps are involved: one integrates the two named entity information sequences, and the other integrates the contextualized representation with the integrated named entity information. For multi-head attention BIBREF30, we set the number of heads to $h = 16$ with a 256-dimension hidden vector for each head.
From Table TABREF27, we can observe that applying concatenation in both steps achieved the best performance in both EM-score and F$_1$-score. Unfortunately, applying multi-head attention in both step one and step two could not reach convergence in our experiments, probably because it makes the model too complex to train. The remaining two methods differ only in the order of concatenation and multi-head attention. Applying multi-head attention to the two named entity information sequences $I_{nt}$ and $I_{nq}$ first achieved the better performance, with 89.87% in EM-score and 92.88% in F$_1$-score, while applying concatenation first achieved only 80.74% in EM-score and 84.42% in F$_1$-score. This is probably due to the processing depth of the hidden vectors and the dataset size: BERT's output has passed through many layers, whereas the named entity representation is very close to the input. With the large number of parameters in multi-head attention, massive training data is required to find the optimal parameters, but our dataset is significantly smaller than the corpus BERT was pre-trained on. This may also explain why applying multi-head attention in both steps does not converge.
Although Table TABREF27 shows that the best integration method is concatenation, multi-head attention still has great potential. Due to limited computational resources, our experiments fixed the number of heads and the hidden vector size; tuning these hyperparameters may affect the result. Tuning the integration method and utilizing larger datasets may further improve the performance.
Experimental Studies ::: Data Integration Analysis
To investigate how shared task and shared model can benefit, we split our dataset by query types, train our proposed model with different datasets and demonstrate their performance on different datasets. Firstly, we investigate the performance on model without two-stage training and named entity information.
As indicated in Table TABREF30, the model trained on the mixed data outperforms the task-specific models on 2 of the 3 original tasks in EM-score, with 81.55% for proximal resection margin and 86.85% for distal resection margin. The performance on tumor size declined by 1.57% in EM-score and 3.14% in F$_1$-score, but both scores remained above 90%. The shared model brought EM-score improvements of 0.69% and 0.37% for proximal and distal resection margin prediction, respectively, while the F$_1$-scores for those two tasks declined by 3.11% and 0.77%.
Then we investigate the performance of the model with two-stage training and named entity information. In this experiment, the pre-training process uses only the task-specific dataset, not the mixed data. From Table TABREF31 we observe that proximal and distal resection margin achieved the best performance in both EM-score and F$_1$-score. Compared with Table TABREF30, the best performance on proximal resection margin improved by 6.9% in EM-score and 7.94% in F$_1$-score, while the best performance on distal resection margin improved by 5.56% in EM-score and 6.32% in F$_1$-score. Most other results also improved considerably. This further demonstrates the usefulness of two-stage training and named entity information.
Lastly, we fine-tune the model for each task starting from the mixed-data pre-trained parameters. Table TABREF32 summarizes the results. Comparing Table TABREF32 with Table TABREF31, using mixed-data pre-trained parameters significantly improves performance over models pre-trained only on task-specific data. Except for tumor size, the results improved by 0.52% in EM-score and 1.39% in F$_1$-score for proximal resection margin, and by 2.6% in EM-score and 2.96% in F$_1$-score for distal resection margin. This shows that mixed-data pre-trained parameters can bring a substantial benefit to a specific task. Meanwhile, the model's performance on the tasks not trained in the final stage also improved from around 0 to 60 or 70 percent. This indicates that there is commonality between the different tasks and that our proposed QA-CTS task makes it learnable. In conclusion, to achieve the best performance on a specific dataset, the best approach is to pre-train the model on multiple datasets and then fine-tune it on the specific dataset.
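The recommended recipe, pre-training one shared model on the mixed data and then fine-tuning a copy on each task-specific subset, can be summarized by the sketch below; the function names, epoch counts, and dataset objects are placeholders rather than our actual experimental settings.

```python
import copy

def two_stage_training(model, mixed_data, task_datasets, train_epoch,
                       pretrain_epochs=10, finetune_epochs=5):
    """Illustrative two-stage procedure: shared pre-training, per-task fine-tuning."""
    # Stage 1: pre-train one shared model on the union of all query types.
    for _ in range(pretrain_epochs):
        train_epoch(model, mixed_data)

    # Stage 2: fine-tune a separate copy on each task-specific subset.
    task_models = {}
    for name, data in task_datasets.items():
        task_model = copy.deepcopy(model)  # start from the shared parameters
        for _ in range(finetune_epochs):
            train_epoch(task_model, data)
        task_models[name] = task_model
    return task_models
```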
Conclusion
In this paper, we present a question answering based clinical text structuring (QA-CTS) task, which unifies different clinical text structuring tasks and utilizes different datasets. A novel model is also proposed to integrate named entity information into a pre-trained language model and adapt it to the QA-CTS task. First, the sequential results of named entity recognition on both paragraph and query texts are integrated together. Contextualized representations of both paragraph and query texts are produced by a pre-trained language model. Then, the integrated named entity information and the contextualized representation are combined and fed into a feed-forward network for final prediction. Experimental results on a real-world dataset demonstrate that our proposed model competes favorably with strong baseline models on all three specific tasks. The shared task and shared model introduced by the QA-CTS task have also proved useful for improving performance on most of the task-specific datasets. In conclusion, the best way to achieve the best performance on a specific dataset is to pre-train the model on multiple datasets and then fine-tune it on that dataset.
Acknowledgment
We would like to thank Ting Li and Xizhou Hong (Ruijin Hospital) who have helped us very much in data fetching and data cleansing. This work is supported by the National Key R&D Program of China for “Precision Medical Research” (No. 2018YFC0910500).
dd155f01f6f4a14f9d25afc97504aefdc6d29c13 | dd155f01f6f4a14f9d25afc97504aefdc6d29c13_0 | Q: What aspects have been compared between various language models?
Text: Introduction
Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs requires large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.
In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.
There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.
In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performance tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\times $ longer and requires 32 $\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguably a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point.
Background and Related Work
BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimization techniques in all of our models.
Other work focuses on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.
AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”
Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\mathbf {X} \in \mathbb {R}^{k \times n}$ , the convolution layer computes $\mathbf {Z} = \tanh (\mathbf {W}_z \cdot \mathbf {X})$, $\mathbf {F} = \sigma (\mathbf {W}_f \cdot \mathbf {X})$, and $\mathbf {O} = \sigma (\mathbf {W}_o \cdot \mathbf {X})$,
where $\sigma $ denotes the sigmoid function, $\cdot $ represents masked convolution across time, and $\mathbf {W}_{\lbrace z, f, o\rbrace } \in \mathbb {R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $\mathbf {c}_t = \mathbf {f}_t \odot \mathbf {c}_{t-1} + (1 - \mathbf {f}_t) \odot \mathbf {z}_t$ and $\mathbf {h}_t = \mathbf {o}_t \odot \mathbf {c}_t$.
Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\mathbf {h}_{1:t}$ of one layer being fed as the input to the subsequent layer. In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .
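To make the layer definition above concrete, here is a minimal NumPy sketch of a single quasi-recurrent layer with fo-pooling. It assumes simple zero left-padding for the masked convolution and omits initialization, batching, and dropout, so it is an illustration of the equations rather than the implementation used in our experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qrnn_layer(X, Wz, Wf, Wo, r=2):
    """One quasi-recurrent layer with fo-pooling (illustrative sketch).

    X  : (k, n) input with k channels and n timesteps
    W* : (m, k, r) convolution weights (m output channels, window size r)
    """
    k, n = X.shape
    m = Wz.shape[0]
    # Left-pad so the convolution at time t only sees x_{t-r+1..t} (masked convolution).
    Xp = np.concatenate([np.zeros((k, r - 1)), X], axis=1)

    Z = np.empty((m, n))
    F = np.empty((m, n))
    O = np.empty((m, n))
    for t in range(n):
        window = Xp[:, t:t + r]  # (k, r) slice ending at time t
        Z[:, t] = np.tanh(np.einsum('mkr,kr->m', Wz, window))
        F[:, t] = sigmoid(np.einsum('mkr,kr->m', Wf, window))
        O[:, t] = sigmoid(np.einsum('mkr,kr->m', Wo, window))

    # Recurrent pooling: c_t = f_t * c_{t-1} + (1 - f_t) * z_t, h_t = o_t * c_t.
    H = np.empty((m, n))
    c = np.zeros(m)
    for t in range(n):
        c = F[:, t] * c + (1.0 - F[:, t]) * Z[:, t]
        H[:, t] = O[:, t] * c
    return H
```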
Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\text{``choo''}) = 0.1$ , $P(\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\text{``choo''}) \le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall.
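The toy calculation above can be reproduced in a few lines; the snippet below simply restates that arithmetic.

```python
import math

# Unigram model and corpus from the "choo choo train" example above.
p = {"choo": 0.1, "train": 0.9}
corpus = ["choo", "choo", "train"]

# Perplexity is the exponential of the average negative log-likelihood.
ppl = math.exp(-sum(math.log(p[w]) for w in corpus) / len(corpus))

# R@1: the single top prediction is always the highest-probability word.
top1 = max(p, key=p.get)
recall_at_1 = sum(w == top1 for w in corpus) / len(corpus)

print(round(ppl, 1), recall_at_1)  # 4.8 and 0.333..., i.e. R@1 = 1/3
```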
Experimental Setup
We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.
For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.
For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set.
Hyperparameters and Training
The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.
For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters.
Infrastructure
We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, since it uses the same Cortex-A7 found in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programmatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.
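The per-query energy figures reported later are derived from these 1 Hz power readings by subtracting the idle draw and dividing by the number of queries; the helpers below are a sketch of that bookkeeping with placeholder names, not our measurement script.

```python
def per_query_energy_mj(power_samples_w, idle_power_w, num_queries):
    """Energy per query in mJ from 1 Hz power-meter samples (illustrative only)."""
    # At 1 Hz each sample spans one second, so watts integrate directly to joules.
    active_energy_j = sum(max(p - idle_power_w, 0.0) for p in power_samples_w)
    return active_energy_j * 1000.0 / num_queries  # joules -> millijoules per query

def per_query_latency_ms(total_runtime_s, num_queries):
    """Average latency per query in milliseconds."""
    return total_runtime_s * 1000.0 / num_queries
```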
For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load of a desktop is much higher than the additional draw of running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.
In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD.
Results and Discussion
To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while the QRNN results are our own. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique; Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.
Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section "Infrastructure" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.
From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.
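Interpreting $r$ as the Pearson coefficient between per-sentence log perplexity and R@3 error, a minimal computation over the collected per-sentence statistics would look like the following sketch.

```python
import numpy as np

def perplexity_recall_correlation(log_ppl, r_at_3):
    """Pearson r between per-sentence log perplexity and R@3 error (sketch)."""
    log_ppl = np.asarray(log_ppl, dtype=float)
    r3_error = 1.0 - np.asarray(r_at_3, dtype=float)
    return np.corrcoef(log_ppl, r3_error)[0, 1]
```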
Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\times $ slower and 32 $\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.
In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration.
Conclusion
In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\times $ results in a 49 $\times $ rise in latency and 32 $\times $ increase in energy usage, when compared to KN-5. | Quality measures using perplexity and recall, and performance measured using latency and energy usage. |
a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c | a9d530d68fb45b52d9bad9da2cd139db5a4b2f7c_0 | Q: what classic language models are mentioned in the paper?
Text: Introduction
Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.
In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.
There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.
In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\times $ longer and requires 32 $\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point.
Background and Related Work
BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.
Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.
AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”
Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\mathbf {X} \in \mathbb {R}^{k \times n}$ , the convolution layer is $ \mathbf {Z} = \tanh (\mathbf {W}_z \cdot \mathbf {X})\\ \mathbf {F} = \sigma (\mathbf {W}_f \cdot \mathbf {X})\\ \mathbf {O} = \sigma (\mathbf {W}_o \cdot \mathbf {X}) $
where $\sigma $ denotes the sigmoid function, $\cdot $ represents masked convolution across time, and $\mathbf {W}_{\lbrace z, f, o\rbrace } \in \mathbb {R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $ \mathbf {c}_t &= \mathbf {f}_t \odot \mathbf {c}_{t-1} + (1 - \mathbf {f}_t) \odot \mathbf {z}_t\\ \mathbf {h}_t &= \mathbf {o}_t \odot \mathbf {c}_t $
Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .
Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\text{``choo''}) = 0.1$ , $P(\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\text{``choo''}) \le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall.
Experimental Setup
We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.
For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.
For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set.
Hyperparameters and Training
The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.
For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters.
Infrastructure
We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.
For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.
In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD.
Results and Discussion
To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.
Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section "Infrastructure" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.
From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.
Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\times $ slower and 32 $\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.
In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration.
Conclusion
In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\times $ results in a 49 $\times $ rise in latency and 32 $\times $ increase in energy usage, when compared to KN-5. | Kneser–Ney smoothing |
e07df8f613dbd567a35318cd6f6f4cb959f5c82d | e07df8f613dbd567a35318cd6f6f4cb959f5c82d_0 | Q: What is a commonly used evaluation metric for language models?
Text: Introduction
Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.
In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.
There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.
In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\times $ longer and requires 32 $\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point.
Background and Related Work
BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.
Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.
AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”
Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\mathbf {X} \in \mathbb {R}^{k \times n}$ , the convolution layer is $ \mathbf {Z} = \tanh (\mathbf {W}_z \cdot \mathbf {X})\\ \mathbf {F} = \sigma (\mathbf {W}_f \cdot \mathbf {X})\\ \mathbf {O} = \sigma (\mathbf {W}_o \cdot \mathbf {X}) $
where $\sigma $ denotes the sigmoid function, $\cdot $ represents masked convolution across time, and $\mathbf {W}_{\lbrace z, f, o\rbrace } \in \mathbb {R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $ \mathbf {c}_t &= \mathbf {f}_t \odot \mathbf {c}_{t-1} + (1 - \mathbf {f}_t) \odot \mathbf {z}_t\\ \mathbf {h}_t &= \mathbf {o}_t \odot \mathbf {c}_t $
Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .
Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\text{``choo''}) = 0.1$ , $P(\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\text{``choo''}) \le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall.
Experimental Setup
We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.
For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.
For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set.
Hyperparameters and Training
The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.
For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters.
Infrastructure
We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.
For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.
In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD.
Results and Discussion
To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1 ; we report the Skip- and AWD-LSTM results as seen in the original papers, while we report our QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.
Perplexity–recall scale. In Figure 1 , using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ( $1 - \text{R@3}$ ) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in Section "Infrastructure" . We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ( $r=0.85$ ), and an even stronger relationship on WT103 ( $r=0.94$ ). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.
From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.
Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\times $ slower and 32 $\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.
In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration.
Conclusion
In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\times $ results in a 49 $\times $ rise in latency and 32 $\times $ increase in energy usage, when compared to KN-5. | perplexity |
Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 .
Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing.
In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment.
There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life.
In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\times $ longer and requires 32 $\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguable a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point.
Background and Related Work
BIBREF3 evaluate recent neural language models; however, their focus is not on the computational footprint of each model, but rather the perplexity. To further reduce perplexity, many neural language model extensions exist, such as continuous cache pointer BIBREF5 and mixture of softmaxes BIBREF6 . Since our focus is on comparing “core” neural and non-neural approaches, we disregard these extra optimizations techniques in all of our models.
Other work focus on designing lightweight models for resource-efficient inference on mobile devices. BIBREF7 explore LSTMs BIBREF8 with binary weights for language modeling; BIBREF9 examine shallow feedforward neural networks for natural language processing.
AWD-LSTM. BIBREF4 show that a simple three-layer LSTM, with proper regularization and optimization techniques, can achieve state of the art on various language modeling datasets, surpassing more complex models. Specifically, BIBREF4 apply randomized backpropagation through time, variational dropout, activation regularization, embedding dropout, and temporal activation regularization. A novel scheduler for optimization, non-monotonically triggered ASGD (NT-ASGD) is also introduced. BIBREF4 name their three-layer LSTM model trained with such tricks, “AWD-LSTM.”
Quasi-Recurrent Neural Networks. Quasi-recurrent neural networks (QRNNs; BIBREF10 ) achieve current state of the art in word-level language modeling BIBREF11 . A quasi-recurrent layer comprises two separate parts: a convolution layer with three weights, and a recurrent pooling layer. Given an input $\mathbf {X} \in \mathbb {R}^{k \times n}$ , the convolution layer is $ \mathbf {Z} = \tanh (\mathbf {W}_z \cdot \mathbf {X})\\ \mathbf {F} = \sigma (\mathbf {W}_f \cdot \mathbf {X})\\ \mathbf {O} = \sigma (\mathbf {W}_o \cdot \mathbf {X}) $
where $\sigma $ denotes the sigmoid function, $\cdot $ represents masked convolution across time, and $\mathbf {W}_{\lbrace z, f, o\rbrace } \in \mathbb {R}^{m \times k \times r}$ are convolution weights with $k$ input channels, $m$ output channels, and a window size of $r$ . In the recurrent pooling layer, the convolution outputs are combined sequentially: $ \mathbf {c}_t &= \mathbf {f}_t \odot \mathbf {c}_{t-1} + (1 - \mathbf {f}_t) \odot \mathbf {z}_t\\ \mathbf {h}_t &= \mathbf {o}_t \odot \mathbf {c}_t $
Multiple QRNN layers can be stacked for deeper hierarchical representation, with the output $\mathbf {h}_{1:t}$ being fed as the input into the subsequent layer: In language modeling, a four-layer QRNN is a standard architecture BIBREF11 .
Perplexity–Recall Scale. Word-level perplexity does not have a strictly monotonic relationship with recall-at- $k$ , the fraction of top $k$ predictions that contain the correct word. A given R@ $k$ imposes a weak minimum perplexity constraint—there are many free parameters that allow for large variability in the perplexity given a certain R@ $k$ . Consider the corpus, “choo choo train,” with an associated unigram model $P(\text{``choo''}) = 0.1$ , $P(\text{``train''}) = 0.9$ , resulting in an R@1 of $1/3$ and perplexity of $4.8$ . Clearly, R@1 $ =1/3$ for all $P(\text{``choo''}) \le 0.5$ ; thus, perplexity can drop as low as 2 without affecting recall.
Experimental Setup
We conducted our experiments on Penn Treebank (PTB; BIBREF12 ) and WikiText-103 (WT103; BIBREF13 ). Preprocessed by BIBREF14 , PTB contains 887K tokens for training, 70K for validation, and 78K for test, with a vocabulary size of 10,000. On the other hand, WT103 comprises 103 million tokens for training, 217K for validation, and 245K for test, spanning a vocabulary of 267K unique tokens.
For the neural language model, we used a four-layer QRNN BIBREF10 , which achieves state-of-the-art results on a variety of datasets, such as WT103 BIBREF11 and PTB. To compare against more common LSTM architectures, we also evaluated AWD-LSTM BIBREF4 on PTB. For the non-neural approach, we used a standard five-gram model with modified Kneser-Ney smoothing BIBREF15 , as explored in BIBREF16 on PTB. We denote the QRNN models for PTB and WT103 as ptb-qrnn and wt103-qrnn, respectively.
For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set.
Hyperparameters and Training
The QRNN models followed the exact training procedure and architecture delineated in the official codebase from BIBREF11 . For ptb-qrnn, we trained the model for 550 epochs using NT-ASGD BIBREF4 , then finetuned for 300 epochs using ASGD BIBREF17 , all with a learning rate of 30 throughout. For wt103-qrnn, we followed BIBREF11 and trained the QRNN for 14 epochs, using the Adam optimizer with a learning rate of $10^{-3}$ . We also applied regularization techniques from BIBREF4 ; all the specific hyperparameters are the same as those in the repository. Our model architecture consists of 400-dimensional tied embedding weights BIBREF18 and four QRNN layers, with 1550 hidden units per layer on PTB and 2500 per layer on WT103. Both QRNN models have window sizes of $r=2$ for the first layer and $r=1$ for the rest.
For the KN-5 model, we trained an off-the-shelf five-gram model using the popular SRILM toolkit BIBREF19 . We did not specify any special hyperparameters.
Infrastructure
We trained the QRNNs with PyTorch (0.4.0; commit 1807bac) on a Titan V GPU. To evaluate the models under a resource-constrained environment, we deployed them on a Raspberry Pi 3 (Model B) running Raspbian Stretch (4.9.41-v7+). The Raspberry Pi (RPi) is not only a standard platform, but also a close surrogate to mobile phones, using the same Cortex-A7 in many phones. We then transferred the trained models to the RPi, using the same frameworks for evaluation. We plugged the RPi into a Watts Up Pro meter, a power meter that can be read programatically over USB at a frequency of 1 Hz. For the QRNNs, we used the first 350 words of the test set, and averaged the ms/query and mJ/query. For KN-5, we used the entire test set for evaluation, since the latency was much lower. To adjust for the base power load, we subtracted idle power draw from energy usage.
For a different perspective, we further evaluated all the models under a desktop environment, using an i7-4790k CPU and Titan V GPU. Because the base power load for powering a desktop is much higher than running neural language models, we collected only latency statistics. We used the entire test set, since the QRNN runs quickly.
In addition to energy and latency, another consideration for the NLP developer selecting an operating point is the cost of underlying hardware. For our setup, the RPi costs $35 USD, the CPU costs $350 USD, and the GPU costs $3000 USD.
Results and Discussion
To demonstrate the effectiveness of the QRNN models, we present the results of past and current state-of-the-art neural language models in Table 1; we take the Skip- and AWD-LSTM results from the original papers and report our own QRNN results. Skip LSTM denotes the four-layer Skip LSTM in BIBREF3 . BIBREF20 focus on Hebbian softmax, a model extension technique—Rae-LSTM refers to their base LSTM model without any extensions. In our results, KN-5 refers to the traditional five-gram model with modified Kneser-Ney smoothing, and AWD is shorthand for AWD-LSTM.
Perplexity–recall scale. In Figure 1, using KN-5 as the model, we plot the log perplexity (cross entropy) and R@3 error ($1 - \text{R@3}$) for every sentence in PTB and WT103. The horizontal clusters arise from multiple perplexity points representing the same R@3 value, as explained in the perplexity–recall discussion above. We also observe that the perplexity–recall scale is non-linear—instead, log perplexity appears to have a moderate linear relationship with R@3 error on PTB ($r=0.85$), and an even stronger relationship on WT103 ($r=0.94$). This is partially explained by WT103 having much longer sentences, and thus less noisy statistics.
From Figure 1 , we find that QRNN models yield strongly linear log perplexity–recall plots as well, where $r=0.88$ and $r=0.93$ for PTB and WT103, respectively. Note that, due to the improved model quality over KN-5, the point clouds are shifted downward compared to Figure 1 . We conclude that log perplexity, or cross entropy, provides a more human-understandable indicator of R@3 than perplexity does. Overall, these findings agree with those from BIBREF21 , which explores the log perplexity–word error rate scale in language modeling for speech recognition.
Quality–performance tradeoff. In Table 2 , from left to right, we report perplexity results on the validation and test sets, R@3 on test, and finally per-query latency and energy usage. On the RPi, KN-5 is both fast and power-efficient to run, using only about 7 ms/query and 6 mJ/query for PTB (Table 2 , row 1), and 264 ms/q and 229 mJ/q on WT103 (row 5). Taking 220 ms/query and consuming 300 mJ/query, AWD-LSTM and ptb-qrnn are still viable for mobile phones: The modern smartphone holds upwards of 10,000 joules BIBREF22 , and the latency is within usability standards BIBREF23 . Nevertheless, the models are still 49 $\times $ slower and 32 $\times $ more power-hungry than KN-5. The wt103-qrnn model is completely unusable on phones, taking over 1.2 seconds per next-word prediction. Neural models achieve perplexity drops of 60–80% and R@3 increases of 22–34%, but these improvements come at a much higher cost in latency and energy usage.
In Table 2 (last two columns), the desktop yields very different results: the neural models on PTB (rows 2–3) are 9 $\times $ slower than KN-5, but the absolute latency is only 8 ms/q, which is still much faster than what humans perceive as instantaneous BIBREF23 . If a high-end commodity GPU is available, then the models are only twice as slow as KN-5 is. From row 5, even better results are noted with wt103-qrnn: On the CPU, the QRNN is only 60% slower than KN-5 is, while the model is faster by 11 $\times $ on a GPU. These results suggest that, if only latency is considered under a commodity desktop environment, the QRNN model is humanly indistinguishable from the KN-5 model, even without using GPU acceleration.
Conclusion
In the present work, we describe and examine the tradeoff space between quality and performance for the task of language modeling. Specifically, we explore the quality–performance tradeoffs between KN-5, a non-neural approach, and AWD-LSTM and QRNN, two neural language models. We find that with decreased perplexity comes vastly increased computational requirements: In one of the NLMs, a perplexity reduction by 2.5 $\times $ results in a 49 $\times $ rise in latency and 32 $\times $ increase in energy usage, when compared to KN-5. | perplexity |
1a43df221a567869964ad3b275de30af2ac35598 | 1a43df221a567869964ad3b275de30af2ac35598_0 | Q: Which dataset do they use as a starting point in generating fake reviews?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and conclude that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet "garlic knots for breakfast" to "garlic knots for sushi".
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is 47.6% (whereas random would be 42% given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop a classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5–9% on yelp.com BIBREF6 .
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the quality requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0 .
Detecting fake reviews can either be done on an individual level or with a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children aged 12–15 that use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data through $K$ subsequent layers: $F(\mathbf{x}) = f_K(f_{K-1}(\cdots f_1(\mathbf{x})))$,
where the functions $f_k$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ($t_1, \dots, t_N$): $p(t_1, \dots, t_N) = \prod_{i=1}^{N} p(t_i \mid t_{i-1}, \dots, t_1)$,
such that the language model can be used to predict how likely a specific token at time step $i$ is, based on the $i-1$ previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
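To make the encoder-decoder idea concrete, the following is a minimal PyTorch sketch of a GRU-based seq2seq model (attention and beam search omitted; dimensions and vocabulary sizes are illustrative and not those of any model discussed later):

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the source sequence into a hidden context vector.
        _, h = self.encoder(self.src_emb(src))
        # Decode conditioned on that context (teacher forcing with the target tokens).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)  # unnormalized scores over the target vocabulary

model = Seq2Seq(src_vocab=10000, tgt_vocab=10000)
scores = model(torch.randint(0, 10000, (8, 12)), torch.randint(0, 10000, (8, 40)))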
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the basis for the generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These mix-ups may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
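The length-control trick mentioned above amounts to masking the EOS entry of the next-token distribution; a hedged sketch (the EOS index and minimum length are placeholders, and log_probs is a 1-D tensor over the vocabulary):

import torch

def mask_eos(log_probs, step, eos_id, min_len):
    # Setting the log-probability to -inf makes the EOS probability zero
    # until the required minimum length has been generated.
    if step < min_len:
        log_probs = log_probs.clone()
        log_probs[eos_id] = float("-inf")
    return log_probs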
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks had not yet been reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
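The construction of the parallel corpus itself is mechanical. A minimal sketch of how one (context, review)-pair can be assembled from the Yelp metadata is shown below; the dictionary keys follow the Yelp Challenge JSON schema and are our assumptions rather than code from our preprocessing pipeline.
\begin{lstlisting}[language=Python]
def make_pair(business, review):
    # Build the context in the fixed order [rating name city state tags].
    context = " ".join([
        str(review["stars"]),    # rating, e.g. "5"
        business["name"],        # restaurant name
        business["city"],
        business["state"],
        business["categories"],  # food tags, e.g. "Gastropubs Restaurants"
    ])
    return context, review["text"]
\end{lstlisting}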
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{beer selection}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, when naively applied to fake review generation, the results are simply repetitive (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm will be shown later.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \cite{murphy2012machine}) of the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentences components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_i, \dots, t_1) + \lambda q,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that obtain value $1$ with probability $b$ and value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
$\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
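The penalty itself is a one-line modification of the decoder's log-likelihoods. A sketch in NumPy (illustrative only; our implementation patches the openNMT-py translation phase as noted above):
\begin{lstlisting}[language=Python]
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0):
    # q_k = 1 with probability b; penalized ("forgotten") words become unlikely.
    q = np.random.binomial(1, b, size=log_p.shape)
    return log_p + lam * q
\end{lstlisting}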
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha \leftarrow 0.66$ as its effect decreases by 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the models were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst others, the use of punctuation was erratic, and pronouns were used semantically wrongly (e.g. \emph{he}, \emph{she} might be replaced, as could ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if), punctuation (e.g. ,/.,..), and apply only half memory penalties for these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} that are caused by mistakes in human motor input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to make the reader believe that a human has written the text. We omit a full pseudocode description for brevity; a rough sketch of the spelling-mistake substitution is shown below.
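The following is such a sketch; the rule list is a small stand-in for the 80 scraped rules, and the code is illustrative rather than our exact implementation.
\begin{lstlisting}[language=Python]
import random

MISSPELLINGS = {"definitely": "definately", "separate": "seperate",
                "restaurant": "restaraunt"}  # stand-in for the 80 scraped rules

def add_spelling_mistakes(text, p_spell=0.1):
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in MISSPELLINGS and random.random() < p_spell:
            words[i] = MISSPELLINGS[w.lower()]
    return " ".join(words)
\end{lstlisting}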
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. Parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance to be real or fake. The fake ones further were chosen among six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city was given as contextual information to the participants. Our aim was to use this survey to understand how well English-speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty in detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with 53\% F-score for fake review detection and 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in Appendix. Our MTurk-study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English-speakers have great difficulty in separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We requested the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
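The classifier for case `a' can be set up along the following lines (a scikit-learn sketch; the depth-2 trees mirror the detector of Section~\ref{sec:detection}, and anything not stated in the text is left at library defaults):
\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def build_detector():
    # Character n-grams up to length 3, fed to AdaBoost over 200 shallow trees.
    return make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 3)),
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=200),
    )

# Usage: fit on 5,000 human and 5,000 LSTM-Fake reviews, then score the held-out
# reviews and the 1,000 NMT-Fake* reviews with predict_proba.
\end{lstlisting}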
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews. We thus conjecture that our NMT-Fake* fake reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person) with reviews containing 10 \textendash 50 words each.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set reviews from the other in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult to detect for our experienced participants. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
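The significance tests reduce to two-sample Welch's $t$-tests on the per-participant detection counts; with SciPy this is a one-liner (a sketch, where the two arrays hold, for each of the 20 participants, the number of fakes detected out of 4):
\begin{lstlisting}[language=Python]
from scipy.stats import ttest_ind

def compare_detection(detected_nmt, detected_lstm):
    # equal_var=False selects Welch's t-test (unequal variances).
    return ttest_ind(detected_nmt, detected_lstm, equal_var=False)
\end{lstlisting}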
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy-tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representation of POS-tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
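As an illustration of the syntactic features, POS-tag $n$-grams can be extracted with spaCy roughly as follows (a sketch; the complete feature set is listed in Table~\ref{table:features_adaboost}):
\begin{lstlisting}[language=Python]
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a tagger and parser

def pos_ngrams(text, n=3):
    # Coarse POS tags; dependency-tag n-grams are built the same way from tok.dep_.
    tags = [tok.pos_ for tok in nlp(text)]
    return [" ".join(tags[i:i + n]) for i in range(len(tags) - n + 1)]
\end{lstlisting}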
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
Adaboost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant native speakers had most difficulties detecting is well detectable by AdaBoost (97\%).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties to random tokens (Algorithm~\ref{alg:aug}) was easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
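Mechanically, this is the same kind of log-likelihood manipulation as in Algorithm~\ref{alg:aug}, but with a positive bias on the chosen token indices (a sketch; the bonus of $+5$ is the value quoted above):
\begin{lstlisting}[language=Python]
def boost_keywords(log_p, vocab, keywords, bonus=5.0):
    # Raising the log-likelihood of selected tokens makes the beam search favour them.
    for word in keywords:
        if word in vocab:
            log_p[vocab[word]] += bonus
    return log_p
\end{lstlisting}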
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this an open problem that deserves more attention in fake reviews research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: a probabilistic approach.
\newblock Massachusetts Institute of Technology (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is a NMT-Fake* review, the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $~20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | the Yelp Challenge dataset |
1a43df221a567869964ad3b275de30af2ac35598 | 1a43df221a567869964ad3b275de30af2ac35598_1 | Q: Which dataset do they use a starting point in generating fake reviews?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet "garlic knots for breakfast" to "garlic knots for sushi".
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop a classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may also be used to rank services in recommendations. Ratings affect a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase raises business revenue by 5 – 9% on yelp.com BIBREF6 .
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have raised the quality requirements for crowd-turfed reviews provided to review sites, which in turn has increased the cost of high-quality reviews. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0 .
Detecting fake reviews can either be done on an individual level or with a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children aged 12-15 that use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data through $L$ subsequent layers: $F(x) = f_L(f_{L-1}(\dots f_1(x) \dots))$,
where the functions $f_i$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ($t_1, \dots, t_N$): $P(t_1, \dots, t_N) = \prod_{k=1}^{N} P(t_k \mid t_{k-1}, \dots, t_1)$,
such that the language model can be used to predict how likely a specific token at time step $k$ is, based on the $k-1$ previous tokens. Tokens are typically either words or characters.
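To make the chain-rule factorization concrete, the following minimal Python sketch scores a token sequence with a hypothetical next-token probability function; the callable is a placeholder rather than part of any specific library.

import math

def sequence_log_prob(tokens, next_token_prob):
    # next_token_prob(prefix, token) -> P(token | prefix); a placeholder callable
    log_p = 0.0
    for k in range(len(tokens)):
        log_p += math.log(next_token_prob(tokens[:k], tokens[k]))
    return log_p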
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the base for the generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. Such mix-ups may trigger known indicators of fake content BIBREF18 . For example, the review content may not match the reader's prior expectations nor their information need. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks is not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
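To illustrate the corpus construction, the following Python sketch shows one way to pair each review with its context line. The file names and JSON fields (\texttt{name}, \texttt{city}, \texttt{state}, \texttt{categories}, \texttt{stars}, \texttt{text}) are assumptions based on the public dataset format rather than an excerpt of our actual preprocessing code.
\begin{lstlisting}[language=Python]
import json

# Assumed file names and JSON fields of the public Yelp Challenge dumps.
businesses = {}
with open("business.json") as f:
    for line in f:
        b = json.loads(line)
        businesses[b["business_id"]] = b

with open("review.json") as f, \
     open("context-train.txt", "w") as src, \
     open("reviews-train.txt", "w") as tgt:
    for line in f:
        r = json.loads(line)
        b = businesses.get(r["business_id"])
        if b is None:
            continue
        # Context: [rating name city state tags]; review text as the target.
        context = " ".join([str(int(r["stars"])), b["name"], b["city"],
                            b["state"], str(b["categories"])])
        src.write(context + "\n")
        tgt.write(" ".join(r["text"].split()) + "\n")
\end{lstlisting}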
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{beer selection}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive when naively applied to fake review generation (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm will be shown later.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \cite{murphy2012machine}) in the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentences components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_i, \dots, t_1) + \lambda q,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that obtain value $1$ with probability $b$ and value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
The penalty term $\lambda q_k$ thus emphasizes sentence formation with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
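As an illustration, the following numpy sketch applies the Bernoulli penalty of the equation above to a vector of vocabulary log-likelihoods; it is a simplified stand-in for our modification of the openNMT-py translation phase, not a verbatim excerpt of it.
\begin{lstlisting}[language=Python]
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0, rng=None):
    # log_p: vector of token log-likelihoods over the vocabulary (length V)
    rng = rng or np.random.default_rng()
    q = rng.binomial(1, b, size=log_p.shape)  # q_k = 1 with probability b
    return log_p + lam * q                    # soft-penalize "forgotten" words

# The mask q is drawn once per review and kept fixed across decoding steps.
\end{lstlisting}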
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha \leftarrow 0.66$, so that its effect decreases by 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Among other issues, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if), and punctuation (e.g. ,/.,..), and apply only half of the memory penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
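For readers who prefer runnable code over pseudocode, the following Python sketch transcribes the Augment and Discount procedures, including the half penalty for grammar-critical tokens. The variable names and the abbreviated grammar set are ours, and the snippet is illustrative rather than a copy of our modified beam-search code.
\begin{lstlisting}[language=Python]
import numpy as np

GRAMMAR = {"i", "they", "our", "and", "thus", "if", ",", "."}  # abbreviated rule set

def discount(log_p, indices, lam, vocab, grammar=GRAMMAR):
    # Add penalty lam to the tokens at `indices`, halved for grammar-critical words.
    log_p = log_p.copy()
    for i in indices:
        log_p[i] += lam / 2 if vocab[i].lower() in grammar else lam
    return log_p

def augment(log_p, b, lam, alpha, last_token_idx, step, vocab, rng):
    mask = np.flatnonzero(rng.binomial(1, b, size=len(log_p)))
    log_p = discount(log_p, mask, lam * (alpha ** step), vocab)   # start penalty
    if last_token_idx is not None:
        log_p = discount(log_p, [last_token_idx], lam, vocab)     # memory penalty
    return log_p
\end{lstlisting}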
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} that are caused by mistakes in the human motoric input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
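The following sketch illustrates this kind of rule-based obfuscation; the misspelling rules shown are a tiny illustrative subset of the 80 rules and the probabilities are placeholders rather than the values used in our experiments.
\begin{lstlisting}[language=Python]
import random

# Illustrative subset of common-misspelling rules.
MISSPELLINGS = {"definitely": "definately", "restaurant": "restaraunt",
                "separate": "seperate", "occasion": "ocassion"}

def obfuscate(review, p_spell=0.05, p_typo=0.02, rng=random.Random(0)):
    out = []
    for word in review.split():
        if word.lower() in MISSPELLINGS and rng.random() < p_spell:
            word = MISSPELLINGS[word.lower()]
        elif rng.random() < p_typo and len(word) > 3:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]  # swap two letters
        out.append(word)
    return " ".join(out)
\end{lstlisting}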
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects of the generated fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. Parameter $\lambda$'s effect is more subtle: it affects how random review starts are and, to a degree, the discontinuation between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance to be real or fake. The fake ones were further chosen among six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city was given as contextual information to the participants. Our aim was to use this survey to understand how well English-speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with 53\% F-score for fake review detection and 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for the remaining user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and then proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigating their classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We requested the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
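The character-level classifier used for Figure~\ref{fig:lstm} can be sketched as follows; hyper-parameters such as the tree depth are illustrative and may differ in detail from the exact pipeline we ran.
\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),   # character n-grams up to 3
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=200),
)
# human_reviews / lstm_reviews: lists of review strings loaded elsewhere
# clf.fit(human_reviews + lstm_reviews,
#         [0] * len(human_reviews) + [1] * len(lstm_reviews))
# scores = clf.predict_proba(nmt_fake_reviews)[:, 1]
\end{lstlisting}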
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews are. We thus conjecture that our NMT-Fake* reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person) with reviews containing 10 \textendash 50 words each.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set reviews from the other in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants detection rate of NMT-Fake* reviews is not statistically different from random predictions with 95\% confidence level} (Welch's t-test).
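The significance tests above correspond to a standard Welch's t-test; a minimal sketch with hypothetical per-participant detection counts (placeholders roughly consistent with the reported means, not our study data) is:
\begin{lstlisting}[language=Python]
from scipy.stats import ttest_ind

# Hypothetical per-participant detection counts (out of 4 fakes per set).
detected_nmt  = [1, 0, 1, 2, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 2, 1, 0, 1, 1, 1]
detected_lstm = [3, 2, 2, 3, 1, 3, 2, 2, 3, 3, 2, 3, 1, 2, 4, 3, 2, 3, 2, 3]
t, p = ttest_ind(detected_nmt, detected_lstm, equal_var=False)  # Welch's t-test
\end{lstlisting}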
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
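A sketch of the syntactic feature extraction, assuming the spaCy model \texttt{en\_core\_web\_sm}; the readability features and the full tag inventories of Table~\ref{table:features_adaboost} are omitted for brevity.
\begin{lstlisting}[language=Python]
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")  # assumed spaCy model name

def tag_sequences(text):
    doc = nlp(text)
    simple   = " ".join(tok.pos_ for tok in doc)  # coarse POS tags
    detailed = " ".join(tok.tag_ for tok in doc)  # detailed POS tags
    deps     = " ".join(tok.dep_ for tok in doc)  # dependency tags
    return simple, detailed, deps

pos_vectorizer = CountVectorizer(ngram_range=(1, 4), token_pattern=r"\S+")
# pos_features = pos_vectorizer.fit_transform(simple_tag_sequence_per_review)
\end{lstlisting}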
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
Adaboost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant native speakers had most difficulties detecting is well detectable by AdaBoost (97\%).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kind of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence readers' opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
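A minimal sketch of this customization: the bonus is added to the keyword's log-likelihood before each beam-search step (names and values are illustrative).
\begin{lstlisting}[language=Python]
def boost_keywords(log_p, keyword_indices, bonus=5.0):
    # Increase the log-likelihood of selected vocabulary entries (e.g. "Mike")
    # before the beam search picks the next token.
    log_p = log_p.copy()
    for idx in keyword_indices:
        log_p[idx] += bonus
    return log_p
\end{lstlisting}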
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per-se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of that target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this an open problem that deserves more attention in fake reviews research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: a probabilistic approach.
\newblock Massachusetts Institute of Technology (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, ACM (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is a NMT-Fake* review, the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $\sim 20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | Yelp Challenge dataset BIBREF2 |
98b11f70239ef0e22511a3ecf6e413ecb726f954 | 98b11f70239ef0e22511a3ecf6e413ecb726f954_0 | Q: Do they use a pretrained NMT model to help generating reviews?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet "garlic knots for breakfast" to "garlic knots for sushi".
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop a classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may also be used to rank services in recommendations. Ratings affect a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase raises business revenue by 5 – 9% on yelp.com BIBREF6 .
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to this cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0 .
Detecting fake reviews can either be done on an individual level or as a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children aged 12–15 who use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data $x$ through $K$ subsequent layers: $y = f_K(f_{K-1}(\cdots f_1(x) \cdots))$,
where the functions $f_k$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens $(t_1, \dots, t_N)$: $p(t_1, \dots, t_N) = \prod_{i=1}^{N} p(t_i | t_{i-1}, \dots, t_1)$,
such that the language model can be used to predict how likely a specific token at time step $i$ is, based on the $i-1$ previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits, including better economy and scalability (human workers are more expensive and slower) and reduced detectability (the agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the basis for the generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key cues that humans use to identify fake reviews. Such mix-ups can trigger known indicators of fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high degree of customization at generation time, e.g. the introduction of specific waiter or food item names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy variant of beam search (beam width 1). We forgo wider beam searches as we found that the quality of the output was already adequate, and the translation-phase time consumption increases linearly with each additional beam.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks is not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and use the rest for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers .
\end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
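For concreteness, the sketch below illustrates one way such (context, review) pairs could be assembled from the Yelp Challenge JSON dumps. The file names, field names and cleaning rules are assumptions based on the public dataset format, not the exact preprocessing script used in this work; restaurant filtering and the validation/test split are omitted for brevity.
\begin{lstlisting}[language=Python]
# Sketch: building the (context, review) parallel corpus for openNMT-py
# from the Yelp Challenge JSON dumps (assumed field names).
import json, re

def clean(text):
    text = text.encode("ascii", "ignore").decode()   # drop non-ASCII characters
    text = re.sub(r"([.,!?;:()])", r" \1 ", text)    # separate punctuation from words
    return re.sub(r"\s+", " ", text).strip()         # collapse excessive whitespace

businesses = {}
with open("business.json") as f:
    for line in f:
        b = json.loads(line)
        businesses[b["business_id"]] = b

with open("review.json") as f, \
     open("context-train.txt", "w") as src, \
     open("reviews-train.txt", "w") as tgt:
    for line in f:
        r = json.loads(line)
        b = businesses.get(r["business_id"])
        if b is None:
            continue
        tags = b.get("categories") or ""
        if isinstance(tags, list):
            tags = " ".join(tags)
        # context order: [rating name city state tags]
        context = " ".join([str(int(r["stars"])), b["name"], b["city"], b["state"], tags])
        src.write(clean(context) + "\n")
        tgt.write(clean(r["text"]) + "\n")
\end{lstlisting}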
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB of RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes 72 minutes on average. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are kept at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the openNMT-py framework \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{\textit{beer selection}}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy beam search is practical in many NMT applications. However, when naively applied to fake review generation, the results are simply repetitive (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. In fact, we calculated that 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity when NMT models are used greedily for text generation is clear.
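The repetitiveness is easy to quantify; a minimal sketch of the measurement is given below, where the phrase and output file name (the latter taken from the generation command in the Appendix) are illustrative.
\begin{lstlisting}[language=Python]
# Sketch: fraction of generated reviews (one per line) starting with a phrase.
def fraction_starting_with(path, phrase="great food"):
    reviews = open(path, encoding="utf-8").read().splitlines()
    hits = sum(1 for r in reviews if r.strip().lower().startswith(phrase))
    return hits / max(len(reviews), 1)

# e.g. fraction_starting_with("pred-e8.txt")  -> about 0.43 for greedy decoding
\end{lstlisting}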
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
We detail its components and parameters in the following subsections.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats words that commonly occur for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing the probabilities (log-likelihoods \cite{murphy2012machine}) assigned by the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentences components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_{k-1}, \dots, t_1) + \lambda q,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that take value $1$ with probability $b$ and value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
The penalty $\lambda q_k$ steers sentence formation toward non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
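A minimal numerical sketch of the Bernoulli penalty is given below; the toy vocabulary and values are purely illustrative.
\begin{lstlisting}[language=Python]
# Toy illustration: a random subset of the vocabulary receives the soft
# penalty lambda, making those words unlikely (but not impossible) to be chosen.
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.binomial(1, b, size=log_p.shape)   # q_k = 1 with probability b, else 0
    return log_p + lam * q

log_p = np.log(np.array([0.5, 0.3, 0.1, 0.1]))  # toy 4-word vocabulary
print(bernoulli_penalty(log_p))
\end{lstlisting}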
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha \leftarrow 0.66$, so that its effect decreases by roughly 90\% every 5 generated words.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst others, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation marks (e.g. commas and periods), and apply only half of the memory penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
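For illustration, a rough Python transcription of Algorithm~\ref{alg:aug} is given below. It is a sketch rather than the actual openNMT-py patch: it operates on a token-to-log-probability dictionary instead of the real decoder tensors, and the grammar set shown is only an example of the rule list described above.
\begin{lstlisting}[language=Python]
# Illustrative transcription of Algorithm 2 (Augment / Discount).
import random

G = {"i", "they", "we", "our", "and", "thus", "if", ",", "."}   # example rule set

def discount(log_p, tokens, lam, grammar=G):
    for tok in tokens:
        if tok in log_p:
            log_p[tok] += lam / 2 if tok in grammar else lam    # half penalty for grammar words
    return log_p

def augment(log_p, b, lam, alpha, last_token, i, grammar=G):
    log_p = dict(log_p)                                         # work on a copy
    forgotten = [tok for tok in log_p if random.random() < b]   # Bernoulli(b) subset
    log_p = discount(log_p, forgotten, lam * alpha ** i, grammar)   # decaying start penalty
    if last_token is not None:
        log_p = discount(log_p, [last_token], lam, grammar)     # memory penalty
    return log_p
\end{lstlisting}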
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} caused by slips in the writer's motor input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
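For concreteness, a minimal sketch of one possible obfuscation step is shown below; the misspelling rules, the keyboard-adjacency map and the probabilities $p_\mathrm{spell}$, $p_\mathrm{typo}$ are illustrative placeholders rather than the scraped rule set used in this work.
\begin{lstlisting}[language=Python]
# Sketch: randomly re-introduce common misspellings and keyboard typos.
import random

MISSPELLINGS = {"definitely": "definately", "restaurant": "restarant",
                "separate": "seperate"}               # example subset of ~80 rules
NEIGHBORS = {"a": "s", "e": "r", "o": "p", "t": "y"}   # toy keyboard-adjacency map

def obfuscate(text, p_spell=0.02, p_typo=0.01, seed=0):
    random.seed(seed)
    words = []
    for w in text.split():
        lw = w.lower()
        if lw in MISSPELLINGS and random.random() < p_spell:
            w = MISSPELLINGS[lw]                        # common spelling mistake
        elif random.random() < p_typo and any(c in NEIGHBORS for c in lw):
            i = next(i for i, c in enumerate(lw) if c in NEIGHBORS)
            w = w[:i] + NEIGHBORS[lw[i]] + w[i + 1:]    # adjacent-key typo
        words.append(w)
    return " ".join(words)

print(obfuscate("The restaurant was definitely great"))
\end{lstlisting}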
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance to be real or fake. The fake ones were further drawn from six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers detect different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated the overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
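The class-averaged (macro) F-scores reported here can be computed as in the following sketch; the label arrays are placeholders for the participants' actual answers.
\begin{lstlisting}[language=Python]
# Sketch: per-class and class-averaged F-scores from the MTurk labels.
from sklearn.metrics import classification_report, f1_score

y_true = ["human"] * 994 + ["fake"] * 1006   # ground truth of the shown reviews
y_pred = ["human"] * 994 + ["fake"] * 1006   # placeholder; use participants' answers
print(classification_report(y_true, y_pred, digits=2))
print("class-averaged F1:", f1_score(y_true, y_pred, average="macro"))
\end{lstlisting}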
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for the remaining user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp} and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
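A sketch of the character n-gram classifier for case `a' is given below (the LIWC-based variant for case `b' is omitted since LIWC2015 is a commercial tool). The review lists are tiny placeholders, and older scikit-learn versions name the \texttt{estimator} argument \texttt{base\_estimator}.
\begin{lstlisting}[language=Python]
# Sketch: "human" vs. LSTM-Fake classifier with character n-grams (up to 3)
# and AdaBoost over 200 shallow (depth-2) decision trees.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

human_reviews = ["Great food and friendly staff.", "Pricey, but well worth it."]
lstm_reviews  = ["I love this place the food is the food is good.",
                 "The service was was great great great."]

vectorizer = CountVectorizer(analyzer="char", ngram_range=(1, 3))
X = vectorizer.fit_transform(human_reviews + lstm_reviews)
y = [0] * len(human_reviews) + [1] * len(lstm_reviews)

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                         n_estimators=200)
clf.fit(X, y)
# score held-out reviews, e.g. NMT-Fake* samples:
# clf.predict(vectorizer.transform(nmt_fake_reviews))
\end{lstlisting}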
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our new generated reviews do not share strong attributes with previous known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews. We thus conjecture that our NMT-Fake* fake reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with each review containing 10 \textendash 50 words.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set contained reviews from the other model; reviews in both sets were shown in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult to detect for our experienced participants. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
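The reported significance test can be checked from the summary statistics above, e.g. with a sketch using SciPy:
\begin{lstlisting}[language=Python]
# Welch's t-test from the reported means/standard deviations
# (0.8 +/- 0.7 detected NMT-Fake* vs. 2.5 +/- 1.0 detected LSTM-Fake, n=20 each).
from scipy.stats import ttest_ind_from_stats

stat, p = ttest_ind_from_stats(mean1=0.8, std1=0.7, nobs1=20,
                               mean2=2.5, std2=1.0, nobs2=20,
                               equal_var=False)   # unequal variances = Welch's test
print(stat, p)   # p is well below 0.01, consistent with the 99% confidence claim
\end{lstlisting}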
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
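A sketch of this feature extraction is given below. The Automated Readability Index uses the standard formula and may differ slightly from the NLTK-derived readability features of the actual classifier; the example review is a placeholder.
\begin{lstlisting}[language=Python]
# Sketch: POS/dependency tag sequences (for n-gram features) and a readability score.
# Requires: pip install spacy scikit-learn && python -m spacy download en_core_web_sm
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")

def tag_sequences(text):
    doc = nlp(text)
    pos  = " ".join(tok.pos_ for tok in doc)   # simple part-of-speech tags
    fine = " ".join(tok.tag_ for tok in doc)   # detailed part-of-speech tags
    dep  = " ".join(tok.dep_ for tok in doc)   # syntactic dependency tags
    return pos, fine, dep

def automated_readability_index(text):
    words = text.split()
    chars = sum(len(w) for w in words)
    sents = max(text.count(".") + text.count("!") + text.count("?"), 1)
    return 4.71 * chars / max(len(words), 1) + 0.5 * len(words) / sents - 21.43

reviews = ["Great food , great service , great beer selection ."]   # placeholder
pos_ngrams = CountVectorizer(ngram_range=(1, 4))                     # 1-4-grams of simple POS tags
X_pos = pos_ngrams.fit_transform(tag_sequences(r)[0] for r in reviews)
\end{lstlisting}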
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
Adaboost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is reliably detected by AdaBoost (97\%).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulties detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence readers' opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
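A minimal sketch of this keyword biasing is shown below; the vocabulary lookup and the bonus value are illustrative.
\begin{lstlisting}[language=Python]
# Sketch: bias generation toward a keyword by adding a constant to its
# log-likelihood before the beam/greedy selection step.
import numpy as np

def bias_keyword(log_p, vocab, word="Mike", bonus=5.0):
    idx = vocab.get(word)
    if idx is not None:
        log_p = log_p.copy()
        log_p[idx] += bonus   # a +5 bonus yielded roughly 10% keyword prevalence
    return log_p
\end{lstlisting}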
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than the state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: A probabilistic perspective.
\newblock MIT Press (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is a NMT-Fake* review, the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $~20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document}
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi).
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for the each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates review that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an affect on the outwards appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5 – 9% on yelp.com BIBREF6 .
Due to monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for a monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . Year 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the requirements of crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality review. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on these BIBREF0 .
Detecting fake reviews can either be done on an individual level or as a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children that use online news sites in age group 12-15 believe that all information on news sites are true.
Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers: DISPLAYFORM0
where the functions INLINEFORM0 are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ( INLINEFORM1 ): DISPLAYFORM0
such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is, based on the INLINEFORM1 previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the base for the generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key cues that humans use to identify fake reviews. Such mix-ups may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match the reader's prior expectations nor their information need. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) with reviews, 2) fast training time, and 3) a high degree of customization at generation time, e.g. the introduction of specific waiter or food item names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one $d$-dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
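As an illustration of the length-control mechanism mentioned above, the sketch below masks the EOS token until a minimum length is reached; the interface (a per-step vector of log-probabilities and a known EOS index) is an assumption, not the openNMT-py API.

\begin{lstlisting}[language=Python]
# Sketch (assumed decoder interface): suppress EOS until `min_len` tokens
# have been generated, so the review cannot terminate too early.
import numpy as np

def mask_eos(log_probs, step, min_len, eos_id):
    if step < min_len:
        log_probs = log_probs.copy()
        log_probs[eos_id] = -np.inf  # probability zero for EOS
    return log_probs
\end{lstlisting}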
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beams, as we found that the quality of the output was already adequate and that translation time increases linearly with each additional beam.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks had not yet been reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers .
\end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
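The following sketch illustrates how such (context, review) pairs can be assembled from the Yelp Challenge JSON dumps; the field names and file layout are assumptions about the dataset release and may differ between versions.

\begin{lstlisting}[language=Python]
# Sketch: building (context, review) pairs in the order
# [rating name city state tags]. Yelp JSON field names are assumptions.
import json

def build_pairs(business_path, review_path):
    businesses = {}
    with open(business_path) as f:
        for line in f:
            b = json.loads(line)
            businesses[b["business_id"]] = b
    with open(review_path) as f:
        for line in f:
            r = json.loads(line)
            b = businesses.get(r["business_id"])
            if b is None:
                continue
            tags = b.get("categories") or ""
            if isinstance(tags, list):          # older dumps store a list
                tags = " ".join(tags)
            context = " ".join([str(int(r["stars"])), b["name"], b["city"],
                                b["state"], tags])
            review = " ".join(r["text"].split())  # collapse excess whitespace
            yield context, review
\end{lstlisting}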
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are left at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{\textit{beer selection}}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our augmented NMT model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, when naively applied to fake review generation, the results are simply repetitive (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that, in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in the greedy use of NMTs for text generation is clear.
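This kind of estimate amounts to a simple prefix count over a sample of generated reviews; a minimal sketch (function name and phrase are illustrative) is:

\begin{lstlisting}[language=Python]
# Sketch: fraction of generated reviews starting with a given phrase.
def prefix_fraction(reviews, prefix="Great food"):
    hits = sum(1 for r in reviews if r.strip().startswith(prefix))
    return hits / max(len(reviews), 1)
\end{lstlisting}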
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm and the role of each parameter are described in the following subsections.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing the probabilities (log-likelihoods \cite{murphy2012machine}) of the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentence components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k \mid t_{k-1}, \dots, t_1) + \lambda q_k,
\end{equation}
where $q \in \{0,1\}^{V}$ is a vector of Bernoulli-distributed random values that take the value $1$ with probability $b$ and the value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
The term $\lambda q_k$ thus emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
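A minimal sketch of this augmentation is given below; it operates on a vector of vocabulary log-likelihoods and is illustrative only (our actual implementation modifies the openNMT-py translation phase).

\begin{lstlisting}[language=Python]
# Sketch of the Bernoulli penalty: a random subset of the vocabulary,
# chosen with probability b, receives the constant penalty lambda < 0.
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0, rng=None):
    rng = rng or np.random.default_rng()
    q = rng.binomial(1, b, size=log_p.shape)  # q_k in {0, 1}
    return log_p + lam * q
\end{lstlisting}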
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically with each generated token $i$. We set $\alpha \leftarrow 0.66$ as its effect decreases by 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Among other things, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation (e.g. ,/.,..), and apply only half of the penalties to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
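For readers who prefer running code over pseudocode, the snippet below is a Python transcription of Algorithm~\ref{alg:aug} (a sketch, not our exact implementation); \texttt{grammar\_ids} is the set of vocabulary indices corresponding to the grammar class $G$.

\begin{lstlisting}[language=Python]
# Sketch transcribing Algorithm 2: penalized indices in the grammar set G
# receive only half of the penalty.
import numpy as np

def discount(log_p, indices, lam, grammar_ids):
    log_p = log_p.copy()
    for i in indices:
        log_p[i] += lam / 2 if i in grammar_ids else lam
    return log_p

def augment(log_p, b, lam, alpha, last_token_id, step, grammar_ids, rng=None):
    rng = rng or np.random.default_rng()
    mask = rng.binomial(1, b, size=log_p.shape)   # Bernoulli(b) per token
    penalized = np.flatnonzero(mask)              # indices drawn as 1
    log_p = discount(log_p, penalized, lam * alpha**step, grammar_ids)  # start penalty
    if last_token_id is not None:
        log_p = discount(log_p, [last_token_id], lam, grammar_ids)      # memory penalty
    return log_p
\end{lstlisting}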
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} caused by errors in human motor input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to mislead the reader into thinking a human has written the reviews. We omit the pseudocode description for brevity.
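For concreteness, a possible shape of such an obfuscation step is sketched below; the misspelling rules and keyboard-neighbor map shown here are small illustrative samples, not the scraped rule set used in our experiments.

\begin{lstlisting}[language=Python]
# Hedged sketch of the obfuscation step: randomly re-introduce common
# misspellings and keyboard typos. Rule tables here are illustrative only.
import random

MISSPELLINGS = {"definitely": "definately", "restaurant": "restaurent"}
KEY_NEIGHBORS = {"a": "qs", "e": "wr", "o": "ip"}  # partial QWERTY map

def obfuscate(text, p_spell=0.05, p_typo=0.02, rng=random.Random(0)):
    words = []
    for w in text.split():
        lw = w.lower()
        if lw in MISSPELLINGS and rng.random() < p_spell:
            w = MISSPELLINGS[lw]
        elif rng.random() < p_typo:
            idx = [i for i, c in enumerate(w) if c in KEY_NEIGHBORS]
            if idx:
                i = rng.choice(idx)
                w = w[:i] + rng.choice(KEY_NEIGHBORS[w[i]]) + w[i + 1:]
        words.append(w)
    return " ".join(words)
\end{lstlisting}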
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake ones were further chosen among the six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty in detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty in separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for further user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and then proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigating their classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp} and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
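The `a' classifier can be set up along the lines of the sketch below (shown with scikit-learn for illustration; the exact pipeline and any hyperparameters beyond those stated above are assumptions):

\begin{lstlisting}[language=Python]
# Sketch of classifier `a': character n-grams (up to 3) fed to AdaBoost
# with 200 shallow decision trees of depth 2.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def build_detector():
    return make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 3)),
        AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                           n_estimators=200),  # `base_estimator` in older sklearn
    )

# detector = build_detector().fit(train_texts, train_labels)
# scores = detector.predict_proba(test_texts)[:, 1]
\end{lstlisting}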
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than to previous fake reviews. We thus conjecture that our NMT-Fake* reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with each review containing 10 \textendash 50 words.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set reviews from the other in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy-tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representation of POS-tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
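A sketch of this feature extraction is given below; it shows POS and dependency-tag n-grams from a spaCy parse and an explicit Automated Readability Index computation, while the full feature set of Table~\ref{table:features_adaboost} is considerably larger.

\begin{lstlisting}[language=Python]
# Sketch of defense features: tag n-grams from a spaCy parse plus a
# readability score (Automated Readability Index written out explicitly).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English model name

def tag_ngrams(text, n=3):
    doc = nlp(text)
    feats = []
    for seq in ([t.pos_ for t in doc], [t.dep_ for t in doc]):
        for k in range(1, n + 1):
            feats += ["_".join(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    return feats

def automated_readability_index(text):
    words = text.split()
    chars = sum(len(w) for w in words)
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    return 4.71 * chars / max(len(words), 1) \
        + 0.5 * len(words) / sentences - 21.43
\end{lstlisting}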
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
Adaboost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is reliably detected by AdaBoost (97\%).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulty detecting. For example, the fake reviews MTurk users had the most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence readers' opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
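In code, this kind of customization amounts to adding a constant to the chosen tokens' log-likelihoods before the beam search ranks candidates; a minimal sketch (with an assumed vocabulary mapping) is:

\begin{lstlisting}[language=Python]
# Sketch: boost the log-likelihood of selected keywords (e.g. "Mike")
# before beam search so they appear more often in generated reviews.
def boost_keywords(log_p, vocab, keywords, boost=5.0):
    log_p = log_p.copy()
    for word in keywords:
        if word in vocab:               # vocab: token -> index (assumed)
            log_p[vocab[word]] += boost
    return log_p
\end{lstlisting}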
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than the state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.P.:
\newblock Machine Learning: A Probabilistic Perspective.
\newblock MIT Press (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
  metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
  Knowledge Discovery and Data Mining, ACM (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across the different categories of NMT-Fake reviews. The category that evaded human detection best ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is an NMT-Fake* review; the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $\sim$20 \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | No |
d4d771bcb59bab4f3eb9026cda7d182eb582027d | d4d771bcb59bab4f3eb9026cda7d182eb582027d_0 | Q: How does using NMT ensure generated reviews stay on topic?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi).
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for the each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates review that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on the outward appearance of a business. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5 – 9% on yelp.com BIBREF6 .
Due to monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for a monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . Year 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the requirements of crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality review. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on these BIBREF0 .
Detecting fake reviews can either be done on an individual level or with system-wide detection tools (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children aged 12-15 that use online news sites believe that all information on news sites is true.
Neural Networks. Neural networks are function compositions that map input data $x$ through $N$ subsequent layers:
$$ F(x) = f_N(f_{N-1}(\cdots f_1(x) \cdots)), $$
where the functions $f_i$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ($t_1, \dots, t_N$):
$$ p(t_1, \dots, t_N) = \prod_{i=1}^{N} p(t_i \mid t_{i-1}, \dots, t_1), $$
such that the language model can be used to predict how likely a specific token at time step $i$ is, based on the $i-1$ previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as base for generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix-up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) with reviews, 2) fast training time, and 3) a high degree of customization at generation time, e.g. the introduction of specific waiter or food item names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one $d$-dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks had not yet been reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
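For concreteness, the following Python sketch shows one way to assemble such (context, review)-pairs from the Yelp Challenge JSON files. The field names follow the public Yelp dataset schema and the cleaning rules are simplified; this is an illustration rather than our exact preprocessing code.
\begin{lstlisting}[language=Python]
import json, re

def clean(text):
    # Keep printable ASCII, separate punctuation from words, collapse white-space.
    text = re.sub(r'[^\x20-\x7E]', ' ', text)
    text = re.sub(r'([.,!?()])', r' \1 ', text)
    return re.sub(r'\s+', ' ', text).strip()

def build_pairs(business_path, review_path):
    # business_id -> "name city state tags" metadata string.
    meta = {}
    with open(business_path) as f:
        for line in f:
            b = json.loads(line)
            tags = b.get('categories') or ''
            if isinstance(tags, list):
                tags = ' '.join(tags)
            meta[b['business_id']] = ' '.join([b['name'], b['city'], b['state'], tags])
    # Emit pairs in the fixed order [rating name city state tags] -> review text.
    with open(review_path) as f:
        for line in f:
            r = json.loads(line)
            if r['business_id'] in meta:
                context = '{} {}'.format(int(r['stars']), clean(meta[r['business_id']]))
                yield context, clean(r['text'])
\end{lstlisting}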
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are at their default values. Notably, the maximum sentence length of the input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{beer selection}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NTM model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, when naively applied to fake review generation, the results are simply repetitive (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that, in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in the greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm; their roles are detailed in the following subsections.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing the probabilities (log-likelihoods \cite{murphy2012machine}) of the generator's LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties on words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentence components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_{k-1}, \dots, t_1) + \lambda q_k,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that obtain value $1$ with probability $b$ and value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
$\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
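A minimal Python sketch of this augmentation (our own illustration; the vocabulary-sized log-likelihood vector is assumed to come from the decoder at each step):
\begin{lstlisting}[language=Python]
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0, seed=None):
    # Draw one Bernoulli mask q over the vocabulary (q_k = 1 with probability b)
    # and add the soft penalty lambda to the "forgotten" words. The mask is drawn
    # once per review and kept fixed while that review is generated.
    rng = np.random.default_rng(seed)
    q = rng.binomial(1, b, size=log_p.shape)
    return log_p + lam * q, q
\end{lstlisting}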
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha \leftarrow 0.66$, as its effect decreases by 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Among other problems, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation marks (e.g. ``,'', ``.''), and apply only half of the penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
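To make Algorithm~\ref{alg:aug} concrete, the following Python sketch mirrors the Augment and Discount procedures; it reflects our reading of the pseudocode and assumes the grammar rule set $G$ holds vocabulary indices.
\begin{lstlisting}[language=Python]
import numpy as np

def discount(log_p, indices, lam, grammar_ids):
    # Penalize the given vocabulary indices; grammar words get only half the penalty.
    log_p = log_p.copy()
    for i in indices:
        log_p[i] += lam / 2 if i in grammar_ids else lam
    return log_p

def augment(log_p, mask, lam, alpha, last_token, step, grammar_ids):
    # mask: the fixed per-review Bernoulli vector q; alpha**step implements the
    # monotonically decreasing start penalty; last_token receives the memory penalty.
    penalized = np.flatnonzero(mask)
    log_p = discount(log_p, penalized, lam * alpha ** step, grammar_ids)
    if last_token is not None:
        log_p = discount(log_p, [last_token], lam, grammar_ids)
    return log_p
\end{lstlisting}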
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} that are caused by mistakes in human motor input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from the Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to fool the reader into thinking a human has written the review. We omit a full pseudocode description for brevity; a simplified sketch follows.
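The misspelling rules and the keyboard-neighbor table below are illustrative placeholders rather than the 80 scraped rules and the weighted edit distance used in our implementation:
\begin{lstlisting}[language=Python]
import random

MISSPELLINGS = {'definitely': 'definately', 'receive': 'recieve',
                'separate': 'seperate'}             # placeholder rule subset
NEIGHBORS = {'a': 'qs', 'e': 'wr', 'i': 'uo', 'o': 'ip', 'n': 'bm'}  # toy keyboard map

def obfuscate(text, p_spell=0.05, p_typo=0.02, seed=0):
    rng = random.Random(seed)
    out = []
    for word in text.split():
        lower = word.lower()
        if lower in MISSPELLINGS and rng.random() < p_spell:
            word = MISSPELLINGS[lower]               # common spelling mistake
        elif rng.random() < p_typo and any(c in NEIGHBORS for c in lower):
            # Imitate a motor-input typo: swap one character for an adjacent key.
            pos = rng.choice([k for k, c in enumerate(lower) if c in NEIGHBORS])
            word = word[:pos] + rng.choice(NEIGHBORS[lower[pos]]) + word[pos + 1:]
        out.append(word)
    return ' '.join(out)
\end{lstlisting}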
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake ones were further chosen from among six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated the overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigating their classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We requested the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
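For case `a', the classifier can be sketched in scikit-learn as follows; the library choice and the tree depth shown are illustrative, as only the use of AdaBoost with 200 shallow trees and character n-grams up to length 3 is fixed above:
\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def char_ngram_detector():
    # Character 1-3-gram counts followed by AdaBoost over 200 shallow trees.
    # Note: the keyword is `base_estimator` in older scikit-learn releases.
    return make_pipeline(
        CountVectorizer(analyzer='char', ngram_range=(1, 3)),
        AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                           n_estimators=200))

# detector = char_ngram_detector()
# detector.fit(train_texts, train_labels)   # 5,000 human + 5,000 LSTM-Fake reviews
# scores = detector.predict_proba(nmt_fake_texts)[:, 1]
\end{lstlisting}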
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than to previous fake reviews. We thus conjecture that our NMT-Fake* reviews present a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with reviews containing 10 \textendash 50 words each.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set contained reviews from the other model, in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled exactly as many reviews as fake (4) as there were fake reviews in each set of 30 \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants detection rate of NMT-Fake* reviews is not statistically different from random predictions with 95\% confidence level} (Welch's t-test).
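The significance tests can be carried out with SciPy's Welch's t-test; the per-participant detection counts shown below are placeholders chosen only to match the reported means, not the measured data:
\begin{lstlisting}[language=Python]
import numpy as np
from scipy import stats

# Placeholder per-participant counts of correctly flagged fakes (out of 4),
# chosen only to match the reported means (0.8 for NMT-Fake*, 2.5 for LSTM-Fake).
nmt_detected  = np.array([1, 0, 1, 2, 0, 1, 1, 0, 2, 1, 0, 1, 1, 0, 1, 1, 2, 0, 1, 0])
lstm_detected = np.array([3, 2, 2, 3, 1, 3, 2, 4, 2, 3, 1, 3, 2, 3, 2, 3, 2, 3, 2, 4])

# Welch's t-test (unequal variances) between the two review types.
t, p = stats.ttest_ind(nmt_detected, lstm_detected, equal_var=False)
print('t = %.2f, p = %.4f' % (t, p))
\end{lstlisting}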
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
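The syntactic features can be produced, for example, with a spaCy pipeline followed by standard n-gram vectorizers; the model name and vectorizer settings below are illustrative (cf. Table~\ref{table:features_adaboost}).
\begin{lstlisting}[language=Python]
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load('en_core_web_sm')   # requires: python -m spacy download en_core_web_sm

def tag_sequences(texts):
    # Map each review to space-separated coarse POS, detailed POS and dependency tags.
    for doc in nlp.pipe(texts):
        yield (' '.join(t.pos_ for t in doc),
               ' '.join(t.tag_ for t in doc),
               ' '.join(t.dep_ for t in doc))

# n-gram vectorizers over the tag sequences.
pos_vec = CountVectorizer(ngram_range=(1, 4), token_pattern=r'\S+', lowercase=False)  # simple POS
tag_vec = CountVectorizer(ngram_range=(1, 3), token_pattern=r'\S+', lowercase=False)  # detailed POS
dep_vec = CountVectorizer(ngram_range=(1, 3), token_pattern=r'\S+', lowercase=False)  # dependencies
\end{lstlisting}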
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
AdaBoost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is reliably detected by AdaBoost (97\% F-score).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulty detecting. For example, the fake reviews MTurk users had most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments than ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties on random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
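A sketch of this customization step (vocabulary handling is simplified here; in practice the boost is applied to the decoder log-likelihoods inside the openNMT-py beam search):
\begin{lstlisting}[language=Python]
def boost_keywords(log_p, vocab, keywords, bonus=5.0):
    # Add a constant bonus to the log-likelihood of chosen keywords before the
    # beam search step; vocab maps token strings to vocabulary indices.
    log_p = log_p.copy()
    for word in keywords:
        if word in vocab:
            log_p[vocab[word]] += bonus
    return log_p

# Example: make the waiter name 'Mike' roughly 10% prevalent in generated reviews.
# log_p = boost_keywords(log_p, vocab, ['Mike'], bonus=5.0)
\end{lstlisting}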
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this an open problem that deserves more attention in fake reviews research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: A probabilistic perspective.
\newblock MIT Press (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is an NMT-Fake* review; the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $~20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | Unanswerable |
12f1919a3e8ca460b931c6cacc268a926399dff4 | 12f1919a3e8ca460b931c6cacc268a926399dff4_0 | Q: What kind of model do they use for detection?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and conclude that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet 'garlic knots for breakfast' to 'garlic knots for sushi'.
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects business revenue by 5–9% on yelp.com BIBREF6 .
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have resulted in an increase in the quality requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0 .
Detecting fake reviews can either be done on an individual level or as a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children in the age group 12–15 that use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data through INLINEFORM0 subsequent layers: DISPLAYFORM0
where the functions INLINEFORM0 are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ( INLINEFORM1 ): DISPLAYFORM0
such that the language model can be used to predict how likely a specific token at time step INLINEFORM0 is, based on the INLINEFORM1 previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as base for generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix-up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1 –5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks are not yet reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with a i7-4790k CPU (4.00GHz), with 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes in average 72 minutes. The model is trained for 8 epochs, i.e. over night. We call fake review generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the training settings: adam optimizer \cite{kingma2014adam} with the suggested learning rate 0.001 \cite{klein2017opennmt}. For most parts, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to teach the our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{\textit{beer selection}}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NTM model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive, when naively applied to fake review generation (See Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm are described in the following subsections.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
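To make the decoding procedure concrete, the following is a minimal Python sketch of the loop in Algorithm~\ref{alg:base}. It is an illustration only: the \texttt{nmt} object and its \texttt{encode}, \texttt{decode} and \texttt{beam} methods stand in for our modified openNMT-py translator (they are not its actual API), and \texttt{augment} and \texttt{obfuscate} correspond to Algorithm~\ref{alg:aug} and the obfuscation step of Section~\ref{sec:obfuscation}.
\begin{lstlisting}[language=Python]
def generate_review(context, nmt, augment, obfuscate,
                    b=0.3, lam=-5.0, alpha=2/3,
                    p_typo=0.01, p_spell=0.01, eos="</s>"):
    """Sketch of Algorithm 1: decode one fake review for a cleartext context.

    nmt, augment and obfuscate are placeholders injected by the caller;
    b, lam and alpha are the parameters set in Algorithm 1, while the typo
    and misspelling probabilities shown here are illustrative values.
    """
    log_p = nmt.decode(nmt.encode(context))     # initial log-likelihoods
    log_p = augment(log_p, b, lam, 1, [], 0)    # random (Bernoulli) penalty
    out, i = [], 0
    while i == 0 or out[-1] != eos:
        # start & memory penalty, decaying with position i
        log_p_aug = augment(log_p, b, lam, alpha, out[-1:], i)
        out.append(nmt.beam(log_p_aug, out))    # one greedy beam step
        i += 1
    return obfuscate(out, p_typo, p_spell)      # re-introduce human-like errors
\end{lstlisting}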
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing the probabilities (log-likelihoods \cite{murphy2012machine}) assigned by the generator's LM, i.e. the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentence components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_{k-1}, \dots, t_1) + \lambda q_{k},
\end{equation}
where $q \in \{0,1\}^{V}$ is a vector of Bernoulli-distributed random values that obtain value $1$ with probability $b$ and value $0$ with probability $1-b$, $q_k$ is the entry of $q$ corresponding to the candidate token $t_k$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
$\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
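As a small illustration of this augmentation (a sketch with numpy; the vocabulary-sized vector and variable names are ours, not openNMT-py internals):
\begin{lstlisting}[language=Python]
import numpy as np

def bernoulli_penalty(log_p, b=0.3, lam=-5.0, rng=None):
    # log_p: one log-likelihood per vocabulary entry.
    # Draw q ~ Bernoulli(b) per entry and add the soft penalty lam
    # to the "forgotten" words, as in the augmented LM above.
    rng = rng or np.random.default_rng()
    q = rng.binomial(1, b, size=log_p.shape)
    return log_p + lam * q
\end{lstlisting}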
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha = \frac{2}{3} \approx 0.66$, so that its effect decreases by roughly 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the generated reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Among other problems, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and'' and ``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words which are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation marks (e.g. `,', `.'), and apply only half of the penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
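For readers who prefer running code, the following is a minimal Python rendering of Algorithm~\ref{alg:aug}. The explicit \texttt{vocab} list and the token-to-index lookup are our additions for illustration; they are not part of the pseudocode above.
\begin{lstlisting}[language=Python]
import numpy as np

def discount(log_p, indices, penalty, grammar, vocab):
    # Grammar words (pronouns, conjunctions, punctuation) get half the penalty.
    for i in indices:
        log_p[i] += penalty / 2 if vocab[i] in grammar else penalty
    return log_p

def augment(log_p, b, lam, alpha, last_tokens, i, grammar, vocab, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.binomial(1, b, size=len(vocab))            # one 0/1 value per token
    forgotten = np.flatnonzero(p)                      # selected vocabulary entries
    log_p = discount(np.array(log_p, dtype=float), forgotten,
                     lam * alpha**i, grammar, vocab)   # start penalty
    last_ids = [vocab.index(t) for t in last_tokens if t in vocab]
    return discount(log_p, last_ids, lam, grammar, vocab)  # memory penalty
\end{lstlisting}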
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} that are caused by mistakes in the human motoric input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
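The general shape of such an obfuscation step is sketched below for concreteness; the example misspelling rules and the probabilities are placeholders rather than our actual 80 scraped rules.
\begin{lstlisting}[language=Python]
import random

def obfuscate(tokens, p_typo=0.01, p_spell=0.01, spell_rules=None, rng=None):
    # Randomly re-introduce human-like errors into a generated review.
    rng = rng or random.Random()
    spell_rules = spell_rules or {"definitely": "definately",
                                  "restaurant": "restaraunt"}
    out = []
    for tok in tokens:
        if tok in spell_rules and rng.random() < p_spell:
            tok = spell_rules[tok]                  # common spelling mistake
        elif len(tok) > 3 and rng.random() < p_typo:
            j = rng.randrange(len(tok) - 1)         # swap two adjacent characters
            tok = tok[:j] + tok[j + 1] + tok[j] + tok[j + 2:]
        out.append(tok)
    return out
\end{lstlisting}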
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. Parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake ones were further chosen among six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with 53\% F-score for fake review detection and 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and then proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigating their classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews: a two-layer character-based LSTM trained on the Yelp Challenge dataset.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or to a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-rnn in Lua). We downloaded the reviews from the Yelp Challenge dataset, preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and e-mail correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
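As a sketch of the classifier setup for case `a', assuming the reviews are available as Python lists of strings (the character n-gram range and the number of trees are the values stated above; the tree depth of 2 mirrors Section~\ref{sec:detection} and is an assumption here):
\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

def train_lstm_detector(human_reviews, lstm_fake_reviews):
    # human_reviews, lstm_fake_reviews: lists of review strings
    X = human_reviews + lstm_fake_reviews
    y = [0] * len(human_reviews) + [1] * len(lstm_fake_reviews)
    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 3)),
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                           n_estimators=200),
    )
    return clf.fit(X, y)
\end{lstlisting}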
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews are. We thus conjecture that NMT-Fake* reviews represent a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with a computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with each review containing 10 \textendash 50 words.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set contained reviews from the other model; within each set, the reviews were shown in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and thus the F-score) equals the recall in our study: each participant labeled exactly 4 out of 30 reviews as fake and each set contained exactly 4 fakes, so the number of false positives equals the number of false negatives \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants detection rate of NMT-Fake* reviews is not statistically different from random predictions with 95\% confidence level} (Welch's t-test).
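Both statistical comparisons reduce to a two-sample Welch's t-test on the per-participant detection counts, e.g. (a sketch; the two input lists hold each participant's number of detected fakes):
\begin{lstlisting}[language=Python]
from scipy.stats import ttest_ind

def compare_detection(nmt_detected, lstm_detected):
    # Welch's t-test: two-sample t-test without assuming equal variances.
    return ttest_ind(nmt_detected, lstm_detected, equal_var=False)
\end{lstlisting}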
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency-tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
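A sketch of the feature extraction, assuming spaCy's small English model (the full feature set is listed in Table~\ref{table:features_adaboost}; the Automated Readability Index shown here stands in for the 13 readability features):
\begin{lstlisting}[language=Python]
import spacy

nlp = spacy.load("en_core_web_sm")

def pos_ngrams(doc, n=2):
    # n-grams of coarse POS tags, e.g. ('ADJ', 'NOUN').
    tags = [tok.pos_ for tok in doc]
    return list(zip(*(tags[i:] for i in range(n))))

def automated_readability_index(doc):
    # ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43
    words = [t for t in doc if not t.is_punct and not t.is_space]
    chars = sum(len(t.text) for t in words)
    sents = max(1, len(list(doc.sents)))
    return 4.71 * chars / max(1, len(words)) + 0.5 * len(words) / sents - 21.43

doc = nlp("I love this restaurant. Great food, great service.")
features = {"ari": automated_readability_index(doc),
            "pos_bigrams": pos_ngrams(doc, 2)}
\end{lstlisting}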
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
AdaBoost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is well detectable by AdaBoost (97\% F-score).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulty detecting. For example, the fake reviews MTurk users had the most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties on random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse reviews.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
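Continuing the earlier sketches, boosting a keyword is a one-line change to the log-likelihoods before the beam search (illustrative code; the vocabulary lookup is ours):
\begin{lstlisting}[language=Python]
def boost_keyword(log_p, vocab, keyword="Mike", boost=5.0):
    # Increase the log-likelihood of a chosen token before beam search.
    if keyword in vocab:
        log_p[vocab.index(keyword)] += boost
    return log_p
\end{lstlisting}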
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than the state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: A probabilistic perspective.
\newblock MIT Press (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining (KDD), ACM (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is an NMT-Fake* review; the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $\sim$20 \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | AdaBoost-based classifier |
cd1034c183edf630018f47ff70b48d74d2bb1649 | cd1034c183edf630018f47ff70b48d74d2bb1649_0 | Q: Does their detection tool work better than human detection?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1 ) to generate fake reviews, and concluded that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2 . Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independent of their surrounding words, which may alert savvy readers. As an example: when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant it changed the snippet garlic knots for breakfast with garlic knots for sushi).
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for the each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates review that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is INLINEFORM0 (whereas random would be INLINEFORM1 given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0 .
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on a business's outward appearance. Already 8 years ago, researchers estimated that a one-star rating increase affects the business revenue by 5–9% on yelp.com BIBREF6 .
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8 . In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9 .
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have raised the quality requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to the cost increase, researchers hypothesize the existence of neural network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0 .
Detecting fake reviews can either be done on an individual level or with a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skillset to differentiate fake news from real news BIBREF10 . For example, 20% of children in the age group 12–15 that use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data through $k$ subsequent layers: $F(x) = f_k(f_{k-1}(\cdots f_1(x) \cdots ))$
where the functions $f_i$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ($t_1, \dots, t_N$): $p(t_1, \dots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k-1}, \dots, t_1)$
such that the language model can be used to predict how likely a specific token at time step $k$ is, based on the $k-1$ previous tokens. Tokens are typically either words or characters.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
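As a minimal illustration of the encoder-decoder idea, the following toy PyTorch sketch encodes a token sequence into a single context vector and decodes output logits from it (a simplification: it omits the attention and beam search used in practice, and the layer sizes and names are arbitrary):

import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
    def forward(self, src):
        _, h = self.rnn(self.emb(src))   # h: final hidden state = context vector
        return h

class Decoder(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)
    def forward(self, tgt, h):
        o, h = self.rnn(self.emb(tgt), h)
        return self.out(o), h            # per-step vocabulary logits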
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as base for generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1 , and may mix-up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. These may result in violations of known indicators for fake content BIBREF18 . For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high degree of customization during production time, e.g. the introduction of specific waiter or food item names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces a fixed-dimensional context vector representation of the sentence. The decoder then generates output sequences based on this embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
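For instance, length control can be implemented by masking the EOS entry of the predicted distribution until a minimum length is reached (an illustrative sketch; variable names are ours):

import numpy as np

def mask_eos(log_probs, eos_id, tokens_so_far, min_len):
    # Probability zero corresponds to a log-probability of -inf.
    if tokens_so_far < min_len:
        log_probs[eos_id] = -np.inf
    return log_probs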
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks has not yet been reported (Sep 2017) BIBREF19 . As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with a i7-4790k CPU (4.00GHz), with 32GB RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes in average 72 minutes. The model is trained for 8 epochs, i.e. over night. We call fake review generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the training settings: adam optimizer \cite{kingma2014adam} with the suggested learning rate 0.001 \cite{klein2017opennmt}. For most parts, parameters are at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to teach the our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{\textit{beer selection}}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NTM model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive, when naively applied to fake review generation (See Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm will be shown later.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing probabilities (log-likelihoods \cite{murphy2012machine}) of the generators LM, the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentences components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k | t_i, \dots, t_1) + \lambda q,
\end{equation}
where $q \in R^{V}$ is a vector of Bernoulli-distributed random values that obtain values $1$ with probability $b$ and value $0$ with probability $1-b_i$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty of including ``forgotten'' words in a review.
$\lambda q_k$ emphasizes sentence forming with non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda s^\mathrm{i}$, to our language model, which decreases monotonically for each generated token. We set $\alpha \leftarrow 0.66$ as it's effect decreases by 90\% every 5 words generated.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the models were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst others, the use of punctuation was erratic, and pronouns were used semantically wrongly (e.g. \emph{he}, \emph{she} might be replaced, as could ``and''/``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words that are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation marks (e.g. ``,'', ``.'', ``...''), and apply only half of the memory penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
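For readers who prefer executable code over pseudocode, the Python sketch below mirrors the \textsc{Augment} and \textsc{Discount} procedures of Algorithm~\ref{alg:aug}. The vocabulary list, grammar set and random generator are illustrative assumptions rather than our production implementation; in our pipeline the same logic runs on the decoder's log-likelihood vector at every beam-search step.

\begin{lstlisting}[language=Python]
import numpy as np

GRAMMAR = {"i", "they", "our", "and", "thus", "if", ",", ".", "..."}  # small illustrative set

def discount(log_p, indices, lam, vocab, grammar=GRAMMAR):
    # Apply the soft penalty; grammar-class words only receive half of it.
    for i in indices:
        log_p[i] += lam / 2 if vocab[i].lower() in grammar else lam
    return log_p

def augment(log_p, b, lam, alpha, last_token, step, vocab, rng):
    # One decoding step: Bernoulli "start" penalty plus memory penalty.
    mask = rng.binomial(1, b, size=len(vocab))        # one 0/1 value per token
    forgotten = np.flatnonzero(mask)                  # indices of "forgotten" words
    log_p = discount(np.array(log_p, dtype=float), forgotten,
                     lam * alpha**step, vocab)        # start penalty
    if last_token is not None:                        # penalise the previously emitted token
        log_p = discount(log_p, [vocab.index(last_token)], lam, vocab)
    return log_p
\end{lstlisting}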
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes: 1) \emph{typos} caused by slips in the writer's motor input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
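A simplified sketch of this obfuscation step is given below. It uses a tiny hand-written misspelling table and a crude keyboard-adjacency map in place of the scraped rules and the weighted edit distance, so it only illustrates the idea.

\begin{lstlisting}[language=Python]
import random

MISSPELLINGS = {"definitely": "definately", "restaurant": "restaraunt",
                "occurred": "occured"}                                # tiny illustrative subset
NEIGHBOURS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "n": "bm"}  # crude keyboard map

def obfuscate(text, p_spell=0.05, p_typo=0.02, rng=random):
    out = []
    for word in text.split():
        low = word.lower()
        if low in MISSPELLINGS and rng.random() < p_spell:
            word = MISSPELLINGS[low]                                  # common spelling mistake
        elif rng.random() < p_typo and any(c in NEIGHBOURS for c in low):
            i = next(i for i, c in enumerate(low) if c in NEIGHBOURS)
            word = word[:i] + rng.choice(NEIGHBOURS[low[i]]) + word[i + 1:]  # keyboard slip
        out.append(word)
    return " ".join(out)
\end{lstlisting}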
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake ones were further drawn from six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well English speakers react to different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in the Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for further user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and proceed with a user study with experienced participants. We demonstrate the statistical difference to existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigate classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained over the Yelp Challenge dataset using a two-layer character-based LSTM model.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bona fide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in lua). We downloaded the reviews from Yelp Challenge and preprocessed the data to only contain printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and email correspondence. We call fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp} and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
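A sketch of the classifier for case `a', using scikit-learn, is shown below; the review lists are short placeholders standing in for the Yelp Challenge samples and the generated reviews.

\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Placeholder reviews; the real experiment uses 5,000 reviews per class.
human_reviews = ["Excellent food and service, well worth the price.",
                 "The staff was friendly and the patio is lovely."]
lstm_fake_reviews = ["the food was good and the food was good and the service",
                     "i had the chicken and the chicken was good and good"]
nmt_fake_reviews = ["I love this restaurant. Great food, great service."]

# Character 1-3-gram features, AdaBoost with 200 shallow trees.
clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                    AdaBoostClassifier(n_estimators=200))
clf.fit(human_reviews + lstm_fake_reviews,
        [0] * len(human_reviews) + [1] * len(lstm_fake_reviews))

# Score NMT-Fake* reviews with the detector trained on human vs. LSTM-Fake.
print(clf.predict_proba(nmt_fake_reviews)[:, 1])
\end{lstlisting}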
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews are. We thus conjecture that our NMT-Fake* reviews represent a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with each review containing 10 \textendash 50 words.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set contained reviews from the other model, in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult for our experienced participants to detect. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and found that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
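The statistical comparison can be reproduced with SciPy as sketched below; the per-participant detection counts are illustrative values chosen to match the reported means, not our raw data.

\begin{lstlisting}[language=Python]
import numpy as np
from scipy import stats

# Fakes detected per participant (out of 4 planted fakes); illustrative values only.
nmt_detected  = np.array([1, 0, 1, 2, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 2, 1, 0, 1, 1])
lstm_detected = np.array([3, 2, 3, 2, 2, 3, 1, 3, 2, 3, 2, 3, 4, 2, 2, 3, 2, 3, 2, 3])

# A random guesser marking 4 of 30 reviews: hypergeometric draw of true fakes.
rng = np.random.default_rng(0)
random_detected = rng.hypergeometric(ngood=4, nbad=26, nsample=4, size=20)

print(stats.ttest_ind(nmt_detected, lstm_detected, equal_var=False))    # Welch's t-test
print(stats.ttest_ind(nmt_detected, random_detected, equal_var=False))
\end{lstlisting}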
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representations of POS tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
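A sketch of this feature extraction, using a current spaCy pipeline (which differs from the version we used) and scikit-learn vectorizers, is:

\begin{lstlisting}[language=Python]
import spacy
from sklearn.feature_extraction.text import CountVectorizer

nlp = spacy.load("en_core_web_sm")   # any English pipeline with a tagger and parser

def tag_sequences(review):
    # Turn one review into whitespace-joined tag "sentences" for the n-gram vectorizers.
    doc = nlp(review)
    simple_pos = " ".join(tok.pos_ for tok in doc)   # simple part-of-speech tags
    detailed   = " ".join(tok.tag_ for tok in doc)   # detailed part-of-speech tags
    deps       = " ".join(tok.dep_ for tok in doc)   # syntactic dependency tags
    return simple_pos, detailed, deps

word_vec = CountVectorizer(ngram_range=(1, 1))   # word unigrams
pos_vec  = CountVectorizer(ngram_range=(1, 4))   # 1/2/3/4-grams of simple POS tags
tag_vec  = CountVectorizer(ngram_range=(1, 3))   # 1/2/3-grams of detailed POS tags
dep_vec  = CountVectorizer(ngram_range=(1, 3))   # 1/2/3-grams of dependency tags
\end{lstlisting}

Readability scores (e.g. the Automated Readability Index) are appended to these count features before training the AdaBoost classifier.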
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
AdaBoost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is detected well by AdaBoost (97\% F-score).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective in detecting reviews that humans have difficulty detecting. For example, the fake reviews MTurk users had the most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts of frequently occurring words in fake reviews (such as punctuation, pronouns and articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult for humans to detect, they can be detected well with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed the use of a penalty to commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties applied to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
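For transparency, the arithmetic behind this estimate is roughly the following, assuming an average system power draw of about 1\,kW during training (the draw is an assumption on our part; only the electricity price and the training time are stated above):
\[
\text{cost} \approx 1\,\mathrm{kW} \times 8\,\mathrm{h} \times 0.16\,\mathrm{USD/kWh} \approx 1.3\,\mathrm{USD}.
\]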
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
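A sketch of this customization is shown below; the vocabulary index is assumed to come from the trained model, and the keyword list is attacker-chosen.

\begin{lstlisting}[language=Python]
def boost_keywords(log_p, vocab_index, keywords, bonus=5.0):
    # vocab_index: token -> vocabulary id (assumed to come from the trained model)
    # keywords:    attacker-chosen words, e.g. ["Mike"]
    for word in keywords:
        if word in vocab_index:
            log_p[vocab_index[word]] += bonus   # raise the word's log-likelihood
    return log_p
\end{lstlisting}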
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the targeted language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: a probabilistic approach.
\newblock Massachusetts Institute of Technology (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining. (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category with best performance ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is an NMT-Fake* review; the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $~20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | Yes |
bd9930a613dd36646e2fc016b6eb21ab34c77621 | bd9930a613dd36646e2fc016b6eb21ab34c77621_0 | Q: How many reviews in total (both generated and true) do they evaluate on Amazon Mechanical Turk?
Text: Introduction
Automatically generated fake reviews have only recently become natural enough to fool human readers. Yao et al. BIBREF0 use a deep neural network (a so-called 2-layer LSTM BIBREF1) to generate fake reviews, and conclude that these fake reviews look sufficiently genuine to fool native English speakers. They train their model using real restaurant reviews from yelp.com BIBREF2. Once trained, the model is used to generate reviews character-by-character. Due to the generation methodology, it cannot be easily targeted for a specific context (meaningful side information). Consequently, the review generation process may stray off-topic. For instance, when generating a review for a Japanese restaurant in Las Vegas, the review generation process may include references to an Italian restaurant in Baltimore. The authors of BIBREF0 apply a post-processing step (customization), which replaces food-related words with more suitable ones (sampled from the targeted restaurant). The word replacement strategy has drawbacks: it can miss certain words and replace others independently of their surrounding words, which may alert savvy readers. As an example, when we applied the customization technique described in BIBREF0 to a review for a Japanese restaurant, it changed the snippet "garlic knots for breakfast" to "garlic knots for sushi".
We propose a methodology based on neural machine translation (NMT) that improves the generation process by defining a context for each generated fake review. Our context is a clear-text sequence of: the review rating, restaurant name, city, state and food tags (e.g. Japanese, Italian). We show that our technique generates reviews that stay on topic. We can instantiate our basic technique into several variants. We vet them on Amazon Mechanical Turk and find that native English speakers are very poor at recognizing our fake generated reviews. For one variant, the participants' performance is close to random: the class-averaged F-score of detection is 47.6% (whereas random would be 42% given the 1:6 imbalance in the test). Via a user study with experienced, highly educated participants, we compare this variant (which we will henceforth refer to as NMT-Fake* reviews) with fake reviews generated using the char-LSTM-based technique from BIBREF0.
We demonstrate that NMT-Fake* reviews constitute a new category of fake reviews that cannot be detected by classifiers trained only using previously known categories of fake reviews BIBREF0 , BIBREF3 , BIBREF4 . Therefore, NMT-Fake* reviews may go undetected in existing online review sites. To meet this challenge, we develop an effective classifier that detects NMT-Fake* reviews effectively (97% F-score). Our main contributions are:
Background
Fake reviews User-generated content BIBREF5 is an integral part of the contemporary user experience on the web. Sites like tripadvisor.com, yelp.com and Google Play use user-written reviews to provide rich information that helps other users choose where to spend money and time. User reviews are used for rating services or products, and for providing qualitative opinions. User reviews and ratings may be used to rank services in recommendations. Ratings have an effect on the outward appearance of a business. Already 8 years ago, researchers estimated that a one-star rating increase affects business revenue by 5 – 9% on yelp.com BIBREF6.
Due to the monetary impact of user-generated content, some businesses have relied on so-called crowd-turfing agents BIBREF7 that promise to deliver positive ratings written by workers to a customer in exchange for monetary compensation. Crowd-turfing ethics are complicated. For example, Amazon community guidelines prohibit buying content relating to promotions, but the act of writing fabricated content is not considered illegal, nor is matching workers to customers BIBREF8. In 2015, approximately 20% of online reviews on yelp.com were suspected of being fake BIBREF9.
Nowadays, user-generated review sites like yelp.com use filters and fraudulent review detection techniques. These factors have raised the quality requirements for crowd-turfed reviews provided to review sites, which in turn has led to an increase in the cost of high-quality reviews. Due to this cost increase, researchers hypothesize the existence of neural-network-generated fake reviews. These neural-network-based fake reviews are statistically different from human-written fake reviews, and are not caught by classifiers trained on the latter BIBREF0.
Detecting fake reviews can either be done on an individual level or with a system-wide detection tool (i.e. regulation). Detecting fake online content on a personal level requires knowledge and skills in critical reading. In 2017, the National Literacy Trust assessed that young people in the UK do not have the skill set to differentiate fake news from real news BIBREF10. For example, 20% of children aged 12-15 who use online news sites believe that all information on news sites is true.
Neural Networks Neural networks are function compositions that map input data through $K$ subsequent layers:
\begin{equation}
F(x) = f_K(f_{K-1}(\dots f_2(f_1(x)) \dots)),
\end{equation}
where the functions $f_k$ are typically non-linear and chosen by experts partly for known good performance on datasets and partly for simplicity of computational evaluation. Language models (LMs) BIBREF11 are generative probability distributions that assign probabilities to sequences of tokens ($t_1, \dots, t_N$):
\begin{equation}
p(t_1, \dots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k-1}, \dots, t_1),
\end{equation}
such that the language model can be used to predict how likely a specific token at time step $k$ is, based on the $k-1$ previous tokens.
For decades, deep neural networks were thought to be computationally too difficult to train. However, advances in optimization, hardware and the availability of frameworks have shown otherwise BIBREF1 , BIBREF12 . Neural language models (NLMs) have been one of the promising application areas. NLMs are typically various forms of recurrent neural networks (RNNs), which pass through the data sequentially and maintain a memory representation of the past tokens with a hidden context vector. There are many RNN architectures that focus on different ways of updating and maintaining context vectors: Long Short-Term Memory units (LSTM) and Gated Recurrent Units (GRUs) are perhaps most popular. Neural LMs have been used for free-form text generation. In certain application areas, the quality has been high enough to sometimes fool human readers BIBREF0 . Encoder-decoder (seq2seq) models BIBREF13 are architectures of stacked RNNs, which have the ability to generate output sequences based on input sequences. The encoder network reads in a sequence of tokens, and passes it to a decoder network (a LM). In contrast to simpler NLMs, encoder-decoder networks have the ability to use additional context for generating text, which enables more accurate generation of text. Encoder-decoder models are integral in Neural Machine Translation (NMT) BIBREF14 , where the task is to translate a source text from one language to another language. NMT models additionally use beam search strategies to heuristically search the set of possible translations. Training datasets are parallel corpora; large sets of paired sentences in the source and target languages. The application of NMT techniques for online machine translation has significantly improved the quality of translations, bringing it closer to human performance BIBREF15 .
Neural machine translation models are efficient at mapping one expression to another (one-to-one mapping). Researchers have evaluated these models for conversation generation BIBREF16 , with mixed results. Some researchers attribute poor performance to the use of the negative log likelihood cost function during training, which emphasizes generation of high-confidence phrases rather than diverse phrases BIBREF17 . The results are often generic text, which lacks variation. Li et al. have suggested various augmentations to this, among others suppressing typical responses in the decoder language model to promote response diversity BIBREF17 .
System Model
We discuss the attack model, our generative machine learning method and controlling the generative process in this section.
Attack Model
Wang et al. BIBREF7 described a model of crowd-turfing attacks consisting of three entities: customers who desire to have fake reviews for a particular target (e.g. their restaurant) on a particular platform (e.g. Yelp), agents who offer fake review services to customers, and workers who are orchestrated by the agent to compose and post fake reviews.
Automated crowd-turfing attacks (ACA) replace workers by a generative model. This has several benefits including better economy and scalability (human workers are more expensive and slower) and reduced detectability (agent can better control the rate at which fake reviews are generated and posted).
We assume that the agent has access to public reviews on the review platform, by which it can train its generative model. We also assume that it is easy for the agent to create a large number of accounts on the review platform so that account-based detection or rate-limiting techniques are ineffective against fake reviews.
The quality of the generative model plays a crucial role in the attack. Yao et al. BIBREF0 propose the use of a character-based LSTM as the base for their generative model. LSTMs are not conditioned to generate reviews for a specific target BIBREF1, and may mix up concepts from different contexts during free-form generation. Mixing contextually separate words is one of the key criteria that humans use to identify fake reviews. Such mix-ups may violate known indicators of fake content BIBREF18. For example, the review content may not match prior expectations nor the information need that the reader has. We improve the attack model by considering a more capable generative model that produces more appropriate reviews: a neural machine translation (NMT) model.
Generative Model
We propose the use of NMT models for fake review generation. The method has several benefits: 1) the ability to learn how to associate context (keywords) to reviews, 2) fast training time, and 3) a high-degree of customization during production time, e.g. introduction of specific waiter or food items names into reviews.
NMT models are constructions of stacked recurrent neural networks (RNNs). They include an encoder network and a decoder network, which are jointly optimized to produce a translation of one sequence to another. The encoder rolls over the input data in sequence and produces one INLINEFORM0 -dimensional context vector representation for the sentence. The decoder then generates output sequences based on the embedding vector and an attention module, which is taught to associate output words with certain input words. The generation typically continues until a specific EOS (end of sentence) token is encountered. The review length can be controlled in many ways, e.g. by setting the probability of generating the EOS token to zero until the required length is reached.
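As an illustration, such length control could be sketched as follows, assuming access to the decoder's per-step log-likelihood vector (the function name and arguments are ours):

\begin{lstlisting}[language=Python]
import numpy as np

def mask_eos(log_p, eos_id, length_so_far, min_length):
    # Forbid the end-of-sentence token until the review is long enough.
    if length_so_far < min_length:
        log_p = log_p.copy()
        log_p[eos_id] = -np.inf   # probability zero for EOS
    return log_p
\end{lstlisting}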
NMT models often also include a beam search BIBREF14 , which generates several hypotheses and chooses the best ones amongst them. In our work, we use the greedy beam search technique. We forgo the use of additional beam searches as we found that the quality of the output was already adequate and the translation phase time consumption increases linearly for each beam used.
We use the Yelp Challenge dataset BIBREF2 for our fake review generation. The dataset (Aug 2017) contains 2.9 million 1–5 star restaurant reviews. We treat all reviews as genuine human-written reviews for the purpose of this work, since wide-scale deployment of machine-generated review attacks is not yet reported (Sep 2017) BIBREF19. As preprocessing, we remove non-printable (non-ASCII) characters and excessive white-space. We separate punctuation from words. We reserve 15,000 reviews for validation and 3,000 for testing, and the rest we use for training. NMT models require a parallel corpus of source and target sentences, i.e. a large set of (source, target)-pairs. We set up a parallel corpus by constructing (context, review)-pairs from the dataset. Next, we describe how we created our input context.
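Before moving on, the preprocessing and data split described above can be sketched as follows; raw_reviews is a placeholder for the Yelp review texts, and this is an illustration rather than our exact pipeline.

\begin{lstlisting}[language=Python]
import re, random

def clean(review):
    review = review.encode("ascii", "ignore").decode()   # drop non-printable/non-ASCII chars
    review = re.sub(r"([.,!?;:()])", r" \1 ", review)    # separate punctuation from words
    return re.sub(r"\s+", " ", review).strip()           # remove excessive whitespace

raw_reviews = ["Excellent food and service. Pricey, but well worth it."]  # placeholder
reviews = [clean(r) for r in raw_reviews]
random.seed(0)
random.shuffle(reviews)
valid, test, train = reviews[:15000], reviews[15000:18000], reviews[18000:]
\end{lstlisting}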
The Yelp Challenge dataset includes metadata about restaurants, including their names, food tags, cities and states these restaurants are located in. For each restaurant review, we fetch this metadata and use it as our input context in the NMT model. The corresponding restaurant review is similarly set as the target sentence. This method produced 2.9 million pairs of sentences in our parallel corpus. We show one example of the parallel training corpus in Example 1 below:
\begin{verbatim}
5 Public House Las Vegas NV Gastropubs Restaurants > Excellent
food and service . Pricey , but well worth it . I would recommend
the bone marrow and sampler platter for appetizers . \end{verbatim}
\noindent The order {\textbf{[rating name city state tags]}} is kept constant.
Training the model conditions it to associate certain sequences of words in the input sentence with others in the output.
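As an illustration, one (context, review) pair could be built from records mimicking the Yelp Challenge JSON schema as follows; the field names and values here are hypothetical.

\begin{lstlisting}[language=Python]
def make_pair(review, business):
    # Fixed context order: [rating name city state tags]
    context = " ".join([str(review["stars"]), business["name"], business["city"],
                        business["state"], " ".join(business["categories"])])
    return context.lower(), review["text"].lower()

# Hypothetical records mimicking the Yelp Challenge JSON schema.
business = {"name": "Public House", "city": "Las Vegas", "state": "NV",
            "categories": ["Gastropubs", "Restaurants"]}
review = {"stars": 5, "text": "Excellent food and service. Pricey, but well worth it."}
src, tgt = make_pair(review, business)   # one line each for the source and target files
\end{lstlisting}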
\subsubsection{Training Settings}
We train our NMT model on a commodity PC with an i7-4790k CPU (4.00GHz), 32GB of RAM and one NVidia GeForce GTX 980 GPU. Our system can process approximately 1,300 \textendash 1,500 source tokens/s and approximately 5,730 \textendash 5,830 output tokens/s. Training one epoch takes on average 72 minutes. The model is trained for 8 epochs, i.e. overnight. We call fake reviews generated by this model \emph{NMT-Fake reviews}. We only need to train one model to produce reviews of different ratings.
We use the following training settings: the Adam optimizer \cite{kingma2014adam} with the suggested learning rate of 0.001 \cite{klein2017opennmt}. For the most part, parameters are kept at their default values. Notably, the maximum sentence length of input and output is 50 tokens by default.
We leverage the framework openNMT-py \cite{klein2017opennmt} to train our NMT model.
We list used openNMT-py commands in Appendix Table~\ref{table:openNMT-py_commands}.
\begin{figure}[t]
\begin{center}
\begin{tabular}{ | l | }
\hline
Example 2. Greedy NMT \\
Great food, \underline{great} service, \underline{great} \textit{beer selection}. I had the \textit{Gastropubs burger} and it
\\
was delicious. The \underline{\textit{beer selection}} was also \underline{great}. \\
\\
Example 3. NMT-Fake* \\
I love this restaurant. Great food, great service. It's \textit{a little pricy} but worth\\
it for the \textit{quality} of the \textit{beer} and atmosphere you can see in \textit{Vegas}
\\
\hline
\end{tabular}
\label{table:output_comparison}
\end{center}
\caption{Na\"{i}ve text generation with NMT vs. generation using our NTM model. Repetitive patterns are \underline{underlined}. Contextual words are \emph{italicized}. Both examples here are generated based on the context given in Example~1.}
\label{fig:comparison}
\end{figure}
\subsection{Controlling generation of fake reviews}
\label{sec:generating}
Greedy NMT beam searches are practical in many NMT cases. However, the results are simply repetitive when naively applied to fake review generation (see Example~2 in Figure~\ref{fig:comparison}).
The NMT model produces many \emph{high-confidence} word predictions, which are repetitive and obviously fake. We calculated that in fact, 43\% of the generated sentences started with the phrase ``Great food''. The lack of diversity in greedy use of NMTs for text generation is clear.
\begin{algorithm}[!b]
\KwData{Desired review context $C_\mathrm{input}$ (given as cleartext), NMT model}
\KwResult{Generated review $out$ for input context $C_\mathrm{input}$}
set $b=0.3$, $\lambda=-5$, $\alpha=\frac{2}{3}$, $p_\mathrm{typo}$, $p_\mathrm{spell}$ \\
$\log p \leftarrow \text{NMT.decode(NMT.encode(}C_\mathrm{input}\text{))}$ \\
out $\leftarrow$ [~] \\
$i \leftarrow 0$ \\
$\log p \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $1$, $[~]$, 0)~~~~~~~~~~~~~~~ |~random penalty~\\
\While{$i=0$ or $o_i$ not EOS}{
$\log \Tilde{p} \leftarrow \text{Augment}(\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$)~~~~~~~~~~~ |~start \& memory penalty~\\
$o_i \leftarrow$ \text{NMT.beam}($\log \Tilde{p}$, out) \\
out.append($o_i$) \\
$i \leftarrow i+1$
}\text{return}~$\text{Obfuscate}$(out,~$p_\mathrm{typo}$,~$p_\mathrm{spell}$)
\caption{Generation of NMT-Fake* reviews.}
\label{alg:base}
\end{algorithm}
In this work, we describe how we succeeded in creating more diverse and less repetitive generated reviews, such as Example 3 in Figure~\ref{fig:comparison}.
We outline pseudocode for our methodology of generating fake reviews in Algorithm~\ref{alg:base}. There are several parameters in our algorithm.
The details of the algorithm will be shown later.
We modify the openNMT-py translation phase by changing log-probabilities before passing them to the beam search.
We notice that reviews generated with openNMT-py contain almost no language errors. As an optional post-processing step, we obfuscate reviews by introducing natural typos/misspellings randomly. In the next sections, we describe how we succeeded in generating more natural sentences from our NMT model, i.e. generating reviews like Example~3 instead of reviews like Example~2.
\subsubsection{Variation in word content}
Example 2 in Figure~\ref{fig:comparison} repeats commonly occurring words given for a specific context (e.g. \textit{great, food, service, beer, selection, burger} for Example~1). Generic review generation can be avoided by decreasing the probabilities (log-likelihoods \cite{murphy2012machine}) assigned by the generator's LM, i.e. the decoder.
We constrain the generation of sentences by randomly \emph{imposing penalties to words}.
We tried several forms of added randomness, and found that adding constant penalties to a \emph{random subset} of the target words resulted in the most natural sentence flow. We call these penalties \emph{Bernoulli penalties}, since the random variables are chosen as either 1 or 0 (on or off).
\paragraph{Bernoulli penalties to language model}
To avoid generic sentence components, we augment the default language model $p(\cdot)$ of the decoder by
\begin{equation}
\log \Tilde{p}(t_k) = \log p(t_k \mid t_{k-1}, \dots, t_1) + \lambda q_k,
\end{equation}
where $q \in \{0,1\}^{V}$ is a vector of Bernoulli-distributed random values that take the value $1$ with probability $b$ and the value $0$ with probability $1-b$, and $\lambda < 0$. Parameter $b$ controls how much of the vocabulary is forgotten and $\lambda$ is a soft penalty for including ``forgotten'' words in a review.
The term $\lambda q_k$ steers sentence formation toward non-penalized words. The randomness is reset at the start of generating a new review.
Using Bernoulli penalties in the language model, we can ``forget'' a certain proportion of words and essentially ``force'' the creation of less typical sentences. We will test the effect of these two parameters, the Bernoulli probability $b$ and log-likelihood penalty of including ``forgotten'' words $\lambda$, with a user study in Section~\ref{sec:varying}.
\paragraph{Start penalty}
We introduce start penalties to avoid generic sentence starts (e.g. ``Great food, great service''). Inspired by \cite{li2016diversity}, we add a random start penalty $\lambda \alpha^{i}$ to our language model, which decreases monotonically for each generated token. We set $\alpha = 0.66$, so that its effect decreases by roughly 90\% every 5 generated words.
\paragraph{Penalty for reusing words}
Bernoulli penalties do not prevent excessive use of certain words in a sentence (such as \textit{great} in Example~2).
To avoid excessive reuse of words, we included a memory penalty for previously used words in each translation.
Concretely, we add the penalty $\lambda$ to each word that has been generated by the greedy search.
\subsubsection{Improving sentence coherence}
\label{sec:grammar}
We visually analyzed reviews after applying these penalties to our NMT model. While the generated reviews were clearly diverse, they were \emph{incoherent}: the introduction of random penalties had degraded the grammaticality of the sentences. Amongst other things, the use of punctuation was erratic, and pronouns were used semantically incorrectly (e.g. \emph{he} and \emph{she} might be swapped, as might ``and'' and ``but''). To improve the authenticity of our reviews, we added several \emph{grammar-based rules}.
The English language has several classes of words that are important for the natural flow of sentences.
We built a list of common pronouns (e.g. I, them, our), conjunctions (e.g. and, thus, if) and punctuation marks (e.g. ``,'', ``.'', ``...''), and apply only half of the memory penalty to these words. We found that this change made the reviews more coherent. The pseudocode for this and the previous step is shown in Algorithm~\ref{alg:aug}.
The combined effect of grammar-based rules and LM augmentation is visible in Example~3, Figure~\ref{fig:comparison}.
\begin{algorithm}[!t]
\KwData{Initial log LM $\log p$, Bernoulli probability $b$, soft-penalty $\lambda$, monotonic factor $\alpha$, last generated token $o_i$, grammar rules set $G$}
\KwResult{Augmented log LM $\log \Tilde{p}$}
\begin{algorithmic}[1]
\Procedure {Augment}{$\log p$, $b$, $\lambda$, $\alpha$, $o_i$, $i$}{ \\
generate $P_{\mathrm{1:N}} \leftarrow Bernoulli(b)$~~~~~~~~~~~~~~~|~$\text{One value} \in \{0,1\}~\text{per token}$~ \\
$I \leftarrow P>0$ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|~Select positive indices~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log p$, $I$, $\lambda \cdot \alpha^i$,$G$) ~~~~~~ |~start penalty~\\
$\log \Tilde{p} \leftarrow$ $\text{Discount}$($\log \Tilde{p}$, $[o_i]$, $\lambda$,$G$) ~~~~~~~~~ |~memory penalty~\\
\textbf{return}~$\log \Tilde{p}$
}
\EndProcedure
\\
\Procedure {Discount}{$\log p$, $I$, $\lambda$, $G$}{
\State{\For{$i \in I$}{
\eIf{$o_i \in G$}{
$\log p_{i} \leftarrow \log p_{i} + \lambda/2$
}{
$\log p_{i} \leftarrow \log p_{i} + \lambda$}
}\textbf{return}~$\log p$
\EndProcedure
}}
\end{algorithmic}
\caption{Pseudocode for augmenting language model. }
\label{alg:aug}
\end{algorithm}
\subsubsection{Human-like errors}
\label{sec:obfuscation}
We notice that our NMT model produces reviews without grammar mistakes.
This is unlike real human writers, whose sentences contain two types of language mistakes 1) \emph{typos} that are caused by mistakes in the human motoric input, and 2) \emph{common spelling mistakes}.
We scraped a list of common English language spelling mistakes from Oxford dictionary\footnote{\url{https://en.oxforddictionaries.com/spelling/common-misspellings}} and created 80 rules for randomly \emph{re-introducing spelling mistakes}.
Similarly, typos are randomly reintroduced based on the weighted edit distance\footnote{\url{https://pypi.python.org/pypi/weighted-levenshtein/0.1}}, such that typos resulting in real English words with small perturbations are emphasized.
We use autocorrection tools\footnote{\url{https://pypi.python.org/pypi/autocorrect/0.1.0}} for finding these words.
We call these augmentations \emph{obfuscations}, since they aim to confound the reader to think a human has written them. We omit the pseudocode description for brevity.
\subsection{Experiment: Varying generation parameters in our NMT model}
\label{sec:varying}
Parameters $b$ and $\lambda$ control different aspects in fake reviews.
We show six different examples of generated fake reviews in Table~\ref{table:categories}.
Here, the largest differences occur with increasing values of $b$: visibly, the restaurant reviews become more extreme.
This occurs because a large portion of the vocabulary is ``forgotten''. Reviews with $b \geq 0.7$ contain more rare word combinations, e.g. ``!!!!!'' as punctuation, and they occasionally break grammaticality (``experience was awesome'').
Reviews with lower $b$ are more generic: they contain safe word combinations like ``Great place, good service'' that occur in many reviews. The effect of parameter $\lambda$ is more subtle: it affects how random the review starts are and, to a degree, the discontinuity between statements within the review.
We conducted an Amazon Mechanical Turk (MTurk) survey in order to determine what kind of NMT-Fake reviews are convincing to native English speakers. We describe the survey and results in the next section.
\begin{table}[!b]
\caption{Six different parametrizations of our NMT reviews and one example for each. The context is ``5 P~.~F~.~Chang ' s Scottsdale AZ'' in all examples.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
$(b, \lambda)$ & Example review for context \\ \hline
\hline
$(0.3, -3)$ & I love this location! Great service, great food and the best drinks in Scottsdale. \\
& The staff is very friendly and always remembers u when we come in\\\hline
$(0.3, -5)$ & Love love the food here! I always go for lunch. They have a great menu and \\
& they make it fresh to order. Great place, good service and nice staff\\\hline
$(0.5, -4)$ & I love their chicken lettuce wraps and fried rice!! The service is good, they are\\
& always so polite. They have great happy hour specials and they have a lot\\
& of options.\\\hline
$(0.7, -3)$ & Great place to go with friends! They always make sure your dining \\
& experience was awesome.\\ \hline
$(0.7, -5)$ & Still haven't ordered an entree before but today we tried them once..\\
& both of us love this restaurant....\\\hline
$(0.9, -4)$ & AMAZING!!!!! Food was awesome with excellent service. Loved the lettuce \\
& wraps. Great drinks and wine! Can't wait to go back so soon!!\\ \hline
\end{tabular}
\label{table:categories}
\end{center}
\end{table}
\subsubsection{MTurk study}
\label{sec:amt}
We created 20 jobs, each with 100 questions, and requested master workers in MTurk to complete the jobs.
We randomly generated each survey for the participants. Each review had a 50\% chance of being real or fake. The fake reviews were further drawn from six (6) categories of fake reviews (Table~\ref{table:categories}).
The restaurant and the city were given as contextual information to the participants. Our aim was to use this survey to understand how well native English speakers can detect different parametrizations of NMT-Fake reviews.
Table~\ref{table:amt_pop} in Appendix summarizes the statistics for respondents in the survey. All participants were native English speakers from America. The base rate (50\%) was revealed to the participants prior to the study.
We first investigated overall detection of any NMT-Fake reviews (1,006 fake reviews and 994 real reviews). We found that the participants had great difficulty detecting our fake reviews. On average, the reviews were detected with a class-averaged \emph{F-score of only 56\%}, with a 53\% F-score for fake review detection and a 59\% F-score for real review detection. The results are very close to \emph{random detection}, where precision, recall and F-score would each be 50\%. Results are recorded in Table~\ref{table:MTurk_super}. Overall, the fake review generation is very successful, since the human detection rate across categories is close to random.
\begin{table}[t]
\caption{Effectiveness of Mechanical Turkers in distinguishing human-written reviews from fake reviews generated by our NMT model (all variants).}
\begin{center}
\begin{tabular}{ | c | c |c |c | c | }
\hline
\multicolumn{5}{|c|}{Classification report}
\\ \hline
Review Type & Precision & Recall & F-score & Support \\ \hline
\hline
Human & 55\% & 63\% & 59\% & 994\\
NMT-Fake & 57\% & 50\% & 53\% & 1006 \\
\hline
\end{tabular}
\label{table:MTurk_super}
\end{center}
\end{table}
We noticed some variation in the detection of different fake review categories. The respondents in our MTurk survey had the most difficulty recognizing reviews of category $(b=0.3, \lambda=-5)$, where the true positive rate was $40.4\%$, while the true negative rate of the real class was $62.7\%$. The precisions were $16\%$ and $86\%$, respectively. The class-averaged F-score is $47.6\%$, which is close to random. Detailed classification reports are shown in Table~\ref{table:MTurk_sub} in Appendix. Our MTurk study shows that \emph{our NMT-Fake reviews pose a significant threat to review systems}, since \emph{ordinary native English speakers have great difficulty separating real reviews from fake reviews}. We use the review category $(b=0.3, \lambda=-5)$ for future user tests in this paper, since MTurk participants had the most difficulty detecting these reviews. We refer to this category as NMT-Fake* in this paper.
\section{Evaluation}
\graphicspath{ {figures/}}
We evaluate our fake reviews by first comparing them statistically to previously proposed types of fake reviews, and then proceed with a user study with experienced participants. We demonstrate the statistical difference from existing fake review types \cite{yao2017automated,mukherjee2013yelp,rayana2015collective} by training classifiers to detect previous types and investigating their classification performance.
\subsection{Replication of state-of-the-art model: LSTM}
\label{sec:repl}
Yao et al. \cite{yao2017automated} presented the current state-of-the-art generative model for fake reviews. The model is trained on the Yelp Challenge dataset using a two-layer character-based LSTM model.
We asked the authors of \cite{yao2017automated} for access to their LSTM model or a fake review dataset generated by their model. Unfortunately, they were not able to share either of these with us. We therefore replicated their model as closely as we could, based on their paper and e-mail correspondence\footnote{We are committed to sharing our code with bonafide researchers for the sake of reproducibility.}.
We used the same graphics card (GeForce GTX) and trained using the same framework (torch-RNN in Lua). We downloaded the reviews from the Yelp Challenge dataset and preprocessed the data to contain only printable ASCII characters, and filtered out non-restaurant reviews. We trained the model for approximately 72 hours. We post-processed the reviews using the customization methodology described in \cite{yao2017automated} and e-mail correspondence. We call the fake reviews generated by this model LSTM-Fake reviews.
\subsection{Similarity to existing fake reviews}
\label{sec:automated}
We now want to understand how NMT-Fake* reviews compare to a) LSTM fake reviews and b) human-generated fake reviews. We do this by comparing the statistical similarity between these classes.
For `a' (Figure~\ref{fig:lstm}), we use the Yelp Challenge dataset. We trained a classifier using 5,000 random reviews from the Yelp Challenge dataset (``human'') and 5,000 fake reviews generated by LSTM-Fake. Yao et al. \cite{yao2017automated} found that character features are essential in identifying LSTM-Fake reviews. Consequently, we use character features (n-grams up to 3).
For `b' (Figure~\ref{fig:shill}), we use the ``Yelp Shills'' dataset (a combination of YelpZip \cite{mukherjee2013yelp}, YelpNYC \cite{mukherjee2013yelp}, and YelpChi \cite{rayana2015collective}). This dataset labels entries that are identified as fraudulent by Yelp's filtering mechanism (``shill reviews'')\footnote{Note that shill reviews are probably generated by human shills \cite{zhao2017news}.}. The rest are treated as genuine reviews from human users (``genuine''). We use 100,000 reviews from each category to train a classifier. We use the commercial psychometric tool LIWC2015 \cite{pennebaker2015development} to generate features.
In both cases, we use AdaBoost (with 200 shallow decision trees) for training. For testing each classifier, we use a held out test set of 1,000 reviews from both classes in each case. In addition, we test 1,000 NMT-Fake* reviews. Figures~\ref{fig:lstm} and~\ref{fig:shill} show the results. The classification threshold of 50\% is marked with a dashed line.
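The character n-gram classifier for case `a' can be sketched with scikit-learn as below; this is an illustrative minimal setup, not our exact training script.
\begin{lstlisting}[language=Python]
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Character n-grams (up to length 3) as features; AdaBoost with
# 200 shallow (depth-2) decision trees as the classifier.
classifier = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=200))

# texts: list of review strings; labels: 0 = human, 1 = LSTM-Fake.
# classifier.fit(train_texts, train_labels)
# scores = classifier.predict_proba(test_texts)[:, 1]
\end{lstlisting}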
\begin{figure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/lstm.png}
\caption{Human--LSTM reviews.}
\label{fig:lstm}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\includegraphics[width=\columnwidth]{figures/distribution_shill.png}
\caption{Genuine--Shill reviews.}
\label{fig:shill}
\end{subfigure}
\caption{
Histogram comparison of NMT-Fake* reviews with LSTM-Fake reviews and human-generated (\emph{genuine} and \emph{shill}) reviews. Figure~\ref{fig:lstm} shows that a classifier trained to distinguish ``human'' vs. LSTM-Fake cannot distinguish ``human'' vs NMT-Fake* reviews. Figure~\ref{fig:shill} shows NMT-Fake* reviews are more similar to \emph{genuine} reviews than \emph{shill} reviews.
}
\label{fig:statistical_similarity}
\end{figure}
We can see that our newly generated reviews do not share strong attributes with previously known categories of fake reviews. If anything, our fake reviews are more similar to genuine reviews than previous fake reviews are. We thus conjecture that our NMT-Fake* reviews represent a category of fake reviews that may go undetected on online review sites.
\subsection{Comparative user study}
\label{sec:comparison}
We wanted to evaluate the effectiveness of fake reviews against tech-savvy users who understand and know to expect machine-generated fake reviews. We conducted a user study with 20 participants, all with computer science education and at least one university degree. Participant demographics are shown in Table~\ref{table:amt_pop} in the Appendix. Each participant first attended a training session where they were asked to label reviews (fake and genuine) and could later compare them to the correct answers -- we call these participants \emph{experienced participants}.
No personal data was collected during the user study.
Each person was given two randomly selected sets of 30 reviews (a total of 60 reviews per person), with each review containing 10 \textendash 50 words.
Each set contained 26 (87\%) real reviews from Yelp and 4 (13\%) machine-generated reviews,
numbers chosen based on suspicious review prevalence on Yelp~\cite{mukherjee2013yelp,rayana2015collective}.
One set contained machine-generated reviews from one of the two models (NMT ($b=0.3, \lambda=-5$) or LSTM),
and the other set contained reviews from the other model, in randomized order. The number of fake reviews was revealed to each participant in the study description. Each participant was requested to mark four (4) reviews as fake.
Each review targeted a real restaurant. A screenshot of that restaurant's Yelp page was shown to each participant prior to the study. Each participant evaluated reviews for one specific, randomly selected, restaurant. An example of the first page of the user study is shown in Figure~\ref{fig:screenshot} in Appendix.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\columnwidth]{detection2.png}
\caption{Violin plots of detection rate in comparative study. Mean and standard deviations for number of detected fakes are $0.8\pm0.7$ for NMT-Fake* and $2.5\pm1.0$ for LSTM-Fake. $n=20$. A sample of random detection is shown as comparison.}
\label{fig:aalto}
\end{figure}
Figure~\ref{fig:aalto} shows the distribution of detected reviews of both types. A hypothetical random detector is shown for comparison.
NMT-Fake* reviews are significantly more difficult to detect for our experienced participants. On average, the detection rate (recall) is $20\%$ for NMT-Fake* reviews, compared to $61\%$ for LSTM-based reviews.
The precision (and F-score) is the same as the recall in our study, since participants labeled 4 fakes in each set of 30 reviews \cite{murphy2012machine}.
The distribution of the detection across participants is shown in Figure~\ref{fig:aalto}. \emph{The difference is statistically significant with confidence level $99\%$} (Welch's t-test).
We compared the detection rate of NMT-Fake* reviews to a random detector, and find that \emph{our participants' detection rate of NMT-Fake* reviews is not statistically different from random predictions at the 95\% confidence level} (Welch's t-test).
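The significance tests above correspond to Welch's unequal-variance t-test; a minimal sketch is shown below, with placeholder per-participant counts rather than our measured data.
\begin{lstlisting}[language=Python]
from scipy.stats import ttest_ind

# Number of correctly flagged fakes per participant (out of 4).
# The values below are placeholders for illustration only.
nmt_detected  = [1, 0, 1, 2, 0, 1, 1, 0]
lstm_detected = [3, 2, 2, 3, 1, 3, 2, 4]

# equal_var=False yields Welch's t-test.
t_stat, p_value = ttest_ind(nmt_detected, lstm_detected, equal_var=False)
print(t_stat, p_value)
\end{lstlisting}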
\section{Defenses}
\label{sec:detection}
We developed an AdaBoost-based classifier to detect our new fake reviews, consisting of 200 shallow decision trees (depth 2). The features we used are recorded in Table~\ref{table:features_adaboost} (Appendix).
We used word-level features based on spaCy-tokenization \cite{honnibal-johnson:2015:EMNLP} and constructed n-gram representation of POS-tags and dependency tree tags. We added readability features from NLTK~\cite{bird2004nltk}.
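A condensed sketch of this feature construction and classifier is given below; it is illustrative only (readability features are omitted here), and the helper names are ours.
\begin{lstlisting}[language=Python]
import spacy  # requires: python -m spacy download en_core_web_sm
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

nlp = spacy.load("en_core_web_sm")

def tag_string(text):
    # Represent a review by its sequence of POS tags and dependency labels
    # so that standard n-gram vectorizers can be reused on the tag stream.
    doc = nlp(text)
    return " ".join(tok.pos_ + "_" + tok.dep_ for tok in doc)

vectorizer = CountVectorizer(ngram_range=(1, 3))
detector = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                              n_estimators=200)

# X = vectorizer.fit_transform(tag_string(t) for t in train_texts)
# detector.fit(X, train_labels)
\end{lstlisting}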
\begin{figure}[ht]
\centering
\includegraphics[width=.7\columnwidth]{obf_score_fair_2.png}
\caption{
AdaBoost-based classification of NMT-Fake and human-written reviews.
Effect of varying $b$ and $\lambda$ in fake review generation.
The variant that native speakers had the most difficulty detecting is reliably detected by AdaBoost (97\% F-score).}
\label{fig:adaboost_matrix_b_lambda}
\end{figure}
Figure~\ref{fig:adaboost_matrix_b_lambda} shows our AdaBoost classifier's class-averaged F-score at detecting different kinds of fake reviews. The classifier is very effective at detecting reviews that humans have difficulty detecting. For example, the fake reviews MTurk users had the most difficulty detecting ($b=0.3, \lambda=-5$) are detected with an excellent 97\% F-score.
The most important features for the classification were counts for frequently occurring words in fake reviews (such as punctuation, pronouns, articles) as well as the readability feature ``Automated Readability Index''. We thus conclude that while NMT-Fake reviews are difficult to detect for humans, they can be well detected with the right tools.
\section{Related Work}
Kumar and Shah~\cite{kumar2018false} survey and categorize false information research. Automatically generated fake reviews are a form of \emph{opinion-based false information}, where the creator of the review may influence reader's opinions or decisions.
Yao et al. \cite{yao2017automated} presented their study on machine-generated fake reviews. Contrary to us, they investigated character-level language models, without specifying a specific context before generation. We leverage existing NMT tools to encode a specific context to the restaurant before generating reviews.
Supporting our study, Everett et al.~\cite{Everett2016Automated} found that security researchers were less likely to be fooled by Markov chain-generated Reddit comments compared to ordinary Internet users.
Diversification of NMT model outputs has been studied in \cite{li2016diversity}. The authors proposed a penalty on commonly occurring sentences (\emph{n-grams}) in order to emphasize maximum mutual information-based generation.
The authors investigated the use of NMT models in chatbot systems.
We found that unigram penalties applied to random tokens (Algorithm~\ref{alg:aug}) were easy to implement and produced sufficiently diverse responses.
\section {Discussion and Future Work}
\paragraph{What makes NMT-Fake* reviews difficult to detect?} First, NMT models allow the encoding of a relevant context for each review, which narrows down the possible choices of words that the model has to choose from. Our NMT model had a perplexity of approximately $25$, while the model of \cite{yao2017automated} had a perplexity of approximately $90$ \footnote{Personal communication with the authors}. Second, the beam search in NMT models narrows down choices to natural-looking sentences. Third, we observed that the NMT model produced \emph{better structure} in the generated sentences (i.e. a more coherent story).
\paragraph{Cost of generating reviews} With our setup, generating one review took less than one second. The cost of generation stems mainly from the overnight training. Assuming an electricity cost of 16 cents / kWh (California) and 8 hours of training, training the NMT model requires approximately 1.30 USD. This is a 90\% reduction in time compared to the state-of-the-art \cite{yao2017automated}. Furthermore, it is possible to generate both positive and negative reviews with the same model.
\paragraph{Ease of customization} We experimented with inserting specific words into the text by increasing their log likelihoods in the beam search. We noticed that the success depended on the prevalence of the word in the training set. For example, adding a +5 to \emph{Mike} in the log-likelihood resulted in approximately 10\% prevalence of this word in the reviews. An attacker can therefore easily insert specific keywords to reviews, which can increase evasion probability.
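In a beam-search decoder this amounts to adding a constant bonus to the target token's log-likelihood before each ranking step; a minimal sketch (the function name is ours) is:
\begin{lstlisting}[language=Python]
import numpy as np

def boost_keyword(log_p, keyword_id, bonus=5.0):
    # Increase the log-likelihood of one target token (e.g. "Mike")
    # before beam search ranks the candidate continuations.
    log_p = log_p.copy()
    log_p[keyword_id] += bonus
    return log_p
\end{lstlisting}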
\paragraph{Ease of testing} Our diversification scheme is applicable during \emph{generation phase}, and does not affect the training setup of the network in any way. Once the NMT model is obtained, it is easy to obtain several different variants of NMT-Fake reviews by varying parameters $b$ and $\lambda$.
\paragraph{Languages} The generation methodology is not per se language-dependent. The requirement for successful generation is that sufficient training data exists in the target language. However, our language model modifications require some knowledge of the target language's grammar to produce high-quality reviews.
\paragraph{Generalizability of detection techniques} Currently, fake reviews are not universally detectable. Our results highlight that it is difficult to claim detection performance on unseen types of fake reviews (Section~\ref{sec:automated}). We see this as an open problem that deserves more attention in fake review research.
\paragraph{Generalizability to other types of datasets} Our technique can be applied to any dataset, as long as there is sufficient training data for the NMT model. We used approximately 2.9 million reviews for this work.
\section{Conclusion}
In this paper, we showed that neural machine translation models can be used to generate fake reviews that are very effective in deceiving even experienced, tech-savvy users.
This supports anecdotal evidence \cite{national2017commission}.
Our technique is more effective than the state-of-the-art \cite{yao2017automated}.
We conclude that machine-aided fake review detection is necessary since human users are ineffective in identifying fake reviews.
We also showed that detectors trained using one type of fake reviews are not effective in identifying other types of fake reviews.
Robust detection of fake reviews is thus still an open problem.
\section*{Acknowledgments}
We thank Tommi Gr\"{o}ndahl for assistance in planning user studies and the
participants of the user study for their time and feedback. We also thank
Luiza Sayfullina for comments that improved the manuscript.
We thank the authors of \cite{yao2017automated} for answering questions about
their work.
\bibliographystyle{splncs}
\begin{thebibliography}{10}
\bibitem{yao2017automated}
Yao, Y., Viswanath, B., Cryan, J., Zheng, H., Zhao, B.Y.:
\newblock Automated crowdturfing attacks and defenses in online review systems.
\newblock In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
Communications Security, ACM (2017)
\bibitem{murphy2012machine}
Murphy, K.:
\newblock Machine learning: A probabilistic perspective.
\newblock MIT Press (2012)
\bibitem{challenge2013yelp}
Yelp:
\newblock {Yelp Challenge Dataset} (2013)
\bibitem{mukherjee2013yelp}
Mukherjee, A., Venkataraman, V., Liu, B., Glance, N.:
\newblock What yelp fake review filter might be doing?
\newblock In: Seventh International AAAI Conference on Weblogs and Social Media
(ICWSM). (2013)
\bibitem{rayana2015collective}
Rayana, S., Akoglu, L.:
\newblock Collective opinion spam detection: Bridging review networks and
metadata.
\newblock In: Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, ACM (2015)
\bibitem{o2008user}
{O'Connor}, P.:
\newblock {User-generated content and travel: A case study on Tripadvisor.com}.
\newblock Information and communication technologies in tourism 2008 (2008)
\bibitem{luca2010reviews}
Luca, M.:
\newblock {Reviews, Reputation, and Revenue: The Case of Yelp. com}.
\newblock {Harvard Business School} (2010)
\bibitem{wang2012serf}
Wang, G., Wilson, C., Zhao, X., Zhu, Y., Mohanlal, M., Zheng, H., Zhao, B.Y.:
\newblock Serf and turf: crowdturfing for fun and profit.
\newblock In: Proceedings of the 21st international conference on World Wide
Web (WWW), ACM (2012)
\bibitem{rinta2017understanding}
Rinta-Kahila, T., Soliman, W.:
\newblock Understanding crowdturfing: The different ethical logics behind the
clandestine industry of deception.
\newblock In: ECIS 2017: Proceedings of the 25th European Conference on
Information Systems. (2017)
\bibitem{luca2016fake}
Luca, M., Zervas, G.:
\newblock Fake it till you make it: Reputation, competition, and yelp review
fraud.
\newblock Management Science (2016)
\bibitem{national2017commission}
{National Literacy Trust}:
\newblock Commission on fake news and the teaching of critical literacy skills
in schools URL:
\url{https://literacytrust.org.uk/policy-and-campaigns/all-party-parliamentary-group-literacy/fakenews/}.
\bibitem{jurafsky2014speech}
Jurafsky, D., Martin, J.H.:
\newblock Speech and language processing. Volume~3.
\newblock Pearson London: (2014)
\bibitem{kingma2014adam}
Kingma, D.P., Ba, J.:
\newblock Adam: A method for stochastic optimization.
\newblock arXiv preprint arXiv:1412.6980 (2014)
\bibitem{cho2014learning}
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F.,
Schwenk, H., Bengio, Y.:
\newblock Learning phrase representations using rnn encoder--decoder for
statistical machine translation.
\newblock In: Proceedings of the 2014 Conference on Empirical Methods in
Natural Language Processing (EMNLP). (2014)
\bibitem{klein2017opennmt}
Klein, G., Kim, Y., Deng, Y., Senellart, J., Rush, A.:
\newblock Opennmt: Open-source toolkit for neural machine translation.
\newblock Proceedings of ACL, System Demonstrations (2017)
\bibitem{wu2016google}
Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun,
M., Cao, Y., Gao, Q., Macherey, K., et~al.:
\newblock Google's neural machine translation system: Bridging the gap between
human and machine translation.
\newblock arXiv preprint arXiv:1609.08144 (2016)
\bibitem{mei2017coherent}
Mei, H., Bansal, M., Walter, M.R.:
\newblock Coherent dialogue with attention-based language models.
\newblock In: AAAI. (2017) 3252--3258
\bibitem{li2016diversity}
Li, J., Galley, M., Brockett, C., Gao, J., Dolan, B.:
\newblock A diversity-promoting objective function for neural conversation
models.
\newblock In: Proceedings of NAACL-HLT. (2016)
\bibitem{rubin2006assessing}
Rubin, V.L., Liddy, E.D.:
\newblock Assessing credibility of weblogs.
\newblock In: AAAI Spring Symposium: Computational Approaches to Analyzing
Weblogs. (2006)
\bibitem{zhao2017news}
news.com.au:
\newblock {The potential of AI generated 'crowdturfing' could undermine online
reviews and dramatically erode public trust} URL:
\url{http://www.news.com.au/technology/online/security/the-potential-of-ai-generated-crowdturfing-could-undermine-online-reviews-and-dramatically-erode-public-trust/news-story/e1c84ad909b586f8a08238d5f80b6982}.
\bibitem{pennebaker2015development}
Pennebaker, J.W., Boyd, R.L., Jordan, K., Blackburn, K.:
\newblock {The development and psychometric properties of LIWC2015}.
\newblock Technical report (2015)
\bibitem{honnibal-johnson:2015:EMNLP}
Honnibal, M., Johnson, M.:
\newblock An improved non-monotonic transition system for dependency parsing.
\newblock In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing (EMNLP), ACM (2015)
\bibitem{bird2004nltk}
Bird, S., Loper, E.:
\newblock {NLTK: the natural language toolkit}.
\newblock In: Proceedings of the ACL 2004 on Interactive poster and
demonstration sessions, Association for Computational Linguistics (2004)
\bibitem{kumar2018false}
Kumar, S., Shah, N.:
\newblock False information on web and social media: A survey.
\newblock arXiv preprint arXiv:1804.08559 (2018)
\bibitem{Everett2016Automated}
Everett, R.M., Nurse, J.R.C., Erola, A.:
\newblock The anatomy of online deception: What makes automated text
convincing?
\newblock In: Proceedings of the 31st Annual ACM Symposium on Applied
Computing. SAC '16, ACM (2016)
\end{thebibliography}
\section*{Appendix}
We present basic demographics of our MTurk study and the comparative study with experienced users in Table~\ref{table:amt_pop}.
\begin{table}
\caption{User study statistics.}
\begin{center}
\begin{tabular}{ | l | c | c | }
\hline
Quality & Mechanical Turk users & Experienced users\\
\hline
Native English Speaker & Yes (20) & Yes (1) No (19) \\
Fluent in English & Yes (20) & Yes (20) \\
Age & 21-40 (17) 41-60 (3) & 21-25 (8) 26-30 (7) 31-35 (4) 41-45 (1)\\
Gender & Male (14) Female (6) & Male (17) Female (3)\\
Highest Education & High School (10) Bachelor (10) & Bachelor (9) Master (6) Ph.D. (5) \\
\hline
\end{tabular}
\label{table:amt_pop}
\end{center}
\end{table}
Table~\ref{table:openNMT-py_commands} shows a listing of the openNMT-py commands we used to create our NMT model and to generate fake reviews.
\begin{table}[t]
\caption{Listing of used openNMT-py commands.}
\begin{center}
\begin{tabular}{ | l | l | }
\hline
Phase & Bash command \\
\hline
Preprocessing & \begin{lstlisting}[language=bash]
python preprocess.py -train_src context-train.txt
-train_tgt reviews-train.txt -valid_src context-val.txt
-valid_tgt reviews-val.txt -save_data model
-lower -tgt_words_min_frequency 10
\end{lstlisting}
\\ & \\
Training & \begin{lstlisting}[language=bash]
python train.py -data model -save_model model -epochs 8
-gpuid 0 -learning_rate_decay 0.5 -optim adam
-learning_rate 0.001 -start_decay_at 3\end{lstlisting}
\\ & \\
Generation & \begin{lstlisting}[language=bash]
python translate.py -model model_acc_35.54_ppl_25.68_e8.pt
-src context-tst.txt -output pred-e8.txt -replace_unk
-verbose -max_length 50 -gpu 0
\end{lstlisting} \\
\hline
\end{tabular}
\label{table:openNMT-py_commands}
\end{center}
\end{table}
Table~\ref{table:MTurk_sub} shows the classification performance of Amazon Mechanical Turkers, separated across different categories of NMT-Fake reviews. The category that was most difficult for participants to detect ($b=0.3, \lambda=-5$) is denoted as NMT-Fake*.
\begin{table}[b]
\caption{MTurk study subclass classification reports. Classes are imbalanced in ratio 1:6. Random predictions are $p_\mathrm{human} = 86\%$ and $p_\mathrm{machine} = 14\%$, with $r_\mathrm{human} = r_\mathrm{machine} = 50\%$. Class-averaged F-scores for random predictions are $42\%$.}
\begin{center}
\begin{tabular}{ | c || c |c |c | c | }
\hline
$(b=0.3, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 73\% & 994\\
NMT-Fake & 15\% & 45\% & 22\% & 146 \\
\hline
\hline
$(b=0.3, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 86\% & 63\% & 73\% & 994\\
NMT-Fake* & 16\% & 40\% & 23\% & 171 \\
\hline
\hline
$(b=0.5, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 21\% & 55\% & 30\% & 181 \\
\hline
\hline
$(b=0.7, \lambda = -3)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 19\% & 50\% & 27\% & 170 \\
\hline
\hline
$(b=0.7, \lambda = -5)$ & Precision & Recall & F-score & Support \\ \hline
Human & 89\% & 63\% & 74\% & 994\\
NMT-Fake & 21\% & 57\% & 31\% & 174 \\
\hline
\hline
$(b=0.9, \lambda = -4)$ & Precision & Recall & F-score & Support \\ \hline
Human & 88\% & 63\% & 73\% & 994\\
NMT-Fake & 18\% & 50\% & 27\% & 164 \\
\hline
\end{tabular}
\label{table:MTurk_sub}
\end{center}
\end{table}
Figure~\ref{fig:screenshot} shows screenshots of the first two pages of our user study with experienced participants.
\begin{figure}[ht]
\centering
\includegraphics[width=1.\columnwidth]{figures/screenshot_7-3.png}
\caption{
Screenshots of the first two pages in the user study. Example 1 is a NMT-Fake* review, the rest are human-written.
}
\label{fig:screenshot}
\end{figure}
Table~\ref{table:features_adaboost} shows the features used to detect NMT-Fake reviews using the AdaBoost classifier.
\begin{table}
\caption{Features used in NMT-Fake review detector.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Feature type & Number of features \\ \hline
\hline
Readability features & 13 \\ \hline
Unique POS tags & $\sim 20$ \\ \hline
Word unigrams & 22,831 \\ \hline
1/2/3/4-grams of simple part-of-speech tags & 54,240 \\ \hline
1/2/3-grams of detailed part-of-speech tags & 112,944 \\ \hline
1/2/3-grams of syntactic dependency tags & 93,195 \\ \hline
\end{tabular}
\label{table:features_adaboost}
\end{center}
\end{table}
\end{document} | 1,006 fake reviews and 994 real reviews |
6e2ad9ad88cceabb6977222f5e090ece36aa84ea | 6e2ad9ad88cceabb6977222f5e090ece36aa84ea_0 | Q: Which baselines did they compare?
Text: Introduction
Ever since the LIME algorithm BIBREF0 , "explanation" techniques focusing on finding the importance of input features with regard to a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). In this paper, we are interested in the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance scores with regard to the model's computation and will not allude to any human understanding of the model as a result.
There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .
Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We moved from a classification task to a generative task and chose one more complex than text translation (in which we can easily find a word-to-word correspondence/importance between input and output): text summarization. We consider abstractive and informative text summarization, meaning that we write a summary "in our own words" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.
We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and uncorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping "makes sense" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more.
The Task and the Model
We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results of See et al. See2017 in order to then apply LRP to it.
Dataset and Training Task
The CNN/Daily Mail dataset BIBREF12 is a text summarization dataset adapted from the DeepMind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact "highlights" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries an average length of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit the input text to 400 words during training and prediction and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017.
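As an illustration, the truncation and padding step can be sketched as follows (this is a simplified sketch, not our exact preprocessing code; the function name is ours):

    def truncate_and_pad(tokens, max_len=400, pad_token="UNKNOWN"):
        # Truncate longer texts and pad shorter ones with the UNKNOWN token
        # so that every input has exactly max_len tokens.
        tokens = tokens[:max_len]
        return tokens + [pad_token] * (max_len - len(tokens))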
The Model
The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long Short-Term Memory (LSTM) cell BIBREF14 and the decoder a single LSTM cell with an attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end, including the word embeddings. The embedding size is 128 and the hidden state size of the LSTM cells is 254.
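A schematic PyTorch sketch of this architecture is given below for reference; it is an illustration using the sizes above, not the exact implementation we trained, and the class and method names are ours.

    import torch
    import torch.nn as nn

    class Seq2SeqSummarizer(nn.Module):
        # Bidirectional LSTM encoder, LSTM decoder cell and additive
        # (Bahdanau-style) attention, with the sizes reported above.
        def __init__(self, vocab_size=50000, emb_size=128, hidden_size=254):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_size)
            self.encoder = nn.LSTM(emb_size, hidden_size,
                                   bidirectional=True, batch_first=True)
            self.decoder = nn.LSTMCell(emb_size + 2 * hidden_size, hidden_size)
            self.att_enc = nn.Linear(2 * hidden_size, hidden_size, bias=False)
            self.att_dec = nn.Linear(hidden_size, hidden_size, bias=False)
            self.att_v = nn.Linear(hidden_size, 1, bias=False)
            self.out = nn.Linear(hidden_size, vocab_size)

        def attention(self, enc_states, dec_hidden):
            # Additive attention scores over encoder states, then a
            # weighted sum gives the context vector for the decoder.
            scores = self.att_v(torch.tanh(self.att_enc(enc_states)
                                           + self.att_dec(dec_hidden).unsqueeze(1)))
            weights = torch.softmax(scores.squeeze(-1), dim=1)
            context = (weights.unsqueeze(-1) * enc_states).sum(dim=1)
            return context, weights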
Obtained Summaries
We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems, such as incorrect reproduction of factual details, replacement of rare words with more common alternatives, or repetition of nonsense after the third sentence. We can see in Figure 1 an example of an obtained summary compared to the target one.
The "summaries" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text.
Layer-Wise Relevance Propagation
We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this importance propagated backwards the relevance. LRP has the particularity of attributing both negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result, while a negative relevance represents evidence that participated negatively in the prediction.
Mathematical Description
We initialize the relevance of the output layer to the value of the predicted class before the softmax, and we then describe locally the propagation of the relevance backwards from layer to layer. For normal neural network layers we use the form of LRP with the epsilon stabilizer BIBREF2 . We denote by $R_{i\leftarrow j}^{(l, l+1)}$ the relevance received by neuron $i$ of layer $l$ from neuron $j$ of layer $l+1$:
$$R_{i\leftarrow j}^{(l, l+1)} = \dfrac{w_{i\rightarrow j}^{l,l+1}\,\textbf{z}^l_i + \dfrac{\epsilon \, \textrm{sign}(\textbf{z}^{l+1}_j) + \textbf{b}^{l+1}_j}{D_l}}{\textbf{z}^{l+1}_j + \epsilon \cdot \textrm{sign}(\textbf{z}^{l+1}_j)} \cdot R_j^{(l+1)}$$ (Eq. 7)
where $w_{i\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\textbf{b}^{l+1}_j$ is the bias of neuron $j$ of layer $l+1$, $\textbf{z}^{l}_i$ is the activation of neuron $i$ in layer $l$, $\epsilon$ is the stabilizing term set to 0.00001, and $D_l$ is the dimension of the $l$-th layer.
The relevance of a neuron is then computed as the sum of the relevance he received from the above layer(s).
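For a standard fully connected layer, this rule can be written compactly in NumPy; the sketch below is only illustrative (the function name is ours) and follows the notation of Eq. 7:

    import numpy as np

    def lrp_epsilon_dense(z_lower, W, b, R_upper, eps=1e-5):
        # z_lower: activations of layer l, shape (D_l,)
        # W: weights from layer l to l+1, shape (D_l, D_{l+1}); b: biases of layer l+1
        # R_upper: relevance of layer l+1, shape (D_{l+1},)
        z_upper = z_lower @ W + b                        # pre-activations of layer l+1
        denom = z_upper + eps * np.sign(z_upper)         # stabilized denominator
        numer = z_lower[:, None] * W + (eps * np.sign(z_upper) + b) / len(z_lower)
        return (numer / denom * R_upper).sum(axis=1)     # relevance of layer l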
For LSTM cells we use the method from Arras et al. Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such a computation happens inside an LSTM cell, it always involves a "gate" vector and another vector containing information. The gate vector, containing only values between 0 and 1, essentially filters the second vector to allow the passing of "relevant" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all the upper layer's relevance to the "information" vector and none to the "gate" vector.
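In code, the rule for an element-wise product simply routes all incoming relevance to the information operand; the sketch below is an illustration and the names are ours:

    import numpy as np

    def lrp_elementwise_product(R_out):
        # gate * information: the gate only filters, so the full relevance
        # is assigned to the information vector and none to the gate.
        R_gate = np.zeros_like(R_out)
        R_information = R_out
        return R_gate, R_information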
Generation of the Saliency Maps
We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the end-to-end transmission from the output layer to the input through the decoder, the attention mechanism, and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance, as in Arras et al. Arras2017.
The way we generate saliency maps differs a bit from the usual context in which LRP is used, as we essentially don't have one classification but 200 (one for each word in the summary). We generate a relevance attribution for the first 50 words of the generated summary, as after this point the summaries often repeat themselves.
This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary.
Experimental results
In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings.
First Observations
The first observation is that, for a given text, the 50 saliency maps are almost identical. Indeed, each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not significantly impacting the attribution over the input text. In an experiment, we deleted the relevance propagated through the attention mechanism to the encoder and did not observe much change in the saliency maps.
This can be seen as evidence that using the attention distribution as an "explanation" of the prediction can be misleading. It is not the only information received by the decoder, and the importance it "allocates" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder, and the attention mechanism at each decoding step only changes marginally how it is used. Quantifying the difference between the attention distribution and the saliency map across multiple tasks is possible future work.
The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps in Figure 3 correspond to the summary from Figure 1 , and we don't see the word "video" highlighted in the input text, which seems to be important for the output.
This allows us to question how good the saliency maps are, in the sense that we question how well they actually represent the network's use of the input features. We will call this the truthfulness of the attribution with regard to the computation, meaning that an attribution is truthful with regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively.
Validating the Attributions
We propose to validate the saliency maps in a similar way to Arras et al. Arras2017, by incrementally deleting "important" words from the input text and observing the change in the resulting generated summaries.
We first define what "important" (and "unimportant") input words mean across the 50 saliency maps per text. Since the relevance transmitted by LRP can be positive or negative, we average the absolute value of the relevance across the saliency maps to obtain a single ranking of the most "relevant" words. The idea is that input words with negative relevance have an impact on the resulting generated word, even if they do not participate positively, while a word with a relevance close to zero should not be important at all. We did, however, also try different methods, such as averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.
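A sketch of this averaging and ranking step is shown below (an illustration only; the function name is ours):

    import numpy as np

    def rank_input_words(relevance_maps):
        # relevance_maps: shape (n_generated_words, n_input_words),
        # one LRP saliency map per generated summary word.
        importance = np.abs(relevance_maps).mean(axis=0)
        return np.argsort(-importance)  # input positions, most important first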
We incrementally delete the important words (words with the highest average) from the input and compare this to the control experiment, which consists of deleting the least important words, and we compare the degradation of the resulting summaries. We obtain mixed results: for some texts, we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4), but for other test examples we don't observe a significant difference between the two settings (see Figure 5).
One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences, but as the model generates inaccurate summaries, we do not wish to make such a statement.
This, however, allows us to say that the attributions generated for the text at the origin of the summaries in Figure 4 are truthful with regard to the network's computation and we may use them for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the generated attribution.
One interesting point is that neither saliency map looked "better" than the other, meaning that there is no apparent way of determining their truthfulness with regard to the computation without doing a quantitative validation. This brings us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way.
We implicitly defined the counterfactual case in our experiment: "were the important words in the input deleted, we would have a different summary". Such counterfactuals are however more difficult to define for image classification, for example, where it could be applying a mask over an image, or just filtering a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weigh how much we can trust them.
Conclusion
In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for the attention mechanism of Bahdanau et al. Bahdanau2014.
We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtain a ranking of the words from the most important to the least important and proceeded to delete either the most or the least important ones.
We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem not to capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counterfactual case and test it to measure how truthful the saliency maps are.
Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare them to our current results, as well as mathematically justify the averaging that we did when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency maps are a very important step and should not be overlooked. | The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254. |
6e2ad9ad88cceabb6977222f5e090ece36aa84ea | 6e2ad9ad88cceabb6977222f5e090ece36aa84ea_1 | Q: Which baselines did they compare?
Text: Introduction
Ever since the LIME algorithm BIBREF0 , "explanation" techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result.
There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .
Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.
We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more.
The Task and the Model
We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.
Dataset and Training Task
The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017.
The Model
The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254.
Obtained Summaries
We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.
The “summaries" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text.
Layer-Wise Relevance Propagation
We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction.
Mathematical Description
We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ :
$$\begin{split} R_{i\leftarrow j}^{(l, l+1)} &= \dfrac{w_{i\rightarrow j}^{l,l+1}\textbf {z}^l_i + \dfrac{\epsilon \textrm { sign}(\textbf {z}^{l+1}_j) + \textbf {b}^{l+1}_j}{D_l}}{\textbf {z}^{l+1}_j + \epsilon * \textrm { sign}(\textbf {z}^{l+1}_j)} * R_j^{l+1} \\ \end{split}$$ (Eq. 7)
where $w_{i\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.
The relevance of a neuron is then computed as the sum of the relevance he received from the above layer(s).
For LSTM cells we use the method from Arras et al.Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such computation happened inside an LSTM cell, it always involved a “gate" vector and another vector containing information. The gate vector containing only value between 0 and 1 is essentially filtering the second vector to allow the passing of “relevant" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all the upper-layer's relevance to the “information" vector and none to the “gate" vector.
Generation of the Saliency Maps
We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.
The way we generate saliency maps differs a bit from the usual context in which LRP is used as we essentially don't have one classification, but 200 (one for each word in the summary). We generate a relevance attribution for the 50 first words of the generated summary as after this point they often repeat themselves.
This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary.
Experimental results
In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings.
First Observations
The first observation that is made is that for one text, the 50 saliency maps are almost identical. Indeed each mapping highlights mainly the same input words with only slight variations of importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not impacting significantly the attribution over the input text. We deleted in an experiment the relevance propagated through the attention mechanism to the encoder and didn't observe much changes in the saliency map.
It can be seen as evidence that using the attention distribution as an “explanation" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.
The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video" highlighted in the input text, which seems to be important for the output.
This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively.
Validating the Attributions
We propose to validate the saliency maps in a similar way to Arras et al. Arras2017, by incrementally deleting “important" words from the input text and observing the change in the resulting generated summaries.
We first define what “important" (and “unimportant") input words mean across the 50 saliency maps per text. Since the relevance transmitted by LRP can be positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant" words. The idea is that input words with negative relevance still have an impact on the resulting generated word, even if they do not contribute positively, while a word with a relevance close to zero should not be important at all. We did, however, also try different methods, such as averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.
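A minimal sketch of this ranking step, assuming the 50 per-word maps are stacked into a single array, could look like:

```python
import numpy as np

def rank_input_tokens(saliency_maps):
    # saliency_maps: signed LRP relevance of shape (n_summary_words, n_input_tokens).
    # Average the absolute relevance across the per-word maps and sort the input
    # tokens from most to least "relevant".
    importance = np.abs(saliency_maps).mean(axis=0)
    order = np.argsort(importance)[::-1]
    return order, importance
```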
We incrementally delete the important words (words with the highest average) from the input and compare the degradation of the resulting summaries to a control experiment that consists of deleting the least important words instead. We obtain mixed results: for some texts, we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).
One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.
This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 is truthful with regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.
One interesting point is that neither saliency map looked “better" than the other, meaning that there is no apparent way of determining their truthfulness with regard to the computation without doing a quantitative validation. This leads us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way.
We implicitly defined the counterfactual case in our experiment: “if the important words in the input were deleted, we would obtain a different summary". Such counterfactuals are however more difficult to define for image classification, for example, where it could be applying a mask over an image, or just filtering out a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weigh how much we can trust them.
Conclusion
In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.
We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtained a ranking of the words from most important to least important and proceeded to delete either the most or the least important ones.
We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.
Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare them to our current results, as well as mathematically justify the averaging we performed when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency maps are important steps and should not be overlooked. | The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254.
aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5 | aacb0b97aed6fc6a8b471b8c2e5c4ddb60988bf5_0 | Q: How many attention layers are there in their model?
Text: Introduction
Ever since the LIME algorithm BIBREF0 , "explanation" techniques focusing on finding the importance of input features with regard to a specific prediction have soared and we now have many ways of producing saliency maps (also called heat-maps because of the way we like to visualize them). In this paper we are interested in the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would first like to set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are closer to attribution, which is only one part of the human explanation process BIBREF1 . We will therefore prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score with regard to the model's computation and will not allude to any human understanding of the model as a result.
There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .
Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.
We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more.
The Task and the Model
We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.
Dataset and Training Task
The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017.
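For illustration, the truncation and padding step described above could be sketched as follows; the `word2id` vocabulary and the `UNKNOWN` token name are assumptions, not the authors' exact preprocessing code:

```python
def preprocess(tokens, word2id, max_len=400, unk="UNKNOWN"):
    # Truncate long articles and pad short ones with the UNKNOWN token, then map
    # words to ids in the 50k vocabulary (out-of-vocabulary words also map to UNKNOWN).
    tokens = tokens[:max_len] + [unk] * max(0, max_len - len(tokens))
    unk_id = word2id[unk]
    return [word2id.get(w, unk_id) for w in tokens]
```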
The Model
The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254.
Obtained Summaries
We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.
The “summaries" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text.
Layer-Wise Relevance Propagation
We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction.
Mathematical Description
We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ :
$$\begin{split} R_{i\leftarrow j}^{(l, l+1)} &= \dfrac{w_{i\rightarrow j}^{l,l+1}\textbf {z}^l_i + \dfrac{\epsilon \textrm { sign}(\textbf {z}^{l+1}_j) + \textbf {b}^{l+1}_j}{D_l}}{\textbf {z}^{l+1}_j + \epsilon * \textrm { sign}(\textbf {z}^{l+1}_j)} * R_j^{l+1} \\ \end{split}$$ (Eq. 7)
where $w_{i\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.
The relevance of a neuron is then computed as the sum of the relevance it received from the layer(s) above.
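As an illustration of Eq. 7 for a fully connected layer, a numpy sketch (not the authors' code) could be:

```python
import numpy as np

def lrp_epsilon_dense(R_upper, W, b, z_lower, eps=1e-5):
    # Propagate relevance through a dense layer z_upper = W.T @ z_lower + b with
    # the epsilon rule. W has shape (d_lower, d_upper); R_upper has shape (d_upper,).
    d_lower = z_lower.shape[0]
    z_upper = W.T @ z_lower + b
    denom = z_upper + eps * np.sign(z_upper)
    # Numerator of Eq. 7 for every pair (i, j).
    num = W * z_lower[:, None] + (eps * np.sign(z_upper) + b)[None, :] / d_lower
    messages = num / denom[None, :] * R_upper[None, :]
    # Each lower neuron sums the messages it receives from the upper layer.
    return messages.sum(axis=1)
```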
For LSTM cells we use the method from Arras et al. Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such a computation happens inside an LSTM cell, it always involves a “gate" vector and another vector containing information. The gate vector, which contains only values between 0 and 1, essentially filters the second vector to allow the passing of “relevant" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all of the upper layer's relevance to the “information" vector and none to the “gate" vector.
Generation of the Saliency Maps
We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.
The way we generate saliency maps differs a bit from the usual context in which LRP is used, as we essentially don't have one classification but 200 (one for each word in the summary). We generate a relevance attribution for the first 50 words of the generated summary, as after this point the summaries often repeat themselves.
This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary.
Experimental results
In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings.
First Observations
The first observation is that, for a given text, the 50 saliency maps are almost identical. Indeed, each mapping highlights mainly the same input words with only slight variations in importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not significantly impacting the attribution over the input text. In one experiment we deleted the relevance propagated through the attention mechanism to the encoder and did not observe significant changes in the saliency map.
It can be seen as evidence that using the attention distribution as an “explanation" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.
The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video" highlighted in the input text, which seems to be important for the output.
This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively.
Validating the Attributions
We propose to validate the saliency maps in a similar way to Arras et al. Arras2017, by incrementally deleting “important" words from the input text and observing the change in the resulting generated summaries.
We first define what “important" (and “unimportant") input words mean across the 50 saliency maps per text. Since the relevance transmitted by LRP can be positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant" words. The idea is that input words with negative relevance still have an impact on the resulting generated word, even if they do not contribute positively, while a word with a relevance close to zero should not be important at all. We did, however, also try different methods, such as averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.
We incrementally delete the important words (words with the highest average) from the input and compare the degradation of the resulting summaries to a control experiment that consists of deleting the least important words instead. We obtain mixed results: for some texts, we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).
One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.
This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 is truthful with regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.
One interesting point is that neither saliency map looked “better" than the other, meaning that there is no apparent way of determining their truthfulness with regard to the computation without doing a quantitative validation. This leads us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way.
We implicitly defined the counterfactual case in our experiment: “if the important words in the input were deleted, we would obtain a different summary". Such counterfactuals are however more difficult to define for image classification, for example, where it could be applying a mask over an image, or just filtering out a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weigh how much we can trust them.
Conclusion
In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.
We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtained a ranking of the words from most important to least important and proceeded to delete either the most or the least important ones.
We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.
Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare them to our current results, as well as mathematically justify the averaging we performed when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency maps are important steps and should not be overlooked. | one
710c1f8d4c137c8dad9972f5ceacdbf8004db208 | 710c1f8d4c137c8dad9972f5ceacdbf8004db208_0 | Q: Is the explanation from saliency map correct?
Text: Introduction
Ever since the LIME algorithm BIBREF0 , "explanation" techniques focusing on finding the importance of input features with regard to a specific prediction have soared and we now have many ways of producing saliency maps (also called heat-maps because of the way we like to visualize them). In this paper we are interested in the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would first like to set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are closer to attribution, which is only one part of the human explanation process BIBREF1 . We will therefore prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score with regard to the model's computation and will not allude to any human understanding of the model as a result.
There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 .
Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP.
We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more.
The Task and the Model
We present in this section the baseline model from See et al. See2017 trained on the CNN/Daily Mail dataset. We reproduce the results from See et al. See2017 to then apply LRP on it.
Dataset and Training Task
The CNN/Daily mail dataset BIBREF12 is a text summarization dataset adapted from the Deepmind question-answering dataset BIBREF13 . It contains around three hundred thousand news articles coupled with summaries of about three sentences. These summaries are in fact “highlights" of the articles provided by the media themselves. Articles have an average length of 780 words and the summaries of 50 words. We had 287 000 training pairs and 11 500 test pairs. Similarly to See et al. See2017, we limit during training and prediction the input text to 400 words and generate summaries of 200 words. We pad the shorter texts using an UNKNOWN token and truncate the longer texts. We embed the texts and summaries using a vocabulary of size 50 000, thus recreating the same parameters as See et al. See2017.
The Model
The baseline model is a deep sequence-to-sequence encoder/decoder model with attention. The encoder is a bidirectional Long-Short Term Memory(LSTM) cell BIBREF14 and the decoder a single LSTM cell with attention mechanism. The attention mechanism is computed as in BIBREF9 and we use a greedy search for decoding. We train end-to-end including the words embeddings. The embedding size used is of 128 and the hidden state size of the LSTM cells is of 254.
Obtained Summaries
We train the 21 350 992 parameters of the network for about 60 epochs until we achieve results that are qualitatively equivalent to the results of See et al. See2017. We obtain summaries that are broadly relevant to the text but do not match the target summaries very well. We observe the same problems such as wrong reproduction of factual details, replacing rare words with more common alternatives or repeating non-sense after the third sentence. We can see in Figure 1 an example of summary obtained compared to the target one.
The “summaries" we generate are far from being valid summaries of the information in the texts but are sufficient to look at the attribution that LRP will give us. They pick up the general subject of the original text.
Layer-Wise Relevance Propagation
We present in this section the Layer-Wise Relevance Propagation (LRP) BIBREF2 technique that we used to attribute importance to the input features, together with how we adapted it to our model and how we generated the saliency maps. LRP redistributes the output of the model from the output layer to the input by transmitting information backwards through the layers. We call this propagated backwards importance the relevance. LRP has the particularity to attribute negative and positive relevance: a positive relevance is supposed to represent evidence that led to the classifier's result while negative relevance represents evidence that participated negatively in the prediction.
Mathematical Description
We initialize the relevance of the output layer to the value of the predicted class before softmax and we then describe locally the propagation backwards of the relevance from layer to layer. For normal neural network layers we use the form of LRP with epsilon stabilizer BIBREF2 . We write down $R_{i\leftarrow j}^{(l, l+1)}$ the relevance received by the neuron $i$ of layer $l$ from the neuron $j$ of layer $l+1$ :
$$\begin{split} R_{i\leftarrow j}^{(l, l+1)} &= \dfrac{w_{i\rightarrow j}^{l,l+1}\textbf {z}^l_i + \dfrac{\epsilon \textrm { sign}(\textbf {z}^{l+1}_j) + \textbf {b}^{l+1}_j}{D_l}}{\textbf {z}^{l+1}_j + \epsilon * \textrm { sign}(\textbf {z}^{l+1}_j)} * R_j^{l+1} \\ \end{split}$$ (Eq. 7)
where $w_{i\rightarrow j}^{l,l+1}$ is the network's weight parameter set during training, $\textbf {b}^{l+1}_j$ is the bias for neuron $j$ of layer $l+1$ , $\textbf {z}^{l}_i$ is the activation of neuron $i$ on layer $l$ , $\epsilon $ is the stabilizing term set to 0.00001 and $D_l$ is the dimension of the $l$ -th layer.
The relevance of a neuron is then computed as the sum of the relevance it received from the layer(s) above.
For LSTM cells we use the method from Arras et al. Arras2017 to solve the problem posed by the element-wise multiplications of vectors. Arras et al. noted that when such a computation happens inside an LSTM cell, it always involves a “gate" vector and another vector containing information. The gate vector, which contains only values between 0 and 1, essentially filters the second vector to allow the passing of “relevant" information. Considering this, when we propagate relevance through an element-wise multiplication operation, we give all of the upper layer's relevance to the “information" vector and none to the “gate" vector.
Generation of the Saliency Maps
We use the same method to transmit relevance through the attention mechanism back to the encoder because Bahdanau's attention BIBREF9 uses element-wise multiplications as well. We depict in Figure 2 the transmission end-to-end from the output layer to the input through the decoder, attention mechanism and then the bidirectional encoder. We then sum up the relevance on the word embedding to get the token's relevance as Arras et al. Arras2017.
The way we generate saliency maps differs a bit from the usual context in which LRP is used, as we essentially don't have one classification but 200 (one for each word in the summary). We generate a relevance attribution for the first 50 words of the generated summary, as after this point the summaries often repeat themselves.
This means that for each text we obtain 50 different saliency maps, each one supposed to represent the relevance of the input for a specific generated word in the summary.
Experimental results
In this section, we present our results from extracting attributions from the sequence-to-sequence model trained for abstractive text summarization. We first have to discuss the difference between the 50 different saliency maps we obtain and then we propose a protocol to validate the mappings.
First Observations
The first observation is that, for a given text, the 50 saliency maps are almost identical. Indeed, each mapping highlights mainly the same input words with only slight variations in importance. We can see in Figure 3 an example of two nearly identical attributions for two distant and unrelated words of the summary. The saliency map generated using LRP is also uncorrelated with the attention distribution that participated in the generation of the output word. The attention distribution changes drastically between the words in the generated summary while not significantly impacting the attribution over the input text. In one experiment we deleted the relevance propagated through the attention mechanism to the encoder and did not observe significant changes in the saliency map.
It can be seen as evidence that using the attention distribution as an “explanation" of the prediction can be misleading. It is not the only information received by the decoder and the importance it “allocates" to this attention state might be very low. What seems to happen in this application is that most of the information used is transmitted from the encoder to the decoder and the attention mechanism at each decoding step just changes marginally how it is used. Quantifying the difference between attention distribution and saliency map across multiple tasks is a possible future work.
The second observation we can make is that the saliency map doesn't seem to highlight the right things in the input for the summary it generates. The saliency maps on Figure 3 correspond to the summary from Figure 1 , and we don't see the word “video" highlighted in the input text, which seems to be important for the output.
This allows us to question how good the saliency maps are in the sense that we question how well they actually represent the network's use of the input features. We will call that truthfulness of the attribution in regard to the computation, meaning that an attribution is truthful in regard to the computation if it actually highlights the important input features that the network attended to during prediction. We proceed to measure the truthfulness of the attributions by validating them quantitatively.
Validating the Attributions
We propose to validate the saliency maps in a similar way to Arras et al. Arras2017, by incrementally deleting “important" words from the input text and observing the change in the resulting generated summaries.
We first define what “important" (and “unimportant") input words mean across the 50 saliency maps per text. Since the relevance transmitted by LRP can be positive or negative, we average the absolute value of the relevance across the saliency maps to obtain one ranking of the most “relevant" words. The idea is that input words with negative relevance still have an impact on the resulting generated word, even if they do not contribute positively, while a word with a relevance close to zero should not be important at all. We did, however, also try different methods, such as averaging the raw relevance or averaging a scaled absolute value where negative relevance is scaled down by a constant factor. The absolute value average seemed to deliver the best results.
We incrementally delete the important words (words with the highest average) from the input and compare the degradation of the resulting summaries to a control experiment that consists of deleting the least important words instead. We obtain mixed results: for some texts, we observe a quick degradation when deleting important words that is not observed when deleting unimportant words (see Figure 4 ), but for other test examples we don't observe a significant difference between the two settings (see Figure 5 ).
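A sketch of this deletion protocol could look like the following; the `summarize` function, standing in for a forward pass of the trained model, and `ranked_ids`, the importance ranking described above, are assumed placeholders:

```python
def deletion_experiment(tokens, ranked_ids, summarize, k_values=(1, 2, 3, 5, 10)):
    # Delete the k most important words (or, for the control, the k least important)
    # and regenerate the summary after each deletion.
    results = {}
    for k in k_values:
        drop_top = set(ranked_ids[:k])
        drop_bottom = set(ranked_ids[-k:])
        results[k] = {
            "delete_important": summarize([t for i, t in enumerate(tokens) if i not in drop_top]),
            "delete_unimportant": summarize([t for i, t in enumerate(tokens) if i not in drop_bottom]),
        }
    return results
```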
One might argue that the second summary in Figure 5 is better than the first one as it makes better sentences but as the model generates inaccurate summaries, we do not wish to make such a statement.
This however allows us to say that the attribution generated for the text at the origin of the summaries in Figure 4 is truthful with regard to the network's computation and we may use it for further studies of the example, whereas for the text at the origin of Figure 5 we shouldn't draw any further conclusions from the attribution generated.
One interesting point is that neither saliency map looked “better" than the other, meaning that there is no apparent way of determining their truthfulness with regard to the computation without doing a quantitative validation. This leads us to believe that even in simpler tasks, the saliency maps might make sense to us (for example highlighting the animal in an image classification task) without actually representing what the network really attended to, or in what way.
We implicitly defined the counterfactual case in our experiment: “if the important words in the input were deleted, we would obtain a different summary". Such counterfactuals are however more difficult to define for image classification, for example, where it could be applying a mask over an image, or just filtering out a colour or a pattern. We believe that defining a counterfactual and testing it allows us to measure and evaluate the truthfulness of the attributions and thus weigh how much we can trust them.
Conclusion
In this work, we have implemented and applied LRP to a sequence-to-sequence model trained on a more complex task than usual: text summarization. We used previous work to solve the difficulties posed by LRP in LSTM cells and adapted the same technique for Bahdanau et al. Bahdanau2014 attention mechanism.
We observed a peculiar behaviour of the saliency maps for the words in the output summary: they are almost all identical and seem uncorrelated with the attention distribution. We then proceeded to validate our attributions by averaging the absolute value of the relevance across the saliency maps. We obtained a ranking of the words from most important to least important and proceeded to delete either the most or the least important ones.
We showed that in some cases the saliency maps are truthful to the network's computation, meaning that they do highlight the input features that the network focused on. But we also showed that in some cases the saliency maps seem to not capture the important input features. This brought us to discuss the fact that these attributions are not sufficient by themselves, and that we need to define the counter-factual case and test it to measure how truthful the saliency maps are.
Future work would look into the saliency maps generated by applying LRP to pointer-generator networks and compare them to our current results, as well as mathematically justify the averaging we performed when validating our saliency maps. Some additional work is also needed on the validation of the saliency maps with counterfactual tests. The exploitation and evaluation of saliency maps are important steps and should not be overlooked. | No
47726be8641e1b864f17f85db9644ce676861576 | 47726be8641e1b864f17f85db9644ce676861576_0 | Q: How is embedding quality assessed?
Text: Introduction
Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.
The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.
In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.
We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms.
Background ::: Geometric Bias Mitigation
Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$ where a subspace $B$ is defined by k orthogonal unit vectors $B = {b_1,...,b_k}$.
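For illustration, the projection and the standard geometric debiasing step can be sketched as follows, assuming `B` stores the k orthonormal basis vectors as rows:

```python
import numpy as np

def project_onto_subspace(v, B):
    # v_B = sum_j (v . b_j) b_j, with B of shape (k, d) holding orthonormal rows b_j.
    return (B @ v) @ B

def geometric_debias(v, B):
    # Remove the component of v that lies in the bias subspace.
    return v - project_onto_subspace(v, B)
```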
Background ::: Geometric Bias Mitigation ::: WEAT
The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:
Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \in A} cos(w,a) - mean_{b \in B} cos(w,b)$, and $X$, $Y$, $A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates that $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.
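A sketch of the WEAT effect size is given below; `emb` is an assumed word-to-vector lookup, and the pooled standard deviation over $X \cup Y$ in the denominator follows the original definition:

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B, emb):
    def s(w):
        return (np.mean([cosine(emb[w], emb[a]) for a in A])
                - np.mean([cosine(emb[w], emb[b]) for b in B]))
    s_x = [s(w) for w in X]
    s_y = [s(w) for w in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)
```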
Background ::: Geometric Bias Mitigation ::: RIPA
The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.
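A sketch of the RIPA computation as described above follows; the relation vector is built from the first principal component of the pair differences, and whether the differences are mean-centred before the decomposition is an assumption:

```python
import numpy as np

def relation_vector(pairs, emb):
    # First principal component of the differences between gender word pairs.
    diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
    diffs = diffs - diffs.mean(axis=0)
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]

def ripa(word, b_vec, emb):
    # Absolute relational inner product association of the word with the relation vector.
    return abs(emb[word] @ b_vec)
```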
Background ::: Geometric Bias Mitigation ::: Neighborhood Metric
The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.
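A sketch of the neighborhood metric is shown below; using cosine similarity to find the k nearest socially-biased neighbours is an assumption about the similarity measure:

```python
import numpy as np

def neighborhood_bias(word, male_biased, female_biased, emb, k=100):
    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [(w, 1) for w in male_biased] + [(w, 0) for w in female_biased]
    nearest = sorted(candidates, key=lambda c: -cosine(emb[word], emb[c[0]]))[:k]
    # 0.5 indicates an unbiased neighbourhood; values near 0 or 1 indicate strong bias.
    return sum(is_male for _, is_male in nearest) / k
```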
A Probabilistic Framework for Bias Mitigation
Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender. BIBREF7. Similarly, when considering a probabilistic definition of unbiased in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.
Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method.
A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation
This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:
where $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen), \dots \rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.
At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though exactly calculating the conditional probability requires summing over the conditional probabilities of all words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $ \log p(w_O|w_I) \approx \log \sigma ({v^{\prime }_{wo}}^T v_{wI}) + \sum _{i=1}^{k} [\log {\sigma ({{-v^{\prime }_{wi}}^T v_{wI}})}] $.
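A sketch of this objective is given below. Since the exact loss equation is not reproduced in the text above, penalising the absolute difference of the negative-sampling log-probability estimates is an assumption about its form; `out_vecs`/`in_vecs` are the output and input SGNS embedding matrices and `neg_ids` the sampled negative words:

```python
import numpy as np

def log_p_sgns(w_out, w_in, out_vecs, in_vecs, neg_ids):
    # Negative-sampling estimate of log p(w_out | w_in) from the expression above.
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    pos = np.log(sig(out_vecs[w_out] @ in_vecs[w_in]))
    neg = sum(np.log(sig(-out_vecs[i] @ in_vecs[w_in])) for i in neg_ids)
    return pos + neg

def probabilistic_bias_loss(targets, pairs, out_vecs, in_vecs, neg_ids):
    # Penalise the discrepancy between p(t | m) and p(t | f) over the pair set P.
    return sum(abs(log_p_sgns(t, m, out_vecs, in_vecs, neg_ids)
                   - log_p_sgns(t, f, out_vecs, in_vecs, neg_ids))
               for t in targets for (m, f) in pairs)
```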
A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation
Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.
Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:
where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance).
Experiments
We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.
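A minimal sketch of the fine-tuning loop described above, with a single embedding layer initialised from the pre-trained vectors; `emb0`, `candidate_ids`, and `bias_loss` are placeholders for the pre-trained matrix, the batch of target-word indices, and the losses defined earlier:

```python
import torch

# emb0: tensor of the original fastText vectors for the first 22000 words.
layer = torch.nn.Embedding.from_pretrained(emb0.clone(), freeze=False)
optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)

for step in range(1000):                       # fixed number of iterations
    batch = torch.tensor(candidate_ids)        # indices of the words to debias
    loss = bias_loss(layer(batch), layer.weight)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```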
We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.
We compare this method of bias mitigation with no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN") and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably well on both metrics.
Discussion
We proposed a simple method of bias mitigation based on this probabilistic notions of fairness, and showed that it leads to promising results in various benchmark bias mitigation tasks. Future work should include considering a more rigorous definition and non-binary of bias and experimenting with various embedding algorithms and network architectures.
Discussion ::: Acknowledgements
The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work.
Experiment Notes
For Equation 4, as described in the original work, each of the $k$ sample words $w_i$ is drawn from the corpus using the unigram distribution raised to the 3/4 power.
For reference, the most male socially-biased words include words such as:’john’, ’jr’, ’mlb’, ’dick’, ’nfl’, ’cfl’, ’sgt’, ’abbot’, ’halfback’, ’jock’, ’mike’, ’joseph’,while the most female socially-biased words include words such as:’feminine’, ’marital’, ’tatiana’, ’pregnancy’, ’eva’, ’pageant’, ’distress’, ’cristina’, ’ida’, ’beauty’, ’sexuality’,’fertility’
Professions
'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'
WEAT Word Sets
Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B
Flowers vs Insects / Pleasant vs Unpleasant
X: "aster", "clover", "hyacinth", "marigold", "poppy", "azalea", "crocus", "iris", "orchid", "rose", "bluebell", "daffodil", "lilac", "pansy", "tulip", "buttercup", "daisy", "lily", "peony", "violet", "carnation", "gladiola", "magnolia", "petunia", "zinnia"
Y: "ant", "caterpillar", "flea", "locust", "spider", "bedbug", "centipede", "fly", "maggot", "tarantula", "bee", "cockroach", "gnat", "mosquito", "termite", "beetle", "cricket", "hornet", "moth", "wasp", "blackfly", "dragonfly", "horsefly", "roach", "weevil"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Instruments vs Weapons / Pleasant vs Unpleasant:
X: "bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo", "flute", "horn", "saxophone", "violin"
Y: "arrow", "club", "gun", "missile", "spear", "ax", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade", "mace", "slingshot", "whip"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Male vs Female / Career vs Family:
X: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
Y: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
A: "executive", "management", "professional", "corporation", "salary", "office", "business", "career", "industry", "company", "promotion", "profession", "CEO", "manager", "coworker", "entrepreneur"
B: "home", "parents", "children", "family", "cousins", "marriage", "wedding", "relatives", "grandparents", "grandchildren", "nurture", "child", "toddler", "infant", "teenager"
Math vs Art / Male vs Female:
X: "math", "algebra", "geometry", "calculus", "equations", "computation", "numbers", "addition", "trigonometry", "arithmetic", "logic", "proofs", "multiplication", "mathematics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
Science vs Art / Male8 vs Female8:
X:"science", "technology", "physics", "chemistry", "Einstein", "NASA", "experiment", "astronomy", "biology", "aeronautics", "mechanics", "thermodynamics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend" | We compare this method of bias mitigation with the no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN") and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according the the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge this is the first bias mitigation method to perform reasonably both on both metrics. |
47726be8641e1b864f17f85db9644ce676861576 | 47726be8641e1b864f17f85db9644ce676861576_1 | Q: How is embedding quality assessed?
Text: Introduction
Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.
The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.
In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.
We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms.
Background ::: Geometric Bias Mitigation
Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$ where a subspace $B$ is defined by $k$ orthogonal unit vectors $B = \lbrace b_1,...,b_k\rbrace $.
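A minimal numpy sketch of this projection (an illustration added here, not code from the paper); it assumes the basis vectors $b_1,...,b_k$ are already orthonormal and stored as the rows of a matrix `B`:

```python
import numpy as np

def project_onto_subspace(v, B):
    """v_B = sum_j (v . b_j) b_j for an orthonormal basis given as the rows of B."""
    coeffs = B @ v          # inner products v . b_j, shape (k,)
    return B.T @ coeffs     # recombine into the subspace component, shape (d,)

def remove_bias_component(v, B):
    """Hard-debiasing step used by geometric bias mitigation."""
    return v - project_onto_subspace(v, B)
```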
Background ::: Geometric Bias Mitigation ::: WEAT
The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:
Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \in A} cos(w,a) - mean_{b \in B} cos(w,b)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.
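For illustration, the test statistic and effect size can be computed as below (a sketch, not the authors' code; the pooled-standard-deviation normalization follows the standard WEAT formulation of Caliskan et al. and is an assumption, since the displayed equation is not reproduced above):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_s(w, A, B):
    """s(w, A, B) = mean_a cos(w, a) - mean_b cos(w, b)."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Difference of mean test statistics over X and Y, divided by the pooled
    standard deviation; values fall in [-2, 2]."""
    sx = [weat_s(x, A, B) for x in X]
    sy = [weat_s(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)
```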
Background ::: Geometric Bias Mitigation ::: RIPA
The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.
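A short numpy sketch of the relation vector and the reported $|$RIPA$|$ value (illustrative only; `pairs` is assumed to hold (male_vector, female_vector) tuples for the gender word pairs):

```python
import numpy as np

def relation_vector(pairs):
    """First principal component of the differences between gender word pairs."""
    diffs = np.stack([m - f for m, f in pairs])
    diffs -= diffs.mean(axis=0)
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]                       # unit-length principal direction b

def ripa(w, b):
    """Absolute RIPA value of word vector w with respect to relation vector b."""
    return abs(float(w @ b))
```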
Background ::: Geometric Bias Mitigation ::: Neighborhood Metric
The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.
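The metric can be sketched as follows (an illustration, not the authors' code; the 500 most male- and female-biased word vectors are assumed to be precomputed as described above):

```python
import numpy as np

def neighborhood_bias(t, male_biased, female_biased, k=100):
    """Proportion of male socially-biased words among the k nearest
    socially-biased neighbors of target vector t; 0.5 means unbiased."""
    cand = np.vstack([male_biased, female_biased])
    labels = np.array([1] * len(male_biased) + [0] * len(female_biased))
    sims = cand @ t / (np.linalg.norm(cand, axis=1) * np.linalg.norm(t))
    nearest = np.argsort(-sims)[:k]    # indices of the k most similar biased words
    return labels[nearest].mean()
```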
A Probabilistic Framework for Bias Mitigation
Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7. Similarly, when considering a probabilistic definition of unbiasedness in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.
Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method.
A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation
This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:
where $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen), \dots \rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.
At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though computing the conditional probability exactly requires summing over the conditional probabilities of all words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $ \log p(w_O|w_I) \approx \log \sigma ({v^{\prime }_{wo}}^T v_{wI}) + \sum _{i=1}^{k} [\log {\sigma ({{-v^{\prime }_{wi}}^T v_{wI}})}] $.
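A PyTorch sketch of this objective is given below. The exact penalty is not reproduced in the text above, so the absolute difference of log-probabilities is an assumption; `log_prob(t, c)` stands for an estimate of $\log p(t|c)$ such as the negative-sampling expression just given.

```python
import torch

def probabilistic_bias_loss(log_prob, targets, pairs):
    """Penalize |log p(t|a) - log p(t|b)| over all target words t and
    protected-attribute pairs (a, b) such as (he, she)."""
    loss = torch.zeros(())
    for t in targets:
        for a, b in pairs:
            loss = loss + torch.abs(log_prob(t, a) - log_prob(t, b))
    return loss
```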
A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation
Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.
Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:
where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance).
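A sketch of this loss under the same assumptions (the precise functional form is not shown in the text, so an elementwise $L1$ difference between the two sorted score vectors is assumed):

```python
import torch

def neighborhood_bias_loss(log_prob, t, male_neighbors, female_neighbors):
    """Compare log p(t|m_i) with log p(t|f_i) for the k/2 nearest male- and
    female-biased neighbors, each sorted from smallest to largest."""
    pm = torch.stack([log_prob(t, m) for m in male_neighbors]).sort().values
    pf = torch.stack([log_prob(t, f) for f in female_neighbors]).sort().values
    return torch.sum(torch.abs(pm - pf))
```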
Experiments
We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.
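A minimal version of this fine-tuning loop might look as follows (a sketch with stand-in data and an illustrative loss; the hyperparameters are not taken from the paper):

```python
import torch
import torch.nn as nn

V, d = 22000, 300                          # vocabulary size and dimension (illustrative)
pretrained = torch.randn(V, d)             # stand-in for the fastText vectors
candidate_indices = torch.arange(500)      # stand-in for the debiasing candidate list

def bias_loss(emb, batch):                 # stand-in for the losses defined above
    return emb(batch).pow(2).mean()

emb = nn.Embedding.from_pretrained(pretrained.clone(), freeze=False)
opt = torch.optim.Adam(emb.parameters(), lr=1e-3)

for step in range(1000):                   # fixed number of iterations
    batch = candidate_indices[torch.randperm(len(candidate_indices))[:64]]
    loss = bias_loss(emb, batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # in practice, periodically evaluate on the embedding benchmarks and stop
    # early if benchmark performance drops noticeably
```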
We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.
We compare this method of bias mitigation with no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN"), and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge, this is the first bias mitigation method to perform reasonably well on both metrics.
Discussion
We proposed a simple method of bias mitigation based on these probabilistic notions of fairness, and showed that it leads to promising results on various benchmark bias mitigation tasks. Future work should include considering a more rigorous, non-binary definition of bias and experimenting with various embedding algorithms and network architectures.
Discussion ::: Acknowledgements
The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work.
Experiment Notes
For Equation 4, as described in the original work, the $k$ sample words $w_i$ are drawn from the corpus using the unigram distribution raised to the 3/4 power.
For reference, the most male socially-biased words include words such as: 'john', 'jr', 'mlb', 'dick', 'nfl', 'cfl', 'sgt', 'abbot', 'halfback', 'jock', 'mike', 'joseph', while the most female socially-biased words include words such as: 'feminine', 'marital', 'tatiana', 'pregnancy', 'eva', 'pageant', 'distress', 'cristina', 'ida', 'beauty', 'sexuality', 'fertility'.
Professions
'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'
WEAT Word Sets
Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B
Flowers vs Insects / Pleasant vs Unpleasant
X: "aster", "clover", "hyacinth", "marigold", "poppy", "azalea", "crocus", "iris", "orchid", "rose", "bluebell", "daffodil", "lilac", "pansy", "tulip", "buttercup", "daisy", "lily", "peony", "violet", "carnation", "gladiola", "magnolia", "petunia", "zinnia"
Y: "ant", "caterpillar", "flea", "locust", "spider", "bedbug", "centipede", "fly", "maggot", "tarantula", "bee", "cockroach", "gnat", "mosquito", "termite", "beetle", "cricket", "hornet", "moth", "wasp", "blackfly", "dragonfly", "horsefly", "roach", "weevil"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Instruments vs Weapons / Pleasant vs Unpleasant:
X: "bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo", "flute", "horn", "saxophone", "violin"
Y: "arrow", "club", "gun", "missile", "spear", "ax", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade", "mace", "slingshot", "whip"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Male vs Female / Career vs Family:
X: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
Y: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
A: "executive", "management", "professional", "corporation", "salary", "office", "business", "career", "industry", "company", "promotion", "profession", "CEO", "manager", "coworker", "entrepreneur"
B: "home", "parents", "children", "family", "cousins", "marriage", "wedding", "relatives", "grandparents", "grandchildren", "nurture", "child", "toddler", "infant", "teenager"
Math vs Art / Male vs Female:
X: "math", "algebra", "geometry", "calculus", "equations", "computation", "numbers", "addition", "trigonometry", "arithmetic", "logic", "proofs", "multiplication", "mathematics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
Science vs Art / Male8 vs Female8:
X:"science", "technology", "physics", "chemistry", "Einstein", "NASA", "experiment", "astronomy", "biology", "aeronautics", "mechanics", "thermodynamics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend" | Unanswerable |
8958465d1eaf81c8b781ba4d764a4f5329f026aa | 8958465d1eaf81c8b781ba4d764a4f5329f026aa_0 | Q: What are the three measures of bias which are reduced in experiments?
Text: Introduction
Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.
The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.
In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.
We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms.
Background ::: Geometric Bias Mitigation
Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$ where a subspace $B$ is defined by $k$ orthogonal unit vectors $B = \lbrace b_1,...,b_k\rbrace $.
Background ::: Geometric Bias Mitigation ::: WEAT
The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:
Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \in A} cos(w,a) - mean_{b \in B} cos(w,b)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.
Background ::: Geometric Bias Mitigation ::: RIPA
The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.
Background ::: Geometric Bias Mitigation ::: Neighborhood Metric
The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.
A Probabilistic Framework for Bias Mitigation
Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7. Similarly, when considering a probabilistic definition of unbiasedness in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.
Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method.
A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation
This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:
where $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen), \dots \rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.
At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though computing the conditional probability exactly requires summing over the conditional probabilities of all words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $ \log p(w_O|w_I) \approx \log \sigma ({v^{\prime }_{wo}}^T v_{wI}) + \sum _{i=1}^{k} [\log {\sigma ({{-v^{\prime }_{wi}}^T v_{wI}})}] $.
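For illustration, this estimate can be written directly in PyTorch (a sketch; when only a single pre-trained embedding matrix is available, the same vectors may be reused for the input and output roles, which is an assumption rather than the paper's stated setup):

```python
import torch
import torch.nn.functional as F

def sgns_log_prob(v_in, v_out, v_neg):
    """log sigma(v'_{wO} . v_{wI}) + sum_i log sigma(-v'_{wi} . v_{wI}).

    v_in: (d,) vector of the context word w_I; v_out: (d,) vector of the
    target word w_O; v_neg: (k, d) vectors of the k negative samples.
    """
    pos = F.logsigmoid(v_out @ v_in)
    neg = F.logsigmoid(-(v_neg @ v_in)).sum()
    return pos + neg
```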
A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation
Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.
Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:
where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance).
Experiments
We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.
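As an illustration of the preprocessing step (not the authors' code), the released fastText `.vec` text file can be truncated to the first 22,000 entries as follows; the file name in the usage comment is only an example:

```python
import numpy as np

def load_vec(path, max_words=22000):
    """Read a fastText .vec file: a header line "<count> <dim>" followed by
    one "<word> <v1> ... <vd>" line per word; keep only the first max_words."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as f:
        n, d = map(int, f.readline().split())
        for _ in range(min(max_words, n)):
            parts = f.readline().rstrip().split(" ")
            words.append(parts[0])
            vecs.append(np.asarray(parts[1:], dtype=np.float32))
    return words, np.vstack(vecs)

# words, vectors = load_vec("wiki-news-300d-1M.vec")  # example file name
```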
We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.
We compare this method of bias mitigation with no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN"), and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge, this is the first bias mitigation method to perform reasonably well on both metrics.
Discussion
We proposed a simple method of bias mitigation based on these probabilistic notions of fairness, and showed that it leads to promising results on various benchmark bias mitigation tasks. Future work should include considering a more rigorous, non-binary definition of bias and experimenting with various embedding algorithms and network architectures.
Discussion ::: Acknowledgements
The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work.
Experiment Notes
For Equation 4, as described in the original work, the $k$ sample words $w_i$ are drawn from the corpus using the unigram distribution raised to the 3/4 power.
For reference, the most male socially-biased words include words such as: 'john', 'jr', 'mlb', 'dick', 'nfl', 'cfl', 'sgt', 'abbot', 'halfback', 'jock', 'mike', 'joseph', while the most female socially-biased words include words such as: 'feminine', 'marital', 'tatiana', 'pregnancy', 'eva', 'pageant', 'distress', 'cristina', 'ida', 'beauty', 'sexuality', 'fertility'.
Professions
'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'
WEAT Word Sets
Words used for WEAT statistic, consisting of baseline bias tests and gender bias tests in the format X vs Y / A vs B
Flowers vs Insects / Pleasant vs Unpleasant
X: "aster", "clover", "hyacinth", "marigold", "poppy", "azalea", "crocus", "iris", "orchid", "rose", "bluebell", "daffodil", "lilac", "pansy", "tulip", "buttercup", "daisy", "lily", "peony", "violet", "carnation", "gladiola", "magnolia", "petunia", "zinnia"
Y: "ant", "caterpillar", "flea", "locust", "spider", "bedbug", "centipede", "fly", "maggot", "tarantula", "bee", "cockroach", "gnat", "mosquito", "termite", "beetle", "cricket", "hornet", "moth", "wasp", "blackfly", "dragonfly", "horsefly", "roach", "weevil"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Instruments vs Weapons / Pleasant vs Unpleasant:
X: "bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo", "flute", "horn", "saxophone", "violin"
Y: "arrow", "club", "gun", "missile", "spear", "ax", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade", "mace", "slingshot", "whip"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Male vs Female / Career vs Family:
X: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
Y: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
A: "executive", "management", "professional", "corporation", "salary", "office", "business", "career", "industry", "company", "promotion", "profession", "CEO", "manager", "coworker", "entrepreneur"
B: "home", "parents", "children", "family", "cousins", "marriage", "wedding", "relatives", "grandparents", "grandchildren", "nurture", "child", "toddler", "infant", "teenager"
Math vs Art / Male vs Female:
X: "math", "algebra", "geometry", "calculus", "equations", "computation", "numbers", "addition", "trigonometry", "arithmetic", "logic", "proofs", "multiplication", "mathematics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
Science vs Art / Male8 vs Female8:
X:"science", "technology", "physics", "chemistry", "Einstein", "NASA", "experiment", "astronomy", "biology", "aeronautics", "mechanics", "thermodynamics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend" | RIPA, Neighborhood Metric, WEAT |
31b6544346e9a31d656e197ad01756813ee89422 | 31b6544346e9a31d656e197ad01756813ee89422_0 | Q: What are the probabilistic observations which contribute to the more robust algorithm?
Text: Introduction
Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models.
The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words.
In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work.
We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms.
Background ::: Geometric Bias Mitigation
Geometric bias mitigation uses the cosine distances between words to both measure and remove gender bias BIBREF0. This method implicitly defines bias as a geometric asymmetry between words when projected onto a subspace, such as the gender subspace constructed from a set of gender pairs such as $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen)...\rbrace $. The projection of a vector $v$ onto $B$ (the subspace) is defined by $v_B = \sum _{j=1}^{k} (v \cdot b_j) b_j$ where a subspace $B$ is defined by $k$ orthogonal unit vectors $B = \lbrace b_1,...,b_k\rbrace $.
Background ::: Geometric Bias Mitigation ::: WEAT
The WEAT statistic BIBREF1 demonstrates the presence of biases in word embeddings with an effect size defined as the mean test statistic across the two word sets:
Where $s$, the test statistic, is defined as: $s(w,A,B) = mean_{a \in A} cos(w,a) - mean_{b \in B} cos(w,b)$, and $X$,$Y$,$A$, and $B$ are groups of words for which the association is measured. Possible values range from $-2$ to 2 depending on the association of the word groups, and a value of zero indicates $X$ and $Y$ are equally associated with $A$ and $B$. See BIBREF4 for further details on WEAT.
Background ::: Geometric Bias Mitigation ::: RIPA
The RIPA (relational inner product association) metric was developed as an alternative to WEAT, with the critique that WEAT is likely to overestimate the bias of a target attribute BIBREF4. The RIPA metric formalizes the measure of bias used in geometric bias mitigation as the inner product association of a word vector $v$ with respect to a relation vector $b$. The relation vector is constructed from the first principal component of the differences between gender word pairs. We report the absolute value of the RIPA metric as the value can be positive or negative according to the direction of the bias. A value of zero indicates a lack of bias, and the value is bound by $[-||w||,||w||]$.
Background ::: Geometric Bias Mitigation ::: Neighborhood Metric
The neighborhood bias metric proposed by BIBREF5 quantifies bias as the proportion of male socially-biased words among the $k$ nearest socially-biased male and female neighboring words, whereby biased words are obtained by projecting neutral words onto a gender relation vector. As we only examine the target word among the 1000 most socially-biased words in the vocabulary (500 male and 500 female), a word’s bias is measured as the ratio of its neighborhood of socially-biased male and socially-biased female words, so that a value of 0.5 in this metric would indicate a perfectly unbiased word, and values closer to 0 and 1 indicate stronger bias.
A Probabilistic Framework for Bias Mitigation
Our objective here is to extend and complement the geometric notions of word embedding bias described in the previous section with an alternative, probabilistic, approach. Intuitively, we seek a notion of equality akin to that of demographic parity in the fairness literature, which requires that a decision or outcome be independent of a protected attribute such as gender BIBREF7. Similarly, when considering a probabilistic definition of unbiasedness in word embeddings, we can consider the conditional probabilities of word pairs, ensuring for example that $p(doctor|man) \approx p(doctor|woman)$, and can extend this probabilistic framework to include the neighborhood of a target word, addressing the potential pitfalls of geometric bias mitigation.
Conveniently, most word embedding frameworks allow for immediate computation of the conditional probabilities $P(w|c)$. Here, we focus our attention on the Skip-Gram method with Negative Sampling (SGNS) of BIBREF8, although our framework can be equivalently instantiated for most other popular embedding methods, owing to their core similarities BIBREF6, BIBREF9. Leveraging this probabilistic nature, we construct a bias mitigation method in two steps, and examine each step as an independent method as well as the resulting composite method.
A Probabilistic Framework for Bias Mitigation ::: Probabilistic Bias Mitigation
This component of our bias mitigation framework seeks to enforce that the probability of prediction or outcome cannot depend on a protected class such as gender. We can formalize this intuitive goal through a loss function that penalizes the discrepancy between the conditional probabilities of a target word (i.e., one that should not be affected by the protected attribute) conditioned on two words describing the protected attribute (e.g., man and woman in the case of gender). That is, for every target word we seek to minimize:
where $\mathcal {P} = \lbrace (he,she),(man,woman),(king,queen), \dots \rbrace $ is a set of word pairs characterizing the protected attribute, akin to that used in previous work BIBREF0.
At this point, the specific form of the objective will depend on the type of word embeddings used. For our example of SGNS, recall that this algorithm models the conditional probability of a target word given a context word as a function of the inner product of their representations. Though computing the conditional probability exactly requires summing over the conditional probabilities of all words in the vocabulary, we can use the estimate of the log conditional probability proposed by BIBREF8, i.e., $ \log p(w_O|w_I) \approx \log \sigma ({v^{\prime }_{wo}}^T v_{wI}) + \sum _{i=1}^{k} [\log {\sigma ({{-v^{\prime }_{wi}}^T v_{wI}})}] $.
A Probabilistic Framework for Bias Mitigation ::: Nearest Neighbor Bias Mitigation
Based on observations by BIBREF5, we extend our method to consider the composition of the neighborhood of socially-gendered words of a target word. We note that bias in a word embedding depends not only on the relationship between a target word and explicitly gendered words like man and woman, but also between a target word and socially-biased male or female words. Bolukbasi et al BIBREF0 proposed a method for eliminating this kind of indirect bias through geometric bias mitigation, but it is shown to be ineffective by the neighborhood metric BIBREF5.
Instead, we extend our method of bias mitigation to account for this neighborhood effect. Specifically, we examine the conditional probabilities of a target word given the $k/2$ nearest neighbors from the male socially-biased words as well as given the $k/2$ female socially-biased words (in sorted order, from smallest to largest). The groups of socially-biased words are constructed as described in the neighborhood metric. If the word is unbiased according to the neighborhood metric, these probabilities should be comparable. We then use the following as our loss function:
where $m$ and $f$ represent the male and female neighbors sorted by distance to the target word $t$ (we use $L1$ distance).
Experiments
We evaluate our framework on fastText embeddings trained on Wikipedia (2017), UMBC webbase corpus and statmt.org news dataset (16B tokens) BIBREF11. For simplicity, only the first 22000 words are used in all embeddings, though preliminary results indicate the findings extend to the full corpus. For our novel methods of mitigating bias, a shallow neural network is used to adjust the embedding. The single layer of the model is an embedding layer with weights initialized to those of the original embedding. For the composite method, these weights are initialized to those of the embedding after probabilistic bias mitigation. A batch of word indices is fed into the model, which are then embedded and for which a loss value is calculated, allowing back-propagation to adjust the embeddings. For each of the models, a fixed number of iterations is used to prevent overfitting, which can eventually hurt performance on the embedding benchmarks (See Figure FIGREF12). We evaluated the embedding after 1000 iterations, and stopped training if performance on a benchmark decreased significantly.
We construct a list of candidate words to debias, taken from the words used in the WEAT gender bias statistics. Words in this list should be gender neutral, and are related to the topics of career, arts, science, math, family and professions (see appendix). We note that this list can easily be expanded to include a greater proportion of words in the corpus. For example, BIBREF4 suggested a method for identifying inappropriately gendered words using unsupervised learning.
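A sketch of how such a candidate list could be assembled from the appendix word sets is shown below; the selections are abbreviated excerpts of those sets, not the full lists used in the paper.

```python
career = ["executive", "management", "professional", "salary", "office", "career"]
family = ["home", "parents", "children", "family", "marriage", "wedding"]
math_words = ["math", "algebra", "geometry", "calculus", "equations"]
arts = ["poetry", "art", "dance", "literature", "novel", "sculpture"]
science = ["science", "technology", "physics", "chemistry", "astronomy"]
professions = ["accountant", "doctor", "lawyer", "nurse", "teacher"]  # truncated

# gender-neutral debiasing candidates; explicitly gendered attribute words
# (he, she, man, woman, ...) are deliberately excluded
candidates = sorted(set(career + family + math_words + arts + science + professions))
```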
We compare this method of bias mitigation with no bias mitigation ("Orig"), geometric bias mitigation ("Geo"), the two pieces of our method alone ("Prob" and "KNN"), and the composite method ("KNN+Prob"). We note that the composite method performs reasonably well according to the RIPA metric, and much better than traditional geometric bias mitigation according to the neighborhood metric, without significant performance loss according to the accepted benchmarks. To our knowledge, this is the first bias mitigation method to perform reasonably well on both metrics.
Discussion
We proposed a simple method of bias mitigation based on these probabilistic notions of fairness, and showed that it leads to promising results on various benchmark bias mitigation tasks. Future work should include considering a more rigorous, non-binary definition of bias and experimenting with various embedding algorithms and network architectures.
Discussion ::: Acknowledgements
The authors would like to thank Tommi Jaakkola for stimulating discussions during the initial stages of this work.
Experiment Notes
For Equation 4, as described in the original work, the $k$ sample words $w_i$ are drawn from the corpus using the unigram distribution raised to the 3/4 power.
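A small numpy sketch of that sampling distribution (illustrative only):

```python
import numpy as np

def negative_sampling_dist(counts):
    """Unigram distribution raised to the 3/4 power, used to draw the k
    negative-sample words w_i in Equation 4."""
    p = np.asarray(counts, dtype=np.float64) ** 0.75
    return p / p.sum()

# example: draw k = 5 negative-sample ids given an array of raw corpus counts
# neg_ids = np.random.choice(len(counts), size=5, p=negative_sampling_dist(counts))
```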
For reference, the most male socially-biased words include words such as: 'john', 'jr', 'mlb', 'dick', 'nfl', 'cfl', 'sgt', 'abbot', 'halfback', 'jock', 'mike', 'joseph', while the most female socially-biased words include words such as: 'feminine', 'marital', 'tatiana', 'pregnancy', 'eva', 'pageant', 'distress', 'cristina', 'ida', 'beauty', 'sexuality', 'fertility'.
Professions
'accountant', 'acquaintance', 'actor', 'actress', 'administrator', 'adventurer', 'advocate', 'aide', 'alderman', 'ambassador', 'analyst', 'anthropologist', 'archaeologist', 'archbishop', 'architect', 'artist', 'assassin', 'astronaut', 'astronomer', 'athlete', 'attorney', 'author', 'baker', 'banker', 'barber', 'baron', 'barrister', 'bartender', 'biologist', 'bishop', 'bodyguard', 'boss', 'boxer', 'broadcaster', 'broker', 'businessman', 'butcher', 'butler', 'captain', 'caretaker', 'carpenter', 'cartoonist', 'cellist', 'chancellor', 'chaplain', 'character', 'chef', 'chemist', 'choreographer', 'cinematographer', 'citizen', 'cleric', 'clerk', 'coach', 'collector', 'colonel', 'columnist', 'comedian', 'comic', 'commander', 'commentator', 'commissioner', 'composer', 'conductor', 'confesses', 'congressman', 'constable', 'consultant', 'cop', 'correspondent', 'counselor', 'critic', 'crusader', 'curator', 'dad', 'dancer', 'dean', 'dentist', 'deputy', 'detective', 'diplomat', 'director', 'doctor', 'drummer', 'economist', 'editor', 'educator', 'employee', 'entertainer', 'entrepreneur', 'envoy', 'evangelist', 'farmer', 'filmmaker', 'financier', 'fisherman', 'footballer', 'foreman', 'gangster', 'gardener', 'geologist', 'goalkeeper', 'guitarist', 'headmaster', 'historian', 'hooker', 'illustrator', 'industrialist', 'inspector', 'instructor', 'inventor', 'investigator', 'journalist', 'judge', 'jurist', 'landlord', 'lawyer', 'lecturer', 'legislator', 'librarian', 'lieutenant', 'lyricist', 'maestro', 'magician', 'magistrate', 'maid', 'manager', 'marshal', 'mathematician', 'mechanic', 'midfielder', 'minister', 'missionary', 'monk', 'musician', 'nanny', 'narrator', 'naturalist', 'novelist', 'nun', 'nurse', 'observer', 'officer', 'organist', 'painter', 'pastor', 'performer', 'philanthropist', 'philosopher', 'photographer', 'physician', 'physicist', 'pianist', 'planner', 'playwright', 'poet', 'policeman', 'politician', 'preacher', 'president', 'priest', 'principal', 'prisoner', 'professor', 'programmer', 'promoter', 'proprietor', 'prosecutor', 'protagonist', 'provost', 'psychiatrist', 'psychologist', 'rabbi', 'ranger', 'researcher', 'sailor', 'saint', 'salesman', 'saxophonist', 'scholar', 'scientist', 'screenwriter', 'sculptor', 'secretary', 'senator', 'sergeant', 'servant', 'singer', 'skipper', 'sociologist', 'soldier', 'solicitor', 'soloist', 'sportsman', 'statesman', 'steward', 'student', 'substitute', 'superintendent', 'surgeon', 'surveyor', 'swimmer', 'teacher', 'technician', 'teenager', 'therapist', 'trader', 'treasurer', 'trooper', 'trumpeter', 'tutor', 'tycoon', 'violinist', 'vocalist', 'waiter', 'waitress', 'warden', 'warrior', 'worker', 'wrestler', 'writer'
WEAT Word Sets
Words used for the WEAT statistic, consisting of baseline bias tests and gender bias tests, in the format X vs Y / A vs B. A sketch of how the statistic is computed from these sets is given directly below, before the word lists.
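The following is a minimal sketch of the WEAT test statistic and effect size computed over pre-trained vectors from these sets; the `vectors` lookup (word to numpy array) is a hypothetical stand-in and not part of the original experiments.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, vectors):
    """s(w, A, B): mean cosine similarity of w to attribute set A minus that to B."""
    sim_a = np.mean([cosine(vectors[w], vectors[a]) for a in A])
    sim_b = np.mean([cosine(vectors[w], vectors[b]) for b in B])
    return sim_a - sim_b

def weat(X, Y, A, B, vectors):
    """WEAT test statistic and effect size for target sets X, Y and attributes A, B."""
    s_x = [association(x, A, B, vectors) for x in X]
    s_y = [association(y, A, B, vectors) for y in Y]
    statistic = float(np.sum(s_x) - np.sum(s_y))
    effect_size = float((np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1))
    return statistic, effect_size
```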
Flowers vs Insects / Pleasant vs Unpleasant
X: "aster", "clover", "hyacinth", "marigold", "poppy", "azalea", "crocus", "iris", "orchid", "rose", "bluebell", "daffodil", "lilac", "pansy", "tulip", "buttercup", "daisy", "lily", "peony", "violet", "carnation", "gladiola", "magnolia", "petunia", "zinnia"
Y: "ant", "caterpillar", "flea", "locust", "spider", "bedbug", "centipede", "fly", "maggot", "tarantula", "bee", "cockroach", "gnat", "mosquito", "termite", "beetle", "cricket", "hornet", "moth", "wasp", "blackfly", "dragonfly", "horsefly", "roach", "weevil"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Instruments vs Weapons / Pleasant vs Unpleasant:
X: "bagpipe", "cello", "guitar", "lute", "trombone", "banjo", "clarinet", "harmonica", "mandolin", "trumpet", "bassoon", "drum", "harp", "oboe", "tuba", "bell", "fiddle", "harpsichord", "piano", "viola", "bongo", "flute", "horn", "saxophone", "violin"
Y: "arrow", "club", "gun", "missile", "spear", "ax", "dagger", "harpoon", "pistol", "sword", "blade", "dynamite", "hatchet", "rifle", "tank", "bomb", "firearm", "knife", "shotgun", "teargas", "cannon", "grenade", "mace", "slingshot", "whip"
A: "caress", "freedom", "health", "love", "peace", "cheer", "friend", "heaven", "loyal", "pleasure", "diamond", "gentle", "honest", "lucky", "rainbow", "diploma", "gift", "honor", "miracle", "sunrise", "family", "happy", "laughter", "paradise", "vacation"
B: "abuse", "crash", "filth", "murder", "sickness", "accident", "death", "grief", "poison", "stink", "assault", "disaster", "hatred", "pollute", "tragedy", "divorce", "jail", "poverty", "ugly", "cancer", "kill", "rotten", "vomit", "agony", "prison"
Male vs Female / Career vs Family:
X: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
Y: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
A: "executive", "management", "professional", "corporation", "salary", "office", "business", "career", "industry", "company", "promotion", "profession", "CEO", "manager", "coworker", "entrepreneur"
B: "home", "parents", "children", "family", "cousins", "marriage", "wedding", "relatives", "grandparents", "grandchildren", "nurture", "child", "toddler", "infant", "teenager"
Math vs Art / Male vs Female:
X: "math", "algebra", "geometry", "calculus", "equations", "computation", "numbers", "addition", "trigonometry", "arithmetic", "logic", "proofs", "multiplication", "mathematics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend", "king", "actor"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend", "queen", "actress"
Science vs Art / Male8 vs Female8:
X:"science", "technology", "physics", "chemistry", "Einstein", "NASA", "experiment", "astronomy", "biology", "aeronautics", "mechanics", "thermodynamics"
Y: "poetry", "art", "Shakespeare", "dance", "literature", "novel", "symphony", "drama", "orchestra", "music", "ballet", "arts", "creative", "sculpture"
A: "brother", "father", "uncle", "grandfather", "son", "he", "his", "him", "man", "himself", "men", "husband", "boy", "uncle", "nephew", "boyfriend"
B: "sister", "mother", "aunt", "grandmother", "daughter", "she", "hers", "her", "woman", "herself", "women", "wife", "aunt", "niece", "girlfriend" | Unanswerable |
347e86893e8002024c2d10f618ca98e14689675f | 347e86893e8002024c2d10f618ca98e14689675f_0 | Q: What turn out to be more important high volume or high quality data?
Text: Introduction
In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario: languages with tons of data for training that generate high-quality models.
For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting carries over to the low-resourced one is not necessarily true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.
In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.
The rest of the paper is organized as follows. Related work is reviewed in Section SECREF2. The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. Finally, discussion and concluding remarks are given in Section SECREF6.
Related Work
The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.
On the other hand, different approaches specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer knowledge from a related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, which is especially important for small corpora.
In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores on a scale from 0 to 10. Despite the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations. In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 also translated wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 used Google Cloud to translate the test set from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to ours is taken for Gujarati in JoshiEtAl:2019.
Languages under Study ::: Yorùbá
is a language of West Africa with over 50 million speakers. It is spoken, among other languages, in Nigeria, the Republic of Togo, Benin Republic, Ghana and Sierra Leone. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is standard Yorùbá, based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.
Standard Yorùbá has 25 letters, without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w, y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels (an, ẹn, in, ọn, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones, indicated by diacritics. There are three tones in Yorùbá, namely low, mid and high, which are represented by the grave ($\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. The mid tone is usually left unmarked on vowels, and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to determine the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọwọ̀ (broom), òwò (business), ọ̀wọ̀ (honour), ọwọ́ (hand), and ọ̀wọ́ (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics: 3 are for marking tones, while the fourth, the dot below, is used to indicate the open phonetic variants of the letters "e" and "o" and the long variant of "s". Also, there are 19 single-diacritic letters, 3 of which are marked with dots below (ẹ, ọ, ṣ) while the rest carry either the grave or the acute accent. The four double-diacritic letters are divided between the grave and the acute accent as well.
As noted in Asahiah2014, most of the Yorùbá texts found on websites or in public domain repositories either (i) use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.
This happens as a result of many factors, most especially the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved, and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is no exception.
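To make the effect of missing diacritics concrete, the sketch below strips tonal marks and under-dots with Unicode normalization, collapsing the distinct words discussed above onto a single undiacritized form; it is only an illustration of the noise described here, not part of the original pipeline.

```python
import unicodedata

def strip_diacritics(text):
    """Decompose characters and drop all combining marks (tones and under-dots)."""
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

# Four distinct Yorùbá words all collapse to the same surface form "owo".
for word in ["owó", "ọwọ́", "òwò", "ọ̀wọ̀"]:
    print(word, "->", strip_diacritics(word))
```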
Languages under Study ::: Twi
is an Akan language of the Central Tano branch of the Niger-Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. It has about 9 million native speakers, and a total of about 17–18 million Ghanaians speak it as either a first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectal variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and, to a large extent, Bono, another of the Akan languages. It is one of the easiest, if not the easiest, of the indigenous Ghanaian languages to learn to speak. The same is, however, not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but is written without diacritics or accents. As a result, words which are pronounced differently and are unambiguous in speech tend to be ambiguous in writing. Besides, most such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:
Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.
This sentence could be translated as
(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.
(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.
Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words. For instance, “na” or “ɛna” could be the words “and, then”, the phrase “and then” or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from text.
Finally, we want to point out that words can also be written with or without prefixes. An example is this same na and ɛna, which happen to be the same word with an omissible prefix across its multiple senses. For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle. This, however, depends on the author/speaker. For the word embedding calculation, this implies that one would have different embeddings for the same word found in different contexts.
Data
We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where noisy has a different meaning depending on the language, as will be explained in the next subsections.
Data ::: Training Corpora
For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully-diacritized Yorùbá language websites and an online Bible. The largest source with clean data is the JW300 corpus. We also created our own small-sized corpus by web-crawling three Yorùbá language websites (Alàkwé, r Yorùbá and Èdè Yorùbá Rẹw in Table TABREF7), some Yorùbá tweets with full diacritics and also news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics, which we use to introduce noise. By noisy corpus, we refer to texts with incorrect diacritics (e.g. in BBC Yorùbá), removal of tonal symbols (e.g. in VON Yorùbá) and removal of all diacritics/under-dots (e.g. some articles in Yorùbá Wikipedia). Furthermore, we obtained two manually typed, fully-diacritized Yorùbá literary works (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè), both written by Daniel Orowole Olorunfemi Fagunwa, a popular Yorùbá author. The number of tokens available from each source, the link to the original source and the quality of the data are summarised in Table TABREF7.
The gathering of clean data in Twi is more difficult. We use the Bible as the base text, as it has been shown that the Bible is the most available resource for low and endangered languages BIBREF18. This is the cleanest of all the text we could obtain. In addition, we use the available (and small) Wikipedia dumps, which are quite noisy, i.e. Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (formulated as L2 speakers would speak Twi as compared to native speakers). Lastly, we added text crawled from jw and the JW300 Twi corpus. Notice that the Bible text is mainly written in the Asante dialect whilst the last, Jehovah's Witnesses, was written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings, as the spelling of most words differs, especially at the end of words, due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuapem. In this case, the noise also comes from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels. Figures for Twi corpora are summarised in the bottom block of Table TABREF7.
Data ::: Evaluation Test Sets ::: Yorùbá.
One of the contributions of this work is the introduction of the wordsim-353 word pairs dataset for Yorùbá. All the 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translations in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations but also other content words) translations are transliterations (e.g. Doctor is dókítà and cucumber kùkúmbà). 98 words were translated by short phrases instead of single words. This mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtwé —literally meaning typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, and ecology translates to ìm nípa àyíká while psychology translates to ìm nípa dá). Finally, 6 terms have the same form in English and Yorùbá and are therefore retained as they are in the dataset (e.g. Jazz, Rock and acronyms such as FBI or OPEC).
We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with "O". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition.
Data ::: Evaluation Test Sets ::: Twi
Just like Yorùbá, the wordsim-353 word pairs dataset was translated for Twi. Out of the 353 word pairs, 274 were used in this case. The remaining 79 pairs contain words that translate into longer phrases.
The number of words that can be translated by a single token is higher than for Yorùbá. Within the 274 pairs, there are 351 unique English words which translate to 310 unique Twi words. Of the 310 Twi words, 298 are single-word translations, 4 are transliterations and 16 are used as is.
Even though JoubarneInkpen:2011 showed indications that semantic similarity correlates highly across languages, different nuances between words are captured differently by languages. For instance, both money and currency in English translate into sika in Twi (another 32 English words, which translate to 14 Twi words, belong to this category), and drink in English is translated as nsa or nom depending on the part of speech (noun for the former, verb for the latter). 17 English words fall into this category. In translating these, we picked the translation that best suits the context (the other word in the pair). In two cases, the correlation does not hold at all: soap–opera and star–movies are not related in the Twi language, and the scores have been modified accordingly.
Semantic Representations
In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task.
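As a reference for the word-similarity evaluation used in this section, the sketch below computes the Spearman correlation between human judgements and cosine similarities; the pair list and vector lookup are hypothetical stand-ins for the translated wordsim-353 data.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_wordsim(pairs, vectors):
    """pairs: iterable of (word1, word2, human_score); vectors: dict word -> np.ndarray.
    Returns Spearman's rho over the pairs whose words are in the vocabulary."""
    human, model = [], []
    for w1, w2, score in pairs:
        if w1 in vectors and w2 in vectors:
            human.append(score)
            model.append(cosine(vectors[w1], vectors[w2]))
    rho, _ = spearmanr(human, model)
    return rho
```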
Semantic Representations ::: Word Embeddings Architectures
Modeling sub-word units has recently become a popular way to address the out-of-vocabulary word problem in NLP, especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, a character $n$-gram, or heuristically learned Byte Pair Encodings (BPE), which work very well in practice, especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both of them are extensions of the Word2Vec architectures BIBREF0 that model sub-word units: character embeddings in the case of CWE and character $n$-grams for fastText.
CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and non-compositional words, especially in the Chinese language. A word or character embedding is learned in CWE using either the CBOW or skipgram architecture, and then the final word embedding is computed by adding the character embeddings to the word itself:
$\hat{w}_j = w_j + \frac{1}{N_j} \sum _{k=1}^{N_j} c_k$
where $\hat{w}_j$ is the final embedding of the word $x_j$, $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character in $x_j$.
Similarly, in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams:
$\hat{w}_j = w_j + \sum _{k=1}^{G_j} g_k$
where $\hat{w}_j$ is the final embedding of $x_j$, $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.
The authors of CWE also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings, where each character has different embeddings depending on the position it appears in within a word, i.e., beginning, middle or end; (ii) cluster-based character embeddings, where a character can have $K$ different cluster embeddings; and (iii) position-based cluster embeddings (CWE-LP), where for each position $K$ different embeddings are learned. We use the latter in our experiments with CWE, but no positional embeddings are used with fastText.
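A minimal sketch of the two composition schemes described above, assuming toy embedding lookup tables; it mirrors word-plus-averaged-character vectors for CWE and word-plus-summed-$n$-gram vectors for fastText, without the position- or cluster-based extensions.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams with boundary markers, in the fastText style."""
    marked = f"<{word}>"
    return [marked[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]

def cwe_embedding(word, word_vecs, char_vecs):
    """CWE-style composition: word vector plus the average of its character vectors."""
    chars = [char_vecs[c] for c in word if c in char_vecs]
    if not chars:
        return word_vecs[word]
    return word_vecs[word] + np.mean(chars, axis=0)

def fasttext_embedding(word, word_vecs, ngram_vecs):
    """fastText-style composition: word vector plus the sum of its n-gram vectors."""
    grams = [ngram_vecs[g] for g in char_ngrams(word) if g in ngram_vecs]
    if not grams:
        return word_vecs[word]
    return word_vecs[word] + np.sum(grams, axis=0)
```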
Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks.
The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.
Training contextual embeddings requires huge amounts of text, which are not available for low-resourced languages such as Yorùbá and Twi. However, Google provided pre-trained multilingual embeddings for 102 languages, including Yorùbá (but not Twi).
Semantic Representations ::: Experiments ::: FastText Training and Evaluation
As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.
Facebook released pre-trained word embeddings using fastText for 294 languages trained on Wikipedia BIBREF2 (F1 in tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available but only embeddings trained on Wikipedia are available for Twi. We consider these embeddings the result of training on what we call massively-extracted corpora. Notice that the training settings for both embeddings are not exactly the same, and differences in performance might come both from corpus size/quality and from the background model. The 294-languages version is trained using skipgram, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 5 negatives. The 157-languages version is trained using CBOW with position-weights, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 10 negatives.
We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorùbá: what we call curated corpora, described in Section SECREF4. For the comparison, we define 3 datasets according to the quality and quantity of textual data used for training: (i) Curated Small Dataset (clean), C1, about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi. The clean text for Twi is the Bible, and for Yorùbá all texts marked under the C1 column in Table TABREF7. (ii) In Curated Small Dataset (clean + noisy), C2, we add noise to the clean corpus (Wikipedia articles for Twi, and BBC Yorùbá news articles for Yorùbá). This increases the number of training tokens for Twi to 742 k tokens and for Yorùbá to about 2 million tokens. (iii) Curated Large Dataset, C3, consists of all available texts we are able to crawl and source, either clean or noisy. The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.
We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, a context window size of 5, 10 negatives and $n$-grams ranging from 3 to 6 characters, similarly to the pre-trained models for both languages. Best results are obtained with a minimum word count of 3.
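The following sketch reproduces these hyperparameters with the gensim implementation of fastText; gensim and the corpus path are assumptions for illustration, since the original training setup is not tied to a specific toolkit.

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

# Hypothetical path to one of the curated corpora, one sentence per line.
sentences = LineSentence("curated_corpus.txt")

model = FastText(
    sentences=sentences,
    sg=1,             # skipgram
    vector_size=300,  # embedding size of 300 dimensions
    window=5,         # context window size of 5
    negative=10,      # 10 negative samples
    min_n=3,          # character n-grams from 3 ...
    max_n=6,          # ... to 6 characters
    min_count=3,      # minimum word count of 3
)
model.wv.save("fasttext_curated.kv")
```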
Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that pre-trained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\rho $=$0.14$) and their performance is even lower when Common Crawl is also considered ($\rho $=$0.07$ for Yorùbá). An important reason for the low performance is the limited vocabulary. The pre-trained Twi model has only 935 tokens. For Yorùbá, things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages mostly English, French and Arabic.
If we focus only on Wikipedia, we see that many texts are without diacritics in Yorùbá and often make use of mixed dialects and English sentences in Twi.
The Spearman $\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves the baselines by a large margin ($\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be justified just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found that adding some noisy texts (C2 dataset) slightly improves the correlation for the Twi language but not for the Yorùbá language. The Twi language benefits from the Wikipedia articles because their inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks, which increases the vocabulary size at the cost of an increase in ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but the results obtained with the C3 dataset show that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and has full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\Delta \rho =+0.25$ or, equivalently, by an increase in $\rho $ of 170% (Twi) and 180% (Yorùbá).
Semantic Representations ::: Experiments ::: CWE Training and Evaluation
The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.
The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings and an embedding size of 300 dimensions, context window-size of 5, and 5 negative samples. In this case, the best performance is obtained with a minimum word count of 1, and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.
We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.
According to the results, CWE is especially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text does not have the ambiguity issues at the character level, so the $n$-gram approximation works better when enough clean data is used ($\rho ^{C3}_{CWE}=0.354$ vs. $\rho ^{C3}_{fastText}=0.391$), but it does not when too much noisy data (no diacritics, so character-level information would be needed) is used ($\rho ^{C2}_{CWE}=0.345$ vs. $\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data, and the best correlation with human judgements is reached with CWE embeddings ($\rho ^{C2}_{CWE}=0.437$ vs. $\rho ^{C2}_{fastText}=0.388$).
Semantic Representations ::: Experiments ::: BERT Evaluation on NER Task
In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning a named entity recognition task on the Yorùbá Global Voices corpus.
One of the major advantages of pre-trained BERT embeddings is that fine-tuning the model on downstream NLP tasks is typically computationally inexpensive, often requiring only a few epochs. However, the data the embeddings are trained on has the same limitations as that used for massive word embeddings. Fine-tuning involves replacing the last layer of BERT used for optimizing the masked LM with a task-dependent linear classifier or any other deep learning architecture, and training all the model parameters end-to-end. For the NER task, we obtain the token-level representations from BERT and train a linear classifier for sequence tagging.
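A minimal sketch of this setup with the HuggingFace transformers library; the library choice and the BIO label scheme over the four entity types are assumptions for illustration rather than the authors' exact implementation.

```python
import torch
from transformers import BertForTokenClassification, BertTokenizerFast

# BIO tagging over the four entity types described above (scheme assumed).
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-uncased", num_labels=len(labels)
)

# Hypothetical sentence; real inputs come from the annotated Global Voices corpus.
encoding = tokenizer("Global Voices Yorùbá", return_tensors="pt")
# Dummy all-"O" labels just to show the training signal; real labels are the NER
# annotations aligned to the word pieces.
token_labels = torch.zeros_like(encoding["input_ids"])

loss = model(**encoding, labels=token_labels).loss
loss.backward()  # all parameters are then updated end-to-end by an optimizer step
```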
Similar to our observations with non-contextualized embeddings, we find out that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we do the same experiment in English, F1 is 58.1 after 4 epochs.
That shows how pre-trained embeddings by themselves do not perform well in downstream tasks on low-resource languages. To address this problem for Yorùbá, we fine-tune BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only Yorùbá vocabulary. In both cases diacritics are ignored to be consistent with the base model training.
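A sketch of what the continued masked-LM fine-tuning on the Yorùbá corpus could look like with the HuggingFace transformers library; the library, file path and hyperparameters are illustrative assumptions, and the Yorùbá-only-vocabulary variant would additionally require building a new WordPiece vocabulary, which is omitted here.

```python
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-uncased")

# Hypothetical path to the undiacritized Yorùbá corpus, one sentence per line.
dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="yoruba_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-yoruba-mlm", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()  # the adapted encoder can then be reused for the NER fine-tuning above
```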
As expected, fine-tuning the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models are able to achieve a better performance on the NER task, with an average F1 score of over 47% (see Table TABREF26 for the comparison). The fine-tuned BERT model with only the Yorùbá vocabulary further improves the F1 score by more than 4% over the tuning that uses the multilingual vocabulary. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds true for high-resource languages like German and French BIBREF23.
Summary and Discussion
In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for these languages. We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim-353) and of the contextual embeddings within a NER task. Corpora, embeddings and test sets are available on GitHub.
In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to their performance for high-resourced ones. This is due both to the quantity and to the quality of the data used. While the Pearson $\rho $ correlations for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\rho _{WP}$=$0.67$ and $\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\rho _{WP}$=$0.14$ and $\rho _{WP+CC}$=$0.07$. For Twi, only embeddings trained on Wikipedia are available ($\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models for the characteristics of each language, we deliver embeddings with correlations of $\rho $=$0.39$ (Yorùbá) and $\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models, but representing an improvement of over $170\%$ on the task.
In a low-resourced setting, data quality, processing and model selection are more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization in our case) should be taken into account in order to choose the relevant data and model to use. As an example, Twi word embeddings are significantly better when trained on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes into account single-character information (CWE-LP) instead of $n$-gram information (fastText).
Finally, we want to note that, even within a corpus, the quality of the data might depend on the language. Wikipedia is usually used as a high-quality, freely available multilingual corpus as compared to noisier data such as Common Crawl. However, for the two languages under study, Wikipedia turned out to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and a mixture of dialects. The JW300 corpus, on the other hand, has been rated as high-quality by our native Yorùbá speakers, but as noisy by our native Twi speakers. In both cases, the experiments confirm these assessments.
Acknowledgements
The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria, and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs, and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with the clean Yorùbá corpus.
The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors. | only high-quality data helps |
347e86893e8002024c2d10f618ca98e14689675f | 347e86893e8002024c2d10f618ca98e14689675f_1 | Q: What turn out to be more important high volume or high quality data?
Text: Introduction
In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario, languages with tones of data for training that generate high-quality models.
For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced one does not need to be true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.
In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.
The rest of the paper is organized as follows. Related works are reviewed in Section SECREF2 The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. Finally, discussion and concluding remarks are given in Section SECREF6
Related Work
The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.
On the other hand, different approaches try to specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer the knowledge from related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, and this is specially important for small corpora.
In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores in a scale from 0 to 10. Even the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations. In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 translated also wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 translated with Google Cloud the test set from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to our work is done for Gujarati in JoshiEtAl:2019.
Languages under Study ::: Yorùbá
is a language in the West Africa with over 50 million speakers. It is spoken among other languages in Nigeria, republic of Togo, Benin Republic, Ghana and Sierra Leon. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.
Standard Yorùbá has 25 letters without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels, (an, $ \underaccent{\dot{}}{e}$n, in, $ \underaccent{\dot{}}{o}$n, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics. There are three tones in Yorùbá namely low, mid and high which are represented as grave ($\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. Mid tone is usually left unmarked on vowels and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics, 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter "e" and "o" and the long variant of "s". Also, there are 19 single diacritic letters, 3 are marked with dots below (ẹ, ọ, ṣ) while the rest are either having the grave or acute accent. The four double diacritics are divided between the grave and the acute accent as well.
As noted in Asahiah2014, most of the Yorùbá texts found in websites or public domain repositories (i) either use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.
This happens as a result of many factors, but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is not an exception.
Languages under Study ::: Twi
is an Akan language of the Central Tano Branch of the Niger Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. It has about 9 million native speakers and about a total of 17–18 million Ghanaians have it as either first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectical variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and to a large extent Bono, another of the Akan languages. It is one of, if not the, easiest to learn to speak of the indigenous Ghanaian languages. The same is however not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but written without diacritics or accents. As a result, words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing. Besides, most of such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:
Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.
This sentence could be translated as
(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.
(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.
Another characteristic of Twi is the fact that a good number of stop words have the same written form as content words. For instance, “na” or “na” could be the words “and, then”, the phrase “and then” or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from text.
Finally, we want to point out that words can also be written with or without prefixes. An example is this same na and na which happen to be the same word with an omissible prefix across its multiple senses. For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle. This however depends on the author/speaker. For the word embeddings calculation, this implies that one would have different embeddings for the same word found in different contexts.
Data
We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where noisy has a different meaning depending on the language as it will be explained in the next subsections.
Data ::: Training Corpora
For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully-diacritized Yorùbá language websites and an online Bible. The largest source with clean data is the JW300 corpus. We also created our own small-sized corpus by web-crawling three Yorùbá language websites (Alàkwé, r Yorùbá and Èdè Yorùbá Rẹw in Table TABREF7), some Yoruba Tweets with full diacritics and also news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics which we use to introduce noise. By noisy corpus, we refer to texts with incorrect diacritics (e.g in BBC Yorùbá), removal of tonal symbols (e.g in VON Yorùbá) and removal of all diacritics/under-dots (e.g some articles in Yorùbá Wikipedia). Furthermore, we got two manually typed fully-diacritized Yorùbá literature (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè) both written by Daniel Orowole Olorunfemi Fagunwa a popular Yorùbá author. The number of tokens available from each source, the link to the original source and the quality of the data is summarised in Table TABREF7.
The gathering of clean data in Twi is more difficult. We use as the base text as it has been shown that the Bible is the most available resource for low and endangered languages BIBREF18. This is the cleanest of all the text we could obtain. In addition, we use the available (and small) Wikipedia dumps which are quite noisy, i.e. Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (formulated as L2 speakers would speak Twi as compared to native speakers). Lastly, we added text crawled from jw and the JW300 Twi corpus. Notice that the Bible text, is mainly written in the Asante dialect whilst the last, Jehovah's Witnesses, was written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings as the spelling of most words differs especially at the end of the words due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuampem. In this case, the noise comes also from spelling errors and the uncommon addition of diacritics which are not standardised on certain vowels. Figures for Twi corpora are summarised in the bottom block of Table TABREF7.
Data ::: Evaluation Test Sets ::: Yorùbá.
One of the contribution of this work is the introduction of the wordsim-353 word pairs dataset for Yorùbá. All the 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translation in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations but also other content words) translations are transliterations (e.g. Doctor is dókítà and cucumber kùkúmbà.). 98 words were translated by short phrases instead of single words. This mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtwé —literally meaning typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, and ecology translates to ìm nípa àyíká while psychology translates to ìm nípa dá). Finally, 6 terms have the same form in English and Yorùbá therefore they are retained like that in the dataset (e.g. Jazz, Rock and acronyms such as FBI or OPEC).
We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with "O". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition.
Data ::: Evaluation Test Sets ::: Twi
Just like Yorùbá, the wordsim-353 word pairs dataset was translated for Twi. Out of the 353 word pairs, 274 were used in this case. The remaining 79 pairs contain words that translate into longer phrases.
The number of words that can be translated by a single token is higher than for Yorùbá. Within the 274 pairs, there are 351 unique English words which translated to 310 unique Twi words. 298 of the 310 Twi words are single word translations, 4 transliterations and 16 are used as is.
Even if JoubarneInkpen:2011 showed indications that semantic similarity has a high correlation across languages, different nuances between words are captured differently by languages. For instance, both money and currency in English translate into sika in Twi (and other 32 English words which translate to 14 Twi words belong to this category) and drink in English is translated as Nsa or nom depending on the part of speech (noun for the former, verb for the latter). 17 English words fall into this category. In translating these, we picked the translation that best suits the context (other word in the pair). In two cases, the correlation is not fulfilled at all: soap–opera and star–movies are not related in the Twi language and the score has been modified accordingly.
Semantic Representations
In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task.
Semantic Representations ::: Word Embeddings Architectures
Modeling sub-word units has recently become a popular way to address out-of-vocabulary word problem in NLP especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, character $n$-grams, or heuristically learned Byte Pair Encodings (BPE) which work very well in practice especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both of them are extensions of the Word2Vec architectures BIBREF0 that model sub-word units, character embeddings in the case of CWE and character $n$-grams for fastText.
CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address the issues of character ambiguities and non-compositional words especially in the Chinese language. A word or character embedding is learned in CWE using either CBOW or skipgram architectures, and then the final word embedding is computed by adding the character embeddings to the word itself:
where $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character $c_k$ in $x_j$.
Similarly, in 2017 fastText was introduced as an extension to skipgram in order to take into account morphology and improve the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams:
where $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.
cwe also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings where each character has different embeddings depending on the position it appears in a word, i.e., beginning, middle or end (ii) cluster-based character embeddings where a character can have $K$ different cluster embeddings, and (iii) position-based cluster embeddings (CWE-LP) where for each position $K$ different embeddings are learned. We use the latter in our experiments with CWE but no positional embeddings are used with fastText.
Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks.
The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.
Training contextual embeddings requires huge amounts of text, which is not available for low-resourced languages such as Yorùbá and Twi. However, Google provided pre-trained multilingual embeddings for 102 languages, including Yorùbá (but not Twi).
Semantic Representations ::: Experiments ::: FastText Training and Evaluation
As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.
Facebook released pre-trained fastText word embeddings for 294 languages trained on Wikipedia BIBREF2 (F1 in tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available, but only the embeddings trained on Wikipedia are available for Twi. We consider these embeddings the result of training on what we call massively-extracted corpora. Notice that the training settings for the two releases are not exactly the same, so differences in performance may come from corpus size and quality as well as from the underlying model. The 294-language version is trained using skipgram, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 5 negatives. The 157-language version is trained using CBOW with position-weights, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 10 negatives.
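These released models can be loaded and inspected with the fasttext Python bindings; a small sketch (the file names follow the official release conventions and the local paths are placeholders):

```python
import fasttext

# Wikipedia-only model for Twi (294-language release) and the
# Wikipedia + Common Crawl model for Yorùbá (157-language release).
twi_model = fasttext.load_model("wiki.tw.bin")
yor_model = fasttext.load_model("cc.yo.300.bin")

print(len(twi_model.get_words()))       # vocabulary size of the released model
print(len(yor_model.get_words()))
vec = yor_model.get_word_vector("owó")  # 300-dimensional vector
```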
We want to compare the performance of these embeddings with the equivalent models that can be obtained by training on the different sources verified by native speakers of Twi and Yorùbá, what we call curated corpora, described in Section SECREF4. For the comparison, we define 3 datasets according to the quality and quantity of textual data used for training: (i) Curated Small Dataset (clean), C1, about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi. The clean text for Twi is the Bible and for Yorùbá all texts marked under the C1 column in Table TABREF7. (ii) In the Curated Small Dataset (clean + noisy), C2, we add noise to the clean corpus (Wikipedia articles for Twi, and BBC Yorùbá news articles for Yorùbá). This increases the number of training tokens for Twi to 742 k tokens and for Yorùbá to about 2 million tokens. (iii) The Curated Large Dataset, C3, consists of all available texts we were able to crawl and source, either clean or noisy. The addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.
We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, a context window size of 5, 10 negatives and $n$-grams ranging from 3 to 6 characters, similar to the pre-trained models for both languages. Best results are obtained with a minimum word count of 3.
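A minimal sketch of this training setup with the fasttext Python bindings; the corpus file name is a placeholder for one preprocessed text file per configuration (C1, C2 or C3):

```python
import fasttext

# Skipgram with the settings described above: 300-d vectors, window of 5,
# 10 negative samples, character n-grams of length 3-6, minimum word count 3.
model = fasttext.train_unsupervised(
    "curated_twi_C2.txt",
    model="skipgram",
    dim=300,
    ws=5,
    neg=10,
    minn=3,
    maxn=6,
    minCount=3,
)
model.save_model("fasttext_twi_C2.bin")
```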
Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that pre-trained embeddings on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\rho $=$0.14$) and their performance is even lower when Common Crawl is also considered ($\rho $=$0.07$ for Yorùbá). An important reason for the low performance is the limited vocabulary. The pre-trained Twi model has only 935 tokens. For Yorùbá, things are apparently better with more than 150 k tokens when both Wikipedia and Common Crawl are used but correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages mostly English, French and Arabic.
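The evaluation itself reduces to correlating cosine similarities with the human scores; a sketch of the procedure, assuming the translated test set is stored as tab-separated word pairs with a score:

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_wordsim(model, pairs_path):
    """Spearman correlation between human judgements and cosine similarities."""
    human, system = [], []
    with open(pairs_path, encoding="utf-8") as f:
        for line in f:
            w1, w2, score = line.rstrip("\n").split("\t")
            v1, v2 = model.get_word_vector(w1), model.get_word_vector(w2)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            human.append(float(score))
            system.append(cos)
    return spearmanr(human, system).correlation

# rho = evaluate_wordsim(model, "wordsim353_twi.tsv")
```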
If we focus only on Wikipedia, we see that many of the Yorùbá texts are written without diacritics, and that the Twi texts often mix dialects and include English sentences.
The Spearman $\rho $ correlation for fastText models on the curated small dataset (clean), C1, improves over the baselines by a large margin ($\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be explained just by the larger vocabulary in Twi, but in the case of Yorùbá the enhancement is there with almost half of the vocabulary size. We found that adding some noisy texts (the C2 dataset) slightly improves the correlation for Twi but not for Yorùbá. Twi benefits from the Wikipedia articles because their inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. However, for Yorùbá, noisy texts often ignore diacritics or tonal marks, which increases the vocabulary size at the cost of an increase in ambiguity too. As a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but the results obtained with the C3 dataset show that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and has full diacritics. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\Delta \rho =+0.25$ or, equivalently, by a relative increase in $\rho $ of 170% (Twi) and 180% (Yorùbá).
Semantic Representations ::: Experiments ::: CWE Training and Evaluation
The huge ambiguity in written Twi motivates the exploration of different approaches to word embedding estimation. In this work, we compare the standard fastText way of including sub-word information with the character-enhanced approach using position-based clustered embeddings (CWE-LP, as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written form.
The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings, an embedding size of 300 dimensions, a context window size of 5, and 5 negative samples. In this case, the best performance is obtained with a minimum word count of 1, which increases the effective vocabulary used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.
We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.
According to the results, CWE is especially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text does not have ambiguity issues at the character level, so the $n$-gram approximation works better when enough clean data is used ($\rho ^{C3}_{CWE}=0.354$ vs. $\rho ^{C3}_{fastText}=0.391$), but it does not when too much noisy data (without diacritics, where character-level information would be needed) is used ($\rho ^{C2}_{CWE}=0.345$ vs. $\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data, and the best correlation with human judgements is reached with CWE embeddings ($\rho ^{C2}_{CWE}=0.437$ vs. $\rho ^{C2}_{fastText}=0.388$).
Semantic Representations ::: Experiments ::: BERT Evaluation on NER Task
In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning them on a named entity recognition task over the Yorùbá Global Voices corpus.
One of the major advantages of pre-trained BERT embeddings is that fine-tuning the model on downstream NLP tasks is typically computationally inexpensive, often requiring only a few epochs. However, the data the embeddings are trained on has the same limitations as that used for the massive word embeddings. Fine-tuning involves replacing the output layer of BERT used for the masked-LM objective with a task-dependent linear classifier (or any other deep learning architecture) and training all the model parameters end-to-end. For the NER task, we obtain the token-level representations from BERT and train a linear classifier for sequence tagging.
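As a rough sketch of this setup with the HuggingFace transformers library (the BIO label set shown is an assumption derived from our four entity types, data loading is omitted, and the hyperparameters are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-uncased", num_labels=len(labels)
)

# One training step: BERT token representations feed a linear classification
# head, and all parameters are updated end-to-end.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer(["Global Voices Yorùbá"], return_tensors="pt")
gold = torch.zeros_like(batch["input_ids"])   # placeholder label ids, one per sub-token
outputs = model(**batch, labels=gold)
outputs.loss.backward()
optimizer.step()
```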
Similar to our observations with non-contextualized embeddings, we find out that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we do the same experiment in English, F1 is 58.1 after 4 epochs.
This shows that pre-trained embeddings by themselves do not perform well in downstream tasks for low-resource languages. To address this problem for Yorùbá, we fine-tune the BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only a Yorùbá vocabulary. In both cases diacritics are ignored, to be consistent with the training of the base model.
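A condensed sketch of this continued masked-LM training with the transformers library; the Yorùbá-only vocabulary variant additionally requires building a new tokenizer, which is omitted here, and the file paths and hyperparameters are placeholders:

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-uncased")

dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="yoruba_corpus_no_diacritics.txt",
                                block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-yoruba-mlm",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("bert-yoruba-mlm")
```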
As expected, fine-tuning the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models achieve a better performance on the NER task, with an average F1 score of over 47% (see Table TABREF26 for the comparison). The fine-tuned BERT model with only the Yorùbá vocabulary further improves the F1 score obtained with the multilingual vocabulary by more than 4%. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds true for high-resource languages like German and French BIBREF23.
Summary and Discussion
In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for these languages. We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim353) and of the contextual embeddings within a NER task. Corpora, embeddings and test sets are available on GitHub.
In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to their performance for high-resourced ones. This is due to the quantity but also to the quality of the data used. While the Spearman $\rho $ correlations for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\rho _{WP}$=$0.67$ and $\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\rho _{WP}$=$0.14$ and $\rho _{WP+CC}$=$0.07$. For Twi, only embeddings trained on Wikipedia are available ($\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models for the characteristics of each language, we deliver embeddings with correlations of $\rho $=$0.39$ (Yorùbá) and $\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models, but representing an improvement of over $170\%$ on the task.
In a low-resourced setting, data quality, preprocessing and model selection are more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization in our case) should be taken into account in order to choose the relevant data and model. As an example, Twi word embeddings are significantly better when trained on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes single-character information into account (CWE-LP) instead of $n$-gram information (fastText).
Finally, we want to note that, even within a corpus, the quality of the data might depend on the language. Wikipedia is usually regarded as a high-quality, freely available multilingual corpus compared to noisier data such as Common Crawl. However, for the two languages under study, Wikipedia turned out to have too much noise: interference from other languages, text clearly written by non-native speakers, lack of diacritics and a mixture of dialects. The JW300 corpus, on the other hand, has been rated as high-quality by our native Yorùbá speakers, but as noisy by our native Twi speakers. In both cases, the experiments confirm these assessments.
Acknowledgements
The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs, and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with a clean Yorùbá corpus.
The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors. | high-quality |
10091275f777e0c2890c3ac0fd0a7d8e266b57cf | 10091275f777e0c2890c3ac0fd0a7d8e266b57cf_0 | Q: How much is model improved by massive data and how much by quality?
Unanswerable
cbf1137912a47262314c94d36ced3232d5fa1926 | cbf1137912a47262314c94d36ced3232d5fa1926_0 | Q: What two architectures are used?
Text: Introduction
In recent years, word embeddings BIBREF0, BIBREF1, BIBREF2 have been proven to be very useful for training downstream natural language processing (NLP) tasks. Moreover, contextualized embeddings BIBREF3, BIBREF4 have been shown to further improve the performance of NLP tasks such as named entity recognition, question answering, or text classification when used as word features because they are able to resolve ambiguities of word representations when they appear in different contexts. Different deep learning architectures such as multilingual BERT BIBREF4, LASER BIBREF5 and XLM BIBREF6 have proved successful in the multilingual setting. All these architectures learn the semantic representations from unannotated text, making them cheap given the availability of texts in online multilingual resources such as Wikipedia. However, the evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. This is the best-case scenario, languages with tones of data for training that generate high-quality models.
For low-resourced languages, the evaluation is more difficult and therefore normally ignored simply because of the lack of resources. In these cases, training data is scarce, and the assumption that the capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced one does not need to be true. In this work, we focus on two African languages, Yorùbá and Twi, and carry out several experiments to verify this claim. Just by a simple inspection of the word embeddings trained on Wikipedia by fastText, we see a high number of non-Yorùbá or non-Twi words in the vocabularies. For Twi, the vocabulary has only 935 words, and for Yorùbá we estimate that 135 k out of the 150 k words belong to other languages such as English, French and Arabic.
In order to improve the semantic representations for these languages, we collect online texts and study the influence of the quality and quantity of the data in the final models. We also examine the most appropriate architecture depending on the characteristics of each language. Finally, we translate test sets and annotate corpora to evaluate the performance of both our models together with fastText and BERT pre-trained embeddings which could not be evaluated otherwise for Yorùbá and Twi. The evaluation is carried out in a word similarity and relatedness task using the wordsim-353 test set, and in a named entity recognition (NER) task where embeddings play a crucial role. Of course, the evaluation of the models in only two tasks is not exhaustive but it is an indication of the quality we can obtain for these two low-resourced languages as compared to others such as English where these evaluations are already available.
The rest of the paper is organized as follows. Related works are reviewed in Section SECREF2 The two languages under study are described in Section SECREF3. We introduce the corpora and test sets in Section SECREF4. The fifth section explores the different training architectures we consider, and the experiments that are carried out. Finally, discussion and concluding remarks are given in Section SECREF6
Related Work
The large amount of freely available text in the internet for multiple languages is facilitating the massive and automatic creation of multilingual resources. The resource par excellence is Wikipedia, an online encyclopedia currently available in 307 languages. Other initiatives such as Common Crawl or the Jehovah’s Witnesses site are also repositories for multilingual data, usually assumed to be noisier than Wikipedia. Word and contextual embeddings have been pre-trained on these data, so that the resources are nowadays at hand for more than 100 languages. Some examples include fastText word embeddings BIBREF2, BIBREF7, MUSE embeddings BIBREF8, BERT multilingual embeddings BIBREF4 and LASER sentence embeddings BIBREF5. In all cases, embeddings are trained either simultaneously for multiple languages, joining high- and low-resource data, or following the same methodology.
On the other hand, different approaches try to specifically design architectures to learn embeddings in a low-resourced setting. ChaudharyEtAl:2018 follow a transfer learning approach that uses phonemes, lemmas and morphological tags to transfer the knowledge from related high-resource language into the low-resource one. jiangEtal:2018 apply Positive-Unlabeled Learning for word embedding calculations, assuming that unobserved pairs of words in a corpus also convey information, and this is specially important for small corpora.
In order to assess the quality of word embeddings, word similarity and relatedness tasks are usually used. wordsim-353 BIBREF9 is a collection of 353 pairs annotated with semantic similarity scores in a scale from 0 to 10. Even the problems detected in this dataset BIBREF10, it is widely used by the community. The test set was originally created for English, but the need for comparison with other languages has motivated several translations/adaptations. In hassanMihalcea:2009 the test was translated manually into Spanish, Romanian and Arabic and the scores were adapted to reflect similarities in the new language. The reported correlation between the English scores and the Spanish ones is 0.86. Later, JoubarneInkpen:2011 show indications that the measures of similarity highly correlate across languages. leviantReichart:2015 translated also wordsim-353 into German, Italian and Russian and used crowdsourcing to score the pairs. Finally, jiangEtal:2018 translated with Google Cloud the test set from English into Czech, Danish and Dutch. In our work, native speakers translate wordsim-353 into Yorùbá and Twi, and similarity scores are kept unless the discrepancy with English is big (see Section SECREF11 for details). A similar approach to our work is done for Gujarati in JoshiEtAl:2019.
Languages under Study ::: Yorùbá
is a language in the West Africa with over 50 million speakers. It is spoken among other languages in Nigeria, republic of Togo, Benin Republic, Ghana and Sierra Leon. It is also a language of Òrìsà in Cuba, Brazil, and some Caribbean countries. It is one of the three major languages in Nigeria and it is regarded as the third most spoken native African language. There are different dialects of Yorùbá in Nigeria BIBREF11, BIBREF12, BIBREF13. However, in this paper our focus is the standard Yorùbá based upon a report from the 1974 Joint Consultative Committee on Education BIBREF14.
Standard Yorùbá has 25 letters without the Latin characters c, q, v, x and z. There are 18 consonants (b, d, f, g, gb, j[dz], k, l, m, n, p[kp], r, s, ṣ, t, w y[j]), 7 oral vowels (a, e, ẹ, i, o, ọ, u), five nasal vowels, (an, $ \underaccent{\dot{}}{e}$n, in, $ \underaccent{\dot{}}{o}$n, un) and syllabic nasals (m̀, ḿ, ǹ, ń). Yorùbá is a tone language which makes heavy use of lexical tones which are indicated by the use of diacritics. There are three tones in Yorùbá namely low, mid and high which are represented as grave ($\setminus $), macron ($-$) and acute ($/$) symbols respectively. These tones are applied on vowels and syllabic nasals. Mid tone is usually left unmarked on vowels and every initial or first vowel in a word cannot have a high tone. It is important to note that tone information is needed for correct pronunciation and to have the meaning of a word BIBREF15, BIBREF12, BIBREF14. For example, owó (money), ọw (broom), òwò (business), w (honour), ọw (hand), and w (group) are different words with different dots and diacritic combinations. According to Asahiah2014, Standard Yorùbá uses 4 diacritics, 3 are for marking tones while the fourth which is the dot below is used to indicate the open phonetic variants of letter "e" and "o" and the long variant of "s". Also, there are 19 single diacritic letters, 3 are marked with dots below (ẹ, ọ, ṣ) while the rest are either having the grave or acute accent. The four double diacritics are divided between the grave and the acute accent as well.
As noted in Asahiah2014, most of the Yorùbá texts found in websites or public domain repositories (i) either use the correct Yorùbá orthography or (ii) replace diacritized characters with un-diacritized ones.
This happens as a result of many factors, but most especially to the unavailability of appropriate input devices for the accurate application of the diacritical marks BIBREF11. This has led to research on restoration models for diacritics BIBREF16, but the problem is not well solved and we find that most Yorùbá text in the public domain today is not well diacritized. Wikipedia is not an exception.
Languages under Study ::: Twi
is an Akan language of the Central Tano Branch of the Niger Congo family of languages. It is the most widely spoken of the about 80 indigenous languages in Ghana BIBREF17. It has about 9 million native speakers and about a total of 17–18 million Ghanaians have it as either first or second language. There are two mutually intelligible dialects, Asante and Akuapem, and sub-dialectical variants which are mostly unknown to and unnoticed by non-native speakers. It is also mutually intelligible with Fante and to a large extent Bono, another of the Akan languages. It is one of, if not the, easiest to learn to speak of the indigenous Ghanaian languages. The same is however not true when it comes to reading and especially writing. This is due to a number of easily overlooked complexities in the structure of the language. First of all, similarly to Yorùbá, Twi is a tonal language but written without diacritics or accents. As a result, words which are pronounced differently and unambiguous in speech tend to be ambiguous in writing. Besides, most of such words fit interchangeably in the same context and some of them can have more than two meanings. A simple example is:
Me papa aba nti na me ne wo redi no yie no. S wo ara wo nim s me papa ba a, me suban fofor adi.
This sentence could be translated as
(i) I'm only treating you nicely because I'm in a good mood. You already know I'm a completely different person when I'm in a good mood.
(ii) I'm only treating you nicely because my dad is around. You already know I'm a completely different person when my dad comes around.
Another characteristic of Twi is that a good number of stop words have the same written form as content words. For instance, “na” or “ɛna” could be the word “and”/“then”, the phrase “and then”, or the word “mother”. This kind of ambiguity has consequences in several natural language applications where stop words are removed from the text.
Finally, we want to point out that words can also be written with or without prefixes. An example is this same “na” and “ɛna”, which are the same word with an omissible prefix across its multiple senses. For some words, the prefix characters are mostly used when the word begins a sentence and omitted in the middle; this, however, depends on the author or speaker. For word embedding training, this implies that one obtains different embeddings for the same word found in different contexts.
Data
We collect clean and noisy corpora for Yorùbá and Twi in order to quantify the effect of noise on the quality of the embeddings, where “noisy” means something different for each language, as explained in the following subsections.
Data ::: Training Corpora
For Yorùbá, we use several corpora collected by the Niger-Volta Language Technologies Institute with texts from different sources, including the Lagos-NWU conversational speech corpus, fully diacritized Yorùbá-language websites and an online Bible. The largest source of clean data is the JW300 corpus. We also created our own small corpus by web-crawling three Yorùbá-language websites (Alàkọ̀wé, Ọ̀rọ̀ Yorùbá and Èdè Yorùbá Rẹwà in Table TABREF7), a set of Yorùbá tweets with full diacritics, and news corpora (BBC Yorùbá and VON Yorùbá) with poor diacritics which we use to introduce noise. By noisy corpus we refer to texts with incorrect diacritics (e.g. in BBC Yorùbá), with tonal symbols removed (e.g. in VON Yorùbá), or with all diacritics and under-dots removed (e.g. some articles in the Yorùbá Wikipedia). Furthermore, we obtained two manually typed, fully diacritized Yorùbá novels (Ìrìnkèrindò nínú igbó elégbèje and Igbó Olódùmarè), both written by the popular Yorùbá author Daniel Orowole Olorunfemi Fagunwa. The number of tokens available from each source, the link to the original source and the quality of the data are summarised in Table TABREF7.
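To illustrate what the removal of diacritics and under-dots does to the data, the sketch below strips all combining marks from fully diacritized Yorùbá text via Unicode decomposition; it is an illustrative recreation of the kind of noise found in the noisy sources, not part of the original corpus-building pipeline.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Drop tonal marks and under-dots from Yorùbá text.

    NFD decomposition splits a letter such as 'ọ́' into the base 'o'
    plus combining marks, which are then removed.
    """
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

print(strip_diacritics("owó ọwọ́ òwò"))  # -> "owo owo owo": distinct words collapse
```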
The gathering of clean data for Twi is more difficult. We use the Bible as the base text, as it has been shown that the Bible is the most available resource for low-resource and endangered languages BIBREF18; it is the cleanest of all the text we could obtain. In addition, we use the available (and small) Wikipedia dumps, which are quite noisy: the Twi Wikipedia contains a good number of English words, spelling errors and Twi sentences formulated in a non-natural way (as L2 speakers would speak Twi, rather than native speakers). Lastly, we added text crawled from jw.org and the JW300 Twi corpus. Notice that the Bible text is mainly written in the Asante dialect, whilst the Jehovah's Witnesses material is written mainly in the Akuapem dialect. The Wikipedia text is a mixture of the two dialects. This introduces a lot of noise into the embeddings, as the spelling of many words differs, especially at the end of the word, due to the mixture of dialects. The JW300 Twi corpus also contains mixed dialects but is mainly Akuapem; in this case, the noise also comes from spelling errors and the uncommon addition of non-standardised diacritics on certain vowels. Figures for the Twi corpora are summarised in the bottom block of Table TABREF7.
Data ::: Evaluation Test Sets ::: Yorùbá.
One of the contributions of this work is the introduction of the wordsim-353 word-pairs dataset for Yorùbá. All 353 word pairs were translated from English to Yorùbá by 3 native speakers. The set is composed of 446 unique English words, 348 of which can be expressed as one-word translations in Yorùbá (e.g. book translates to ìwé). In 61 cases (most countries and locations, but also other content words) the translations are transliterations (e.g. doctor is dókítà and cucumber is kùkúmbà). 98 words were translated by short phrases instead of single words; this mostly affects words from science and technology (e.g. keyboard translates to pátákó ìtẹ̀wé —literally, typing board—, laboratory translates to ìyàrá ìṣèwádìí —research room—, ecology translates to ìmọ̀ nípa àyíká and psychology to ìmọ̀ nípa ẹ̀dá). Finally, 6 terms have the same form in English and Yorùbá and are therefore retained as such in the dataset (e.g. Jazz, Rock and acronyms such as FBI or OPEC).
We also annotate the Global Voices Yorùbá corpus to test the performance of our trained Yorùbá BERT embeddings on the named entity recognition task. The corpus consists of 25 k tokens which we annotate with four named entity types: DATE, location (LOC), organization (ORG) and personal names (PER). Any other token that does not belong to the four named entities is tagged with "O". The dataset is further split into training (70%), development (10%) and test (20%) partitions. Table TABREF12 shows the number of named entities per type and partition.
Data ::: Evaluation Test Sets ::: Twi
Just like Yorùbá, the wordsim-353 word pairs dataset was translated for Twi. Out of the 353 word pairs, 274 were used in this case. The remaining 79 pairs contain words that translate into longer phrases.
The number of words that can be translated by a single token is higher than for Yorùbá. Within the 274 pairs there are 351 unique English words, which translate into 310 unique Twi words. 298 of the 310 Twi words are single-word translations, 4 are transliterations and 16 are used as is.
Even though JoubarneInkpen:2011 showed indications that semantic similarity correlates highly across languages, different nuances between words are captured differently by different languages. For instance, both money and currency in English translate into sika in Twi (another 32 English words translating into 14 Twi words fall into this category), and drink in English is translated as nsa or nom depending on the part of speech (noun for the former, verb for the latter); 17 English words fall into this category. In translating these, we picked the translation that best suits the context (the other word in the pair). In two cases the correlation does not hold at all: soap–opera and star–movies are not related in Twi, and the scores have been modified accordingly.
Semantic Representations
In this section, we describe the architectures used for learning word embeddings for the Twi and Yorùbá languages. Also, we discuss the quality of the embeddings as measured by the correlation with human judgements on the translated wordSim-353 test sets and by the F1 score in a NER task.
Semantic Representations ::: Word Embeddings Architectures
Modeling sub-word units has recently become a popular way to address the out-of-vocabulary problem in NLP, especially in word representation learning BIBREF19, BIBREF2, BIBREF4. A sub-word unit can be a character, a character $n$-gram, or a heuristically learned Byte Pair Encoding (BPE), all of which work very well in practice, especially for morphologically rich languages. Here, we consider two word embedding models that make use of character-level information together with word information: Character Word Embedding (CWE) BIBREF20 and fastText BIBREF2. Both are extensions of the Word2Vec architectures BIBREF0 that model sub-word units: character embeddings in the case of CWE and character $n$-grams for fastText.
CWE was introduced in 2015 to model the embeddings of characters jointly with words in order to address character ambiguities and non-compositional words, especially in Chinese. A word or character embedding is learned in CWE using either the CBOW or the skipgram architecture, and the final word embedding is computed by adding the character embeddings to the word itself:
$\hat{w}_j = w_j + \frac{1}{N_j} \sum _{k=1}^{N_j} c_k$
where $w_j$ is the word embedding of $x_j$, $N_j$ is the number of characters in $x_j$, and $c_k$ is the embedding of the $k$-th character in $x_j$.
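A minimal sketch of this composition, assuming word and character vectors are already available as look-up tables (the variable names are illustrative, not taken from the CWE release):

```python
import numpy as np

def cwe_word_vector(word, word_vectors, char_vectors):
    """CWE-style representation: word vector plus the mean of its character vectors."""
    w = word_vectors[word]
    chars = [char_vectors[c] for c in word if c in char_vectors]
    return w if not chars else w + np.mean(chars, axis=0)
```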
Similarly, fastText was introduced in 2017 as an extension of skipgram that takes morphology into account and improves the representation of rare words. In this case the embedding of a word also includes the embeddings of its character $n$-grams:
$\hat{w}_j = w_j + \sum _{k=1}^{G_j} g_k$
where $w_j$ is the word embedding of $x_j$, $G_j$ is the number of character $n$-grams in $x_j$ and $g_k$ is the embedding of the $k$-th $n$-gram.
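For illustration, the character $n$-grams that fastText sums over can be enumerated as below; fastText itself adds the boundary markers '<' and '>' and hashes the $n$-grams into buckets, a step this sketch omits.

```python
def char_ngrams(word, min_n=3, max_n=6):
    """Enumerate the character n-grams fastText would consider for a word."""
    padded = f"<{word}>"
    return [padded[i:i + n]
            for n in range(min_n, max_n + 1)
            for i in range(len(padded) - n + 1)]

print(char_ngrams("owó", min_n=3, max_n=4))
# ['<ow', 'owó', 'wó>', '<owó', 'owó>']
```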
The CWE authors also proposed three alternatives to learn multiple embeddings per character and resolve ambiguities: (i) position-based character embeddings, where each character has different embeddings depending on the position at which it appears in a word (beginning, middle or end); (ii) cluster-based character embeddings, where a character can have $K$ different cluster embeddings; and (iii) position-based cluster embeddings (CWE-LP), where $K$ different embeddings are learned for each position. We use the latter in our experiments with CWE; no positional embeddings are used with fastText.
Finally, we consider a contextualized embedding architecture, BERT BIBREF4. BERT is a masked language model based on the highly efficient and parallelizable Transformer architecture BIBREF21 known to produce very rich contextualized representations for downstream NLP tasks.
The architecture is trained by jointly conditioning on both left and right contexts in all the transformer layers using two unsupervised objectives: Masked LM and Next-sentence prediction. The representation of a word is therefore learned according to the context it is found in.
Training contextual embeddings requires huge amounts of text, which are not available for low-resourced languages such as Yorùbá and Twi. However, Google provides pre-trained multilingual BERT embeddings for 102 languages, including Yorùbá (but not Twi).
Semantic Representations ::: Experiments ::: FastText Training and Evaluation
As a first experiment, we compare the quality of fastText embeddings trained on (high-quality) curated data and (low-quality) massively extracted data for Twi and Yorùbá languages.
Facebook released fastText pre-trained word embeddings for 294 languages trained on Wikipedia BIBREF2 (F1 in the tables) and for 157 languages trained on Wikipedia and Common Crawl BIBREF7 (F2). For Yorùbá, both versions are available, but for Twi only the embeddings trained on Wikipedia are available. We consider these embeddings the result of training on what we call massively extracted corpora. Notice that the training settings of the two releases are not exactly the same, so differences in performance may come from corpus size and quality as well as from the underlying model. The 294-language version is trained using skipgram, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 5 negatives. The 157-language version is trained using CBOW with position-weights, in dimension 300, with character $n$-grams of length 5, a window of size 5 and 10 negatives.
We want to compare the performance of these embeddings with that of the equivalent models trained on the different sources verified by native speakers of Twi and Yorùbá, what we call curated corpora, described in Section SECREF4. For the comparison, we define 3 datasets according to the quality and quantity of the textual data used for training: (i) Curated Small Dataset (clean), C1: about 1.6 million tokens for Yorùbá and over 735 k tokens for Twi; the clean text for Twi is the Bible and for Yorùbá all texts marked under the C1 column in Table TABREF7. (ii) Curated Small Dataset (clean + noisy), C2: we add noise to the clean corpus (Wikipedia articles for Twi and BBC Yorùbá news articles for Yorùbá), which increases the number of training tokens to 742 k for Twi and about 2 million for Yorùbá. (iii) Curated Large Dataset, C3: all the texts we were able to crawl or source, either clean or noisy; the addition of JW300 BIBREF22 texts increases the vocabulary to more than 10 k tokens in both languages.
We train our fastText systems using a skipgram model with an embedding size of 300 dimensions, a context window of size 5, 10 negatives and $n$-grams ranging from 3 to 6 characters, similarly to the pre-trained models for both languages. The best results are obtained with a minimum word count of 3.
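A sketch of this training configuration using the gensim (≥ 4.0) FastText implementation; whether the original experiments used gensim or the fastText command-line tool is not stated here, and the corpus path and number of epochs are placeholders.

```python
from gensim.models import FastText
from gensim.models.word2vec import LineSentence

sentences = LineSentence("yoruba_c3.txt")   # one tokenized sentence per line (placeholder)

model = FastText(
    sentences,
    vector_size=300,    # embedding dimension
    window=5,           # context window size
    sg=1,               # skipgram
    negative=10,        # negative samples
    min_n=3, max_n=6,   # character n-gram lengths
    min_count=3,        # minimum word count
    epochs=10,          # assumption: not reported in the text
)
model.save("fasttext_yoruba_c3.model")
```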
Table TABREF15 shows the Spearman correlation between human judgements and cosine similarity scores on the wordSim-353 test set. Notice that the embeddings pre-trained on Wikipedia show a very low correlation with humans on the similarity task for both languages ($\rho $=$0.14$), and their performance is even lower when Common Crawl is also considered ($\rho $=$0.07$ for Yorùbá). An important reason for the low performance is the limited vocabulary: the pre-trained Twi model has only 935 tokens. For Yorùbá, things look better, with more than 150 k tokens when both Wikipedia and Common Crawl are used, but the correlation is even lower. An inspection of the pre-trained embeddings indicates that over 135 k words belong to other languages, mostly English, French and Arabic.
If we focus only on Wikipedia, we see that many texts are without diacritics in Yorùbá and often make use of mixed dialects and English sentences in Twi.
The Spearman $\rho $ correlation for fastText models trained on the curated small dataset (clean), C1, improves over the baselines by a large margin ($\rho =0.354$ for Twi and 0.322 for Yorùbá) even with a small dataset. The improvement could be explained by the larger vocabulary alone in Twi, but in the case of Yorùbá the gain is there with almost half the vocabulary size. We find that adding some noisy texts (the C2 dataset) slightly improves the correlation for Twi but not for Yorùbá. Twi benefits from the Wikipedia articles because their inclusion doubles the vocabulary and reduces the bias of the model towards religious texts. For Yorùbá, however, noisy texts often lack diacritics or tonal marks, which increases the vocabulary size at the cost of an increase in ambiguity; as a result, the correlation is slightly hurt. One would expect that training with more data would improve the quality of the embeddings, but the results obtained with the C3 dataset show that only high-quality data helps. The addition of JW300 boosts the vocabulary in both cases, but whereas for Twi the corpus mixes dialects and is noisy, for Yorùbá it is very clean and fully diacritized. Consequently, the best embeddings for Yorùbá are obtained when training with the C3 dataset, whereas for Twi, C2 is the best option. In both cases, the curated embeddings improve the correlation with human judgements on the similarity task by $\Delta \rho =+0.25$ or, equivalently, by a relative increase in $\rho $ of 170% (Twi) and 180% (Yorùbá).
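The correlation figures above can be reproduced with a short evaluation routine such as the one below, which assumes the translated word pairs are available as (word1, word2, human score) triples and simply skips out-of-vocabulary pairs; gensim's KeyedVectors.evaluate_word_pairs offers similar functionality.

```python
from scipy.stats import spearmanr

def wordsim_correlation(kv, pairs):
    """kv: gensim KeyedVectors; pairs: iterable of (w1, w2, human_score)."""
    human, predicted = [], []
    for w1, w2, score in pairs:
        if w1 in kv.key_to_index and w2 in kv.key_to_index:
            human.append(score)
            predicted.append(kv.similarity(w1, w2))   # cosine similarity
    rho, p_value = spearmanr(human, predicted)
    return rho, p_value, len(human)
```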
Semantic Representations ::: Experiments ::: CWE Training and Evaluation
The huge ambiguity in the written Twi language motivates the exploration of different approaches to word embedding estimations. In this work, we compare the standard fastText methodology to include sub-word information with the character-enhanced approach with position-based clustered embeddings (CWE-LP as introduced in Section SECREF17). With the latter, we expect to specifically address the ambiguity present in a language that does not translate the different oral tones on vowels into the written language.
The character-enhanced word embeddings are trained using a skipgram architecture with cluster-based embeddings and an embedding size of 300 dimensions, context window-size of 5, and 5 negative samples. In this case, the best performance is obtained with a minimum word count of 1, and that increases the effective vocabulary that is used for training the embeddings with respect to the fastText experiments reported in Table TABREF15.
We repeat the same experiments as with fastText and summarise them in Table TABREF16. If we compare the relative numbers for the three datasets (C1, C2 and C3) we observe the same trends as before: the performance of the embeddings in the similarity task improves with the vocabulary size when the training data can be considered clean, but the performance diminishes when the data is noisy.
According to the results, CWE is especially beneficial for Twi but not always for Yorùbá. Clean Yorùbá text does not have ambiguity issues at the character level, therefore the $n$-gram approximation works better when enough clean data is used ($\rho ^{C3}_{CWE}=0.354$ vs. $\rho ^{C3}_{fastText}=0.391$), but it does not when too much noisy data (no diacritics, so character-level information would be needed) is used ($\rho ^{C2}_{CWE}=0.345$ vs. $\rho ^{C2}_{fastText}=0.302$). For Twi, the character-level information reinforces the benefits of clean data, and the best correlation with human judgements is reached with the CWE embeddings ($\rho ^{C2}_{CWE}=0.437$ vs. $\rho ^{C2}_{fastText}=0.388$).
Semantic Representations ::: Experiments ::: BERT Evaluation on NER Task
In order to go beyond the similarity task using static word vectors, we also investigate the quality of the multilingual BERT embeddings by fine-tuning them on a named entity recognition task over the Yorùbá Global Voices corpus.
One of the major advantages of pre-trained BERT embeddings is that fine-tuning the model on downstream NLP tasks is typically computationally inexpensive, often requiring only a few epochs. However, the data the embeddings are trained on has the same limitations as that used for massive word embeddings. Fine-tuning involves replacing the last layer of BERT, used for optimising the masked LM, with a task-dependent linear classifier or any other deep learning architecture, and then training all the model parameters end-to-end. For the NER task, we obtain the token-level representations from BERT and train a linear classifier for sequence tagging.
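A sketch of this setup with the HuggingFace transformers library is shown below. The BIO label set over the four entity types plus "O", the toy sentence and its labels are all illustrative assumptions; the actual training loop over the Global Voices corpus is omitted.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

labels = ["O", "B-DATE", "I-DATE", "B-LOC", "I-LOC",
          "B-ORG", "I-ORG", "B-PER", "I-PER"]
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-uncased", num_labels=len(labels))

words = ["Lagos", "wa", "ni", "Naijiria"]                  # toy example
word_labels = [labels.index("B-LOC"), 0, 0, labels.index("B-LOC")]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level labels with sub-word tokens; -100 is ignored by the loss.
aligned = [-100 if i is None else word_labels[i] for i in enc.word_ids()]
out = model(**enc, labels=torch.tensor([aligned]))
out.loss.backward()                                        # one fine-tuning step
```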
Similarly to our observations with non-contextualized embeddings, we find that fine-tuning the pre-trained multilingual-uncased BERT for 4 epochs on the NER task gives an F1 score of 0. If we run the same experiment in English, F1 is 58.1 after 4 epochs.
This shows how pre-trained embeddings by themselves do not perform well in downstream tasks on low-resource languages. To address this problem for Yorùbá, we fine-tune the BERT representations on the Yorùbá corpus in two ways: (i) using the multilingual vocabulary, and (ii) using only a Yorùbá vocabulary. In both cases diacritics are ignored, to be consistent with the training of the base model.
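The first configuration (multilingual vocabulary) amounts to continued masked-LM training on Yorùbá text before the NER fine-tuning. A sketch using the transformers Trainer and the (now legacy) LineByLineTextDataset helper follows; file paths and hyperparameters are placeholders.

```python
from transformers import (BertTokenizerFast, BertForMaskedLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling,
                          LineByLineTextDataset)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-uncased")

dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="yoruba_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-yoruba-mlm",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=dataset).train()
```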
As expected, fine-tuning the pre-trained BERT on the Yorùbá corpus in the two configurations generates better representations than the base model. These models achieve a better performance on the NER task, with an average F1 score of over 47% (see Table TABREF26 for the comparison). The fine-tuned BERT model with only the Yorùbá vocabulary further improves the F1 score obtained with the multilingual vocabulary by more than 4%. Although we do not have enough data to train BERT from scratch, we observe that fine-tuning BERT on a limited amount of monolingual data of a low-resource language helps to improve the quality of the embeddings. The same observation holds for high-resource languages like German and French BIBREF23.
Summary and Discussion
In this paper, we present curated word and contextual embeddings for Yorùbá and Twi. For this purpose, we gather and select corpora and study the most appropriate techniques for these languages. We also create test sets for the evaluation of the word embeddings within a word similarity task (wordsim353) and of the contextual embeddings within an NER task. Corpora, embeddings and test sets are available on GitHub.
In our analysis, we show how massively generated embeddings perform poorly for low-resourced languages as compared to their performance for high-resourced ones. This is due both to the quantity and to the quality of the data used. While the Spearman $\rho $ correlations for English obtained with fastText embeddings trained on Wikipedia (WP) and Common Crawl (CC) are $\rho _{WP}$=$0.67$ and $\rho _{WP+CC}$=$0.78$, the equivalent ones for Yorùbá are $\rho _{WP}$=$0.14$ and $\rho _{WP+CC}$=$0.07$. For Twi, only embeddings trained on Wikipedia are available ($\rho _{WP}$=$0.14$). By carefully gathering high-quality data and optimising the models to the characteristics of each language, we deliver embeddings with correlations of $\rho $=$0.39$ (Yorùbá) and $\rho $=$0.44$ (Twi) on the same test set, still far from the high-resourced models but representing an improvement of over $170\%$ on the task.
In a low-resourced setting, data quality, processing and model selection are more critical than in a high-resourced scenario. We show how the characteristics of a language (such as diacritization, in our case) should be taken into account when choosing the relevant data and model. As an example, Twi word embeddings are significantly better when trained on 742 k selected tokens than on 16 million noisy tokens, and when using a model that takes single-character information into account (CWE-LP) instead of $n$-gram information (fastText).
Finally, we want to note that, even within a single corpus, data quality may depend on the language. Wikipedia is usually considered a high-quality, freely available multilingual corpus compared to noisier data such as Common Crawl. However, for the two languages under study, Wikipedia turned out to contain too much noise: interference from other languages, text clearly written by non-native speakers, missing diacritics and a mixture of dialects. The JW300 corpus, on the other hand, was rated as high quality by our native Yorùbá speakers but as noisy by our native Twi speakers. In both cases, the experiments confirm these assessments.
Acknowledgements
The authors thank Dr. Clement Odoje of the Department of Linguistics and African Languages, University of Ibadan, Nigeria, and Olóyè Gbémisóyè Àrdèó for helping us with the Yorùbá translation of the WordSim-353 word pairs, and Dr. Felix Y. Adu-Gyamfi and Ps. Isaac Sarfo for helping with the Twi translation. We also thank the members of the Niger-Volta Language Technologies Institute for providing us with the clean Yorùbá corpus.
The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee). Responsibility for the content of this publication is with the authors. | fastText, CWE-LP |
519db0922376ce1e87fcdedaa626d665d9f3e8ce | 519db0922376ce1e87fcdedaa626d665d9f3e8ce_0 | Q: Does this paper target European or Brazilian Portuguese?
Text: Introduction
Recently, the transformative potential of machine learning (ML) has propelled ML to the forefront of mainstream media. In Brazil, the use of such techniques has become widespread, steadily gaining ground. ML is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0, and can be applied as an aid in several areas of everyday life.
Among the different definitions, ML can be seen as the ability to improve performance at a task through experience BIBREF1. In this sense, BIBREF2 presents it as a method for inferring functions or hypotheses capable of solving a problem algorithmically from data representing instances of that problem. This is an important way to solve different types of problems that permeate computer science and other areas.
One of the main uses of ML is in text processing, where the analysis of the content is the entry point for various learning algorithms. However, this content can introduce different types of bias into training, depending on the context in which it was produced. This work aims to analyze and remove gender stereotypes from word embeddings in Portuguese, analogous to what was done in BIBREF3 for the English language. Hence, we propose to employ a publicly available pre-trained word2vec model to analyze gender bias in the Portuguese language, quantifying the biases present in the model so that the spread of sexism by such models can be reduced. There is also a bias-reduction stage over the results obtained with the model, in which we analyze the effects of applying gender-distinction reduction techniques.
This paper is organized as follows: Section SECREF2 discusses related work. Section SECREF3 presents the Portuguese word2vec embedding model used in this paper and Section SECREF4 proposes our method. Section SECREF5 presents experimental results, whose purpose is to verify the effect of applying a de-bias algorithm to the Portuguese word2vec embedding model, together with a short discussion. Section SECREF6 brings our concluding remarks.
Related Work
There is a wide range of techniques that provide interesting results in the context of ML algorithms geared towards classification without discrimination; these techniques range from data pre-processing BIBREF4 to the use of bias-removal techniques proper BIBREF5. Approaches linked to the data pre-processing step usually consist of methods that improve the quality of the dataset, after which the usual classification tools can be used to train a classifier; the classifier thus starts from a baseline already determined by that pre-processing. On the other side of the spectrum there are unsupervised and semi-supervised learning techniques, which are attractive because they avoid the cost of corpus annotation BIBREF6, BIBREF7, BIBREF8, BIBREF9.
Bias reduction has also been studied as a way to reduce discrimination in classification through different approaches BIBREF10, BIBREF11. In BIBREF12 the authors propose to specify, implement, and evaluate a "fairness-aware" ML interface called themis-ml, whose main idea is to learn from a modified dataset. Themis-ml implements two methods for training fairness-aware models and relies on two methods to make model-agnostic predictions, Reject Option Classification and Discrimination-Aware Ensemble Classification, which are used to post-process predictions in a way that reduces potentially discriminatory outcomes. According to the authors, the method shows potential as a means of reducing bias in the use of ML algorithms.
In BIBREF3, the authors propose a hard de-biasing method to reduce bias in English word embeddings collected from Google News. Using word2vec, they performed a geometric analysis of the gender direction of the bias contained in the data. Using this property together with the generation of gender-neutral analogies, they provide a methodology for modifying an embedding to remove gender stereotypes. Metrics are defined to quantify both direct and indirect gender bias in embeddings, and algorithms are developed to reduce bias in some embeddings. Hence, the authors show that embeddings can be used in applications without amplifying gender bias.
Portuguese Embedding
In BIBREF13, the quality of the representation of words through vectors in several models is discussed. According to the authors, the ability to train high-quality models with simplified architectures is useful for predictive methods that try to predict neighboring words from one or more context words, such as Word2Vec. Word embeddings have thus been used to provide meaningful representations of words in an efficient way.
In BIBREF14, several word embedding models trained on a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, Skip-Gram, the model is given a word and attempts to predict its neighboring words. In the second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for the present proposal.
The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multi-genre corpus representative of the Portuguese language. Hence, it comprehensively covers different expressions of the language, making it possible to analyze gender bias and stereotypes in Portuguese word embeddings. The dataset was tokenized and normalized by the authors to reduce the corpus vocabulary size, under the premise that vocabulary reduction provides more representative vectors.
Proposed Approach
Some linguists point out that, in Portuguese, the feminine gender is a particularization of the masculine: the only overtly marked gender is the feminine, the other forms being considered unmarked for gender (including nouns considered masculine). In BIBREF15 the representation of gender in Portuguese is associated with a set of phenomena, not only from a linguistic perspective but also from a socio-cultural one. Since word endings (e.g., advogada and advogado) usually indicate to whom the expression refers, stereotypes can be conveyed through communication. This implies the presence of biases when dealing with terms such as those referring to professions.
Figure FIGREF1 illustrates the approach proposed in this work. First, using as a parameter a list of professions that pairs the female and male forms of each occupation, we evaluate the accuracy of the similarities generated by the embeddings. Then, given the biased results, we apply the de-bias algorithm of BIBREF3, aiming to reduce the sexist analogies previously generated. Finally, all the results are analyzed by comparing the accuracies.
Using the word2vec model available in a public repository BIBREF14, the proposal involves analyzing the most similar analogies generated before and after the application of the method of BIBREF3. The work focuses on the analysis of gender bias associated with professions in word embeddings; we therefore evaluate the accuracy of the associations generated, aiming at results that are as good as possible without compromising the evaluation metrics.
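For reference, the core neutralisation step of the de-bias method of BIBREF3 removes from each word vector its component along a gender direction. The sketch below uses the single difference vector ela − ele as the gender direction, which is a simplification of the PCA-based direction of the original work.

```python
import numpy as np

def neutralize(vec, gender_direction):
    """Remove the gender-direction component and re-normalise the vector."""
    g = gender_direction / np.linalg.norm(gender_direction)
    debiased = vec - np.dot(vec, g) * g
    return debiased / np.linalg.norm(debiased)

# e.g. gender_direction = kv["ela"] - kv["ele"] for a loaded KeyedVectors kv
```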
Algorithm SECREF4 describes the procedure used to evaluate the presence of gender bias. In this method we evaluate the accuracy of the analogies generated by the model, that is, we verify the cases in which the generated association matches the expected word.
Algorithm 1: Model Evaluation
    model <- open_model(model_path)
    count <- 0
    for each (male, female) in profession_pairs:
        x <- model.most_similar(positive=[`ela', male], negative=[`ele'])
        if x == female:
            count <- count + 1
    accuracy <- count / size(profession_pairs)
    return accuracy
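A runnable version of this evaluation with gensim could look as follows; the embedding file name and the profession list shown are placeholders for the resources described in Section SECREF5.

```python
from gensim.models import KeyedVectors

def evaluate_bias(model_path, profession_pairs):
    """profession_pairs: list of (male_form, female_form) tuples."""
    kv = KeyedVectors.load_word2vec_format(model_path)
    count = 0
    for male, female in profession_pairs:
        # Analogy: `ele' is to <male profession> as `ela' is to ?
        predicted, _ = kv.most_similar(positive=["ela", male],
                                       negative=["ele"], topn=1)[0]
        if predicted == female:
            count += 1
    return count / len(profession_pairs)

print(evaluate_bias("cbow_s300.txt", [("advogado", "advogada")]))
```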
Experiments
The purpose of this section is to perform different analyses concerning bias in word2vec models with Portuguese embeddings. The Continuous Bag-of-Words model used was provided by BIBREF14 (described in Section SECREF3). For these experiments, we use a model containing 934,966 words represented by vectors of dimension 300. To run the experiments, a list containing fifty profession labels in their female and male forms was used as the parameter for the similarity comparison.
Using the Python library gensim, we evaluate the extreme analogies generated when comparing vectors of the form "ele is to a given profession as ela is to ...", where the profession is an item from the professions list and the returned word is compared with the expected association. The most_similar function finds the top-N most similar words, computing the cosine similarity between a simple mean of the projection weight vectors of the given words. Figure FIGREF4 presents the most extreme analogies obtained from the model using these comparisons.
Applying Algorithm SECREF4, we check the accuracy obtained with the similarity function before and after the application of the de-bias method. Table TABREF3 presents the corresponding results. In cases like the analogy from `garçonete' to `stripper' (Figure FIGREF4, line 8), it is possible to observe that the relationship established between terms with a sexual connotation and female terms is closer than that between female terms and professions. In the male case, by contrast, even when the association is wrong, the closest analogy remains within the professional domain.
Using a confidence level of 99%, when comparing the correctness levels of the model with and without bias reduction, the predictions of the biased model are significantly better. Different authors BIBREF16, BIBREF17 show that removing bias from models has a negative impact on model quality. On the other hand, even with a better hit rate, the correctness rate in the prediction of related terms is still low.
Final Remarks
This paper presents an analysis of the presence of gender bias in Portuguese word embeddings. Even though this is work in progress, the proposal showed promising results in the analysis of prediction models.
A possible extension of this work involves deepening the analysis of the results obtained, seeking higher accuracy rates and fairer models to be used in machine learning techniques. These studies can range from tests with different data pre-processing methods to the use of different models, as well as other factors that may influence the generated results. This deepening is necessary since the model's accuracy is not high.
To conclude, we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of the language, and it is important to study ways of mitigating different types of discrimination. The approach can likewise be applied to analyze racist bias in the language, as well as other types of prejudice. | Unanswerable
519db0922376ce1e87fcdedaa626d665d9f3e8ce | 519db0922376ce1e87fcdedaa626d665d9f3e8ce_1 | Q: Does this paper target European or Brazilian Portuguese? | Unanswerable
99a10823623f78dbff9ccecb210f187105a196e9 | 99a10823623f78dbff9ccecb210f187105a196e9_0 | Q: What were the word embeddings trained on? | large Portuguese corpus
09f0dce416a1e40cc6a24a8b42a802747d2c9363 | 09f0dce416a1e40cc6a24a8b42a802747d2c9363_0 | Q: Which word embeddings are analysed?
Text: Introduction
Recently, the transformative potential of machine learning (ML) has propelled ML into the forefront of mainstream media. In Brazil, the use of such technique has been widely diffused gaining more space. Thus, it is used to search for patterns, regularities or even concepts expressed in data sets BIBREF0 , and can be applied as a form of aid in several areas of everyday life.
Among the different definitions, ML can be seen as the ability to improve performance in accomplishing a task through the experience BIBREF1 . Thus, BIBREF2 presents this as a method of inferences of functions or hypotheses capable of solving a problem algorithmically from data representing instances of the problem. This is an important way to solve different types of problems that permeate computer science and other areas.
One of the main uses of ML is in text processing, where the analysis of the content is the entry point for various learning algorithms. However, the use of this content can introduce different types of bias into training, which may vary with the context at hand. This work aims to analyze and remove gender stereotypes from word embeddings in Portuguese, analogous to what was done in BIBREF3 for the English language. Hence, we propose to employ a publicly available pre-trained word2vec model to analyze gender bias in the Portuguese language, quantifying the biases present in the model so that the spreading of sexism by such models can be reduced. There is also a bias-reduction stage over the results obtained from the model, in which we analyze the effects of applying gender-distinction reduction techniques.
This paper is organized as follows: Section SECREF2 discusses related work. Section SECREF3 presents the Portuguese word2vec embedding model used in this paper and Section SECREF4 proposes our method. Section SECREF5 presents experimental results, whose purpose is to verify the effect of applying a de-bias algorithm to the Portuguese word2vec embedding model, together with a short discussion. Section SECREF6 brings our concluding remarks.
Related Work
There is a wide range of techniques that provide interesting results in the context of ML algorithms geared towards the classification of data without discrimination; these techniques range from the pre-processing of data BIBREF4 to the use of explicit bias removal techniques BIBREF5 . Approaches linked to the data pre-processing step usually consist of methods that improve the quality of the dataset, after which the usual classification tools can be used to train a classifier; the classifier therefore starts from a baseline already stipulated by that pre-processing. On the other side of the spectrum, there are unsupervised and semi-supervised learning techniques, which are attractive because they do not imply the cost of corpus annotation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 .
Bias reduction in classification has been studied as a way to reduce discrimination through different approaches BIBREF10 , BIBREF11 . In BIBREF12 the authors propose to specify, implement, and evaluate the “fairness-aware" ML interface called themis-ml. The main idea of this interface is to train models on a modified version of the dataset. Themis-ml implements two methods for training fairness-aware models and relies on two model-type-agnostic procedures, Reject Option Classification and Discrimination-Aware Ensemble Classification, which are used to post-process predictions in a way that reduces potentially discriminatory outcomes. According to the authors, the method shows potential as a means of reducing bias in the use of ML algorithms.
In BIBREF3 , the authors propose a method to hard-debias English word embeddings trained on the Google News corpus. Using word2vec, they performed a geometric analysis of the gender direction of the bias contained in the data. Using this property together with the generation of gender-neutral analogies, a methodology was provided for modifying an embedding to remove gender stereotypes. Some metrics were defined to quantify both direct and indirect gender bias in embeddings, and algorithms were developed to reduce that bias. Hence, the authors show that embeddings can be used in applications without amplifying gender bias.
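The core geometric step of that approach can be illustrated with a short sketch. The snippet below only shows the neutralization of a single vector with respect to a gender direction estimated from one word pair; the full method in BIBREF3 uses several definitional pairs, PCA, and an additional equalization step.

    import numpy as np

    def neutralize(v, gender_direction):
        # Remove the component of v that lies along the (normalized) gender direction.
        g = gender_direction / np.linalg.norm(gender_direction)
        return v - np.dot(v, g) * g

    # Toy example with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    ele, ela, profissao = rng.normal(size=(3, 300))
    gender_direction = ela - ele                      # crude estimate from a single pair
    profissao_debiased = neutralize(profissao, gender_direction)
    print(np.dot(profissao_debiased, gender_direction))   # approximately zero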
Portuguese Embedding
In BIBREF13 , the quality of the representation of words through vectors in several models is discussed. According to the authors, the ability to train high-quality models using simplified architectures is useful in models composed of predictive methods that try to predict neighboring words with one or more context words, such as Word2Vec. Word embeddings have been used to provide meaningful representations for words in an efficient way.
In BIBREF14 , several word embedding models trained on a large Portuguese corpus are evaluated. Within the Word2Vec model, two training strategies were used. In the first, namely Skip-Gram, the model is given a word and attempts to predict its neighboring words. In the second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word. The latter was chosen for application in the present proposal.
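For readers unfamiliar with the two strategies, the sketch below shows how a CBOW model of this kind could be trained with gensim on a small tokenized corpus; the corpus and hyperparameters are illustrative only and do not reproduce the setup of BIBREF14 .

    from gensim.models import Word2Vec

    # Tiny illustrative corpus: a list of tokenized sentences.
    sentences = [["a", "advogada", "venceu", "o", "caso"],
                 ["o", "advogado", "perdeu", "o", "caso"]]

    # sg=0 selects CBOW (sg=1 would select Skip-Gram); 300-dimensional vectors.
    model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=0)
    print(model.wv.most_similar("advogada", topn=3))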
The authors of BIBREF14 claim to have collected a large corpus from several sources to obtain a multi-genre corpus representative of the Portuguese language. Hence, it comprehensively covers different expressions of the language, making it possible to analyze gender bias and stereotype in Portuguese word embeddings. The dataset used was tokenized and normalized by the authors to reduce the corpus vocabulary size, under the premise that vocabulary reduction provides more representative vectors.
Proposed Approach
Some linguists point out that the female gender is, in Portuguese, a particularization of the masculine. In this view, the only explicit gender mark is the feminine, the other forms being considered genderless (including nouns conventionally regarded as masculine). In BIBREF15 the representation of gender in Portuguese is associated with a set of phenomena, not only from a linguistic perspective but also from a socio-cultural perspective. Since word endings (e.g., advogada and advogado) are mostly what indicates to whom an expression refers, stereotypes can be transmitted through communication. This implies the presence of biases when dealing with terms such as those referring to professions.
Figure FIGREF1 illustrates the approach proposed in this work. First, using as a parameter a list of professions with the female and male labels that identify who performs them, we evaluate the accuracy of the similarities generated by the embeddings. Then, given the biased results, we apply the de-bias algorithm BIBREF3 aiming to reduce the sexist analogies previously generated. Finally, all the results are analyzed by comparing the accuracies.
Using the word2vec model available in a public repository BIBREF14 , the proposal involves the analysis of the most similar analogies generated before and after the application of the de-bias method of BIBREF3 . The work focuses on the analysis of gender bias associated with professions in word embeddings, and therefore on the evaluation of the accuracy of the associations generated, aiming at results as good as possible without compromising the evaluation metrics.
Algorithm SECREF4 describes the method used to evaluate the presence of gender bias. In this method we evaluate the accuracy of the analogies generated by the model, that is, we verify in how many cases the generated association matches the expected word.
Algorithm 1: Model Evaluation
    function w2v_evaluate(model_path, profession_pairs):
        model = open_model(model_path)
        count = 0
        for (male, female) in profession_pairs:              # read list of tuples
            x = model.most_similar(positive=[`ela', male], negative=[`ele'])
            if x == female:
                count += 1
        accuracy = count / size(profession_pairs)
        return accuracy
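A runnable version of this evaluation loop could look like the sketch below; the file name is a placeholder, and checking only the single top-ranked neighbour mirrors the pseudocode above rather than any richer matching scheme.

    from gensim.models import KeyedVectors

    def w2v_evaluate(model_path, profession_pairs):
        # Fraction of (male, female) profession pairs whose analogy is resolved correctly.
        model = KeyedVectors.load_word2vec_format(model_path)
        count = 0
        for male, female in profession_pairs:
            top_word, _ = model.most_similar(positive=["ela", male], negative=["ele"], topn=1)[0]
            if top_word == female:
                count += 1
        return count / len(profession_pairs)

    # Hypothetical usage with a placeholder path and two of the fifty pairs.
    pairs = [("advogado", "advogada"), ("professor", "professora")]
    print(w2v_evaluate("cbow_s300.txt", pairs))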
Experiments
The purpose of this section is to perform different analyses concerning bias in word2vec models with Portuguese embeddings. The Continuous Bag-of-Words model used was provided by BIBREF14 (described in Section SECREF3 ). For these experiments, we use a model containing 934,966 words, each represented by a 300-dimensional vector. To run the experiments, a list of fifty professions, each with its female and male label, was used as the parameter for the similarity comparison.
Using the Python library gensim, we evaluate the extreme analogies generated when comparing vectors like INLINEFORM0 , where INLINEFORM1 represents an item from the professions list and INLINEFORM2 the expected association. The most_similar function finds the top-N most similar entities, computing cosine similarity between a simple mean of the projection weight vectors of the given words. Figure FIGREF4 presents the most extreme analogy results obtained from the model using these comparisons.
Applying Algorithm SECREF4 , we check the accuracy obtained with the similarity function before and after the application of the de-bias method. Table TABREF3 presents the corresponding results. In cases like the analogy of `garçonete' to `stripper' (Figure FIGREF4 , line 8), it is possible to observe that the relationship established between terms with a sexual connotation and female words is closer than the one between female words and professions. For the male words, by contrast, even when the expected profession is not matched, the closest analogy remains within the professional domain.
At a 99% confidence level, comparing the correctness of the model with and without bias reduction shows that the predictions of the biased model are significantly better. Different authors BIBREF16 , BIBREF17 show that removing bias from models has a negative impact on model quality. On the other hand, even with the better hit rate, the correctness rate in the prediction of related terms is still low.
Final Remarks
This paper presents an analysis of the presence of gender bias in Portuguese word embeddings. Even though it is a work in progress, the proposal showed promising results in analyzing prediction models.
A possible extension of the work involves deepening the analysis of the results obtained, seeking to achieve higher accuracy rates and fairer models to be used in machine learning techniques. These studies can range from tests with different data pre-processing methods to the use of different models, as well as other factors that may influence the results generated. This deepening is necessary since the model's accuracy is not high.
To conclude, we believe that the presence of gender bias and stereotypes in the Portuguese language is found in different spheres of language, and it is important to study ways of mitigating different types of discrimination. As such, the approach can easily be applied to analyze racist bias in the language, as well as other types of preconception. | Continuous Bag-of-Words (CBOW) |
ac706631f2b3fa39bf173cd62480072601e44f66 | ac706631f2b3fa39bf173cd62480072601e44f66_0 | Q: Did they experiment on this dataset?
Text: Introduction
Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.
That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually; one has to reach for an automation of such a task. However, a study of the court decisions revealed many different ways in which courts cite even their own decisions, not to mention the decisions of other courts. This great diversity in citations led us to use natural language processing methods for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.
In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with the subsequent manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in terms of the evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines possible future development. Section SECREF6 concludes this paper.
Related work ::: Legal Citation Analysis
Legal citation analysis is an emerging phenomenon in the field of legal theory and empirical legal research. It employs tools provided by the field of network analysis.
In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published two pivotal works on the case law citations of the Supreme Court of the United States BIBREF0, BIBREF1. The authors used the citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedent. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2, who applied an approach similar to Fowler's to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.
Additionally, a minor part of the research in legal network analysis has in the past resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced a new legal information retrieval system, Justeus, based on a large database of legal sources and partly on network analysis methods BIBREF8.
Related work ::: Reference Recognition
The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.
The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.
De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.
Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.
The language-specific work by Kríž et al. BIBREF12 focused on detecting and classifying references to other court decisions and legal acts. The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and is unable to recognize court-specific or vendor-specific identifiers in the court decisions.
Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3.
Related work ::: Data Availability
Large-scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often restricted by different obstacles. In some countries, court decisions are not available at all, while in others they are accessible only through legal information systems, often proprietary. This effectively restricts access to court decisions in terms of bulk data. This issue has already been approached by many researchers, either by making selected data available for computational linguistics studies or by making datasets of digitized data available for various purposes. A non-exhaustive list of publicly available corpora includes the British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15, the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.
Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results.
Related work ::: Document Segmentation
A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.
Topic-based segmentation often focuses on identifying the specific sentences that mark the borderlines between different textual segments.
The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.
A major part of the research is focused on semantic similarity methods. Computing the similarity between parts of a text presumes that a decrease in similarity marks a topical border between two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.
Another approach takes word frequencies and presumes a border according to the different keywords extracted. Reynar BIBREF27 authored a graphical, statistics-based method called dotplotting. Similar techniques were used by Ye BIBREF28 and Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features, including pre-trained models, for automatic legal text segmentation. Li BIBREF31 included a neural network in his method to segment Chinese legal texts.
Šavelka and Ashley BIBREF32 similarly introduced a machine-learning-based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates, especially in recognizing the Introduction and Analysis parts of the decisions.
Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3.
Methodology
In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in CzCDC 1.0 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section.
Methodology ::: Dataset and models ::: CzCDC 1.0 dataset
Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and the 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. Authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99,5% of all decisions of the Constitutional Court, and 99,9% of all decisions of the Supreme Administrative Court. As such, it presents the best currently available dataset of the Czech top-tier court decisions.
Methodology ::: Dataset and models ::: Reference recognition model
Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields, i.e. a random field model that is globally conditioned on an observation sequence O, with the states of the model corresponding to event labels E. The authors used first-order conditional random fields, and a separate model was trained for each type of smaller unit independently.
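As an illustration of this kind of sequence labelling, the sketch below trains a first-order CRF to tag tokens as being part of a court identifier or not; it relies on the sklearn-crfsuite package, toy features and a made-up training sentence, and is not the authors' actual feature set or training code.

    import sklearn_crfsuite

    def token_features(tokens, i):
        # Toy feature set for one token; the real model uses richer features.
        return {"lower": tokens[i].lower(),
                "is_digit": tokens[i].isdigit(),
                "has_slash": "/" in tokens[i],
                "prev": tokens[i - 1].lower() if i > 0 else "<BOS>"}

    # One toy training sentence with BIO labels marking a docket number.
    tokens = ["rozsudek", "sp.", "zn.", "21", "Cdo", "2204/2011", "ze", "dne"]
    labels = ["O", "B-ID", "I-ID", "I-ID", "I-ID", "I-ID", "O", "O"]

    X = [[token_features(tokens, i) for i in range(len(tokens))]]
    y = [labels]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X)[0])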
Methodology ::: Dataset and models ::: Text segmentation model
In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for the automatic segmentation of the text was trained using conditional random fields, with a separate model trained for each segment type independently.
Methodology ::: Pipeline
In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.
As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the further text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to the subsequent reference recognition and extraction. A textual segment recognised as court argumentation is then processed further.
As the second step, the parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of lesser value for our task. Also, deploying only the recognition of court identifiers allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.
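Put together, the two stages can be thought of as the sketch below; segment_decision and recognize_identifiers are hypothetical wrappers around the two CRF models, not functions published with either paper. Restricting the second stage to the Argumentation segments is what allows the pipeline to boost the reference recognition performance reported in Table TABREF11.

    def extract_citations(decision_text, segment_decision, recognize_identifiers):
        # Two-stage sketch: keep only Argumentation segments, then tag court identifiers in them.
        citations = []
        for segment in segment_decision(decision_text):
            if segment.label != "Argumentation":
                continue
            citations.extend(recognize_identifiers(segment.text))
        return citations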
At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.
Further processing included:
control and repair of incompletely identified court identifiers (manual);
identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);
standardisation of different types of court identifiers (rule-based, manual);
parsing of identifiers with court decisions available in CzCDC 1.0.
Results
Overall, through the process described in Section SECREF3, we retrieved three datasets of extracted references - one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.
These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. This number therefore covers references to all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. It was thus necessary to classify these into references referring to decisions of the Supreme Court, the Supreme Administrative Court, the Constitutional Court and others. These groups then underwent a standardisation - or more precisely a resolution - of the different court identifiers used by the Czech courts. The numbers of references resulting from this step are shown in Table TABREF16.
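The classification of identifiers by issuing court was partly rule-based; the snippet below is a toy sketch of such rules. The regular expressions cover only a few common docket-number forms (assumed here to look like `I. ÚS 123/05' for the Constitutional Court, `21 Cdo 2204/2011' for the Supreme Court and `8 Afs 75/2009' for the Supreme Administrative Court) and are illustrative assumptions, not the actual rule set used to build the dataset.

    import re

    COURT_PATTERNS = {
        "Constitutional Court": re.compile(r"\b[IVX]+\.\s*ÚS\s+\d+/\d{2,4}\b"),
        "Supreme Court": re.compile(r"\b\d{1,2}\s+(?:Cdo|Odo|Tdo)\s+\d+/\d{4}\b"),
        "Supreme Administrative Court": re.compile(r"\b\d{1,2}\s+(?:Afs|As|Ads|Azs)\s+\d+/\d{4}\b"),
    }

    def classify_identifier(span):
        # Return the issuing court guessed from a recognised identifier span, or "other".
        for court, pattern in COURT_PATTERNS.items():
            if pattern.search(span):
                return court
        return "other"

    print(classify_identifier("sp. zn. 21 Cdo 2204/2011"))   # Supreme Court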
Following this step, we linked the court identifiers with the court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. The numbers of references resulting from this step are shown in Table TABREF17.
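Once the identifier pairs are parsed against CzCDC 1.0, the dataset can be loaded directly into standard network-analysis tooling. The snippet below is a sketch assuming a simple two-column CSV of citing and cited decisions, which is an assumed layout rather than the exact published file format.

    import csv
    import networkx as nx

    graph = nx.DiGraph()
    with open("czech-court-citations.csv", encoding="utf-8") as f:
        for citing, cited in csv.reader(f):
            graph.add_edge(citing, cited)

    # Example analysis: the ten most-cited decisions by in-degree.
    top_cited = sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True)[:10]
    print(top_cited)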
Discussion
This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.
As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipeline are far from perfect. Overall, we were able to achieve reasonable recall and precision rates, which were further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to link all court identifiers to the documents they refer to. Therefore, future work in this area may include further development of the resources we used: the CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods, etc.
That being said, the presented dataset is currently the only available resource of its kind focusing on the Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct these types of studies involving network analysis, and the similar techniques requiring a large amount of citation data.
Conclusion
In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset.
Acknowledgment
J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain. | No |
ac706631f2b3fa39bf173cd62480072601e44f66 | ac706631f2b3fa39bf173cd62480072601e44f66_1 | Q: Did they experiment on this dataset?
Text: Introduction
Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.
That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually; one has to reach for an automation of such a task. However, a study of the court decisions revealed many different ways in which courts cite even their own decisions, not to mention the decisions of other courts. This great diversity in citations led us to use natural language processing methods for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.
In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with the subsequent manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in terms of the evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines possible future development. Section SECREF6 concludes this paper.
Related work ::: Legal Citation Analysis
Legal citation analysis is an emerging phenomenon in the field of legal theory and empirical legal research. It employs tools provided by the field of network analysis.
In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published two pivotal works on the case law citations of the Supreme Court of the United States BIBREF0, BIBREF1. The authors used the citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedent. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2, who applied an approach similar to Fowler's to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.
Additionally, a minor part of the research in legal network analysis has in the past resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced a new legal information retrieval system, Justeus, based on a large database of legal sources and partly on network analysis methods BIBREF8.
Related work ::: Reference Recognition
The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.
The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.
De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.
Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.
The language-specific work by Kríž et al. BIBREF12 focused on detecting and classifying references to other court decisions and legal acts. The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and is unable to recognize court-specific or vendor-specific identifiers in the court decisions.
Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3.
Related work ::: Data Availability
Large-scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often restricted by different obstacles. In some countries, court decisions are not available at all, while in others they are accessible only through legal information systems, often proprietary. This effectively restricts access to court decisions in terms of bulk data. This issue has already been approached by many researchers, either by making selected data available for computational linguistics studies or by making datasets of digitized data available for various purposes. A non-exhaustive list of publicly available corpora includes the British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15, the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.
Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results.
Related work ::: Document Segmentation
A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.
Topic-based segmentation often focuses on identifying the specific sentences that mark the borderlines between different textual segments.
The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.
A major part of the research is focused on semantic similarity methods. Computing the similarity between parts of a text presumes that a decrease in similarity marks a topical border between two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.
Another approach takes word frequencies and presumes a border according to the different keywords extracted. Reynar BIBREF27 authored a graphical, statistics-based method called dotplotting. Similar techniques were used by Ye BIBREF28 and Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features, including pre-trained models, for automatic legal text segmentation. Li BIBREF31 included a neural network in his method to segment Chinese legal texts.
Šavelka and Ashley BIBREF32 similarly introduced a machine-learning-based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates, especially in recognizing the Introduction and Analysis parts of the decisions.
Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3.
Methodology
In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we have processed the court decisions contained in CzCDC 1.0 dataset by the NLP pipeline consisting of the segmentation model introduced in BIBREF33, and parts of the reference recognition model presented in BIBREF13. The process is described in this section.
Methodology ::: Dataset and models ::: CzCDC 1.0 dataset
Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and the 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. Authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99,5% of all decisions of the Constitutional Court, and 99,9% of all decisions of the Supreme Administrative Court. As such, it presents the best currently available dataset of the Czech top-tier court decisions.
Methodology ::: Dataset and models ::: Reference recognition model
Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields, i.e. a random field model that is globally conditioned on an observation sequence O, with the states of the model corresponding to event labels E. The authors used first-order conditional random fields, and a separate model was trained for each type of smaller unit independently.
Methodology ::: Dataset and models ::: Text segmentation model
In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for the automatic segmentation of the text was trained using conditional random fields, with a separate model trained for each segment type independently.
Methodology ::: Pipeline
In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline to arguably achieve the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.
As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the further text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to the subsequent reference recognition and extraction. A textual segment recognised as court argumentation is then processed further.
As the second step, the parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of lesser value for our task. Also, deploying only the recognition of court identifiers allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.
At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.
Further processing included:
control and repair of incompletely identified court identifiers (manual);
identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);
standardisation of different types of court identifiers (rule-based, manual);
parsing of identifiers with court decisions available in CzCDC 1.0.
Results
Overall, through the process described in Section SECREF3, we retrieved three datasets of extracted references - one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.
These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. This number therefore covers references to all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. It was thus necessary to classify these into references referring to decisions of the Supreme Court, the Supreme Administrative Court, the Constitutional Court and others. These groups then underwent a standardisation - or more precisely a resolution - of the different court identifiers used by the Czech courts. The numbers of references resulting from this step are shown in Table TABREF16.
Following this step, we linked the court identifiers with the court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. The numbers of references resulting from this step are shown in Table TABREF17.
Discussion
This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.
As we admitted in the evaluation in Section SECREF9, the models we included in our NLP pipeline are far from perfect. Overall, we were able to achieve reasonable recall and precision rates, which were further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to link all court identifiers to the documents they refer to. Therefore, future work in this area may include further development of the resources we used: the CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods, etc.
That being said, the presented dataset is currently the only available resource of its kind focusing on the Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct these types of studies involving network analysis, and the similar techniques requiring a large amount of citation data.
Conclusion
In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset.
Acknowledgment
J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain. | Yes |
8b71ede8170162883f785040e8628a97fc6b5bcb | 8b71ede8170162883f785040e8628a97fc6b5bcb_0 | Q: How is quality of the citation measured?
Text: Introduction
Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.
That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually; one has to reach for an automation of such a task. However, a study of the court decisions revealed many different ways in which courts cite even their own decisions, not to mention the decisions of other courts. This great diversity in citations led us to use natural language processing methods for the recognition and the extraction of the citation data from court decisions of the Czech apex courts.
In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with the subsequent manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in terms of the evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines possible future development. Section SECREF6 concludes this paper.
Related work ::: Legal Citation Analysis
Legal citation analysis is an emerging phenomenon in the field of legal theory and empirical legal research. It employs tools provided by the field of network analysis.
In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published two pivotal works on the case law citations of the Supreme Court of the United States BIBREF0, BIBREF1. The authors used the citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedent. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2, who applied an approach similar to Fowler's to the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.
Additionally, a minor part of the research in legal network analysis has in the past resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced a new legal information retrieval system, Justeus, based on a large database of legal sources and partly on network analysis methods BIBREF8.
Related work ::: Reference Recognition
The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that are referring to other documents. As such, it is a classical topic within the AI & Law literature.
The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.
De Maat et al. BIBREF10 focused on an automated detection of references to legal acts in Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.
Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.
The language-specific work by Kríž et al. BIBREF12 focused on detecting and classifying references to other court decisions and legal acts. The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and is unable to recognize court-specific or vendor-specific identifiers in the court decisions.
Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and allows recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further within this paper in Section SECREF3.
Related work ::: Data Availability
Large-scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is often restricted by different obstacles. In some countries, court decisions are not available at all, while in others they are accessible only through legal information systems, often proprietary. This effectively restricts access to court decisions in terms of bulk data. This issue has already been approached by many researchers, either by making selected data available for computational linguistics studies or by making datasets of digitized data available for various purposes. A non-exhaustive list of publicly available corpora includes the British Law Report Corpus BIBREF14, The Corpus of US Supreme Court Opinions BIBREF15, the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, Corpus de Sentencias Penales BIBREF17, Juristisches Referenzkorpus BIBREF18 and many others.
Language specific work in this area is presented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains majority of court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references to yield representative results. The CzCDC 1.0 was used as a dataset for extraction of the references as is described further within this paper in Section SECREF3. Unfortunately, despite containing 237 723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results.
Related work ::: Document Segmentation
A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing a document segmentation to ease the processing of unstructured data by giving them some structure.
Topic-based segmentation often focuses on identifying the specific sentences that mark the borderlines between different textual segments.
The automatic segmentation is not an individual goal – it always serves as a prerequisite for further tasks requiring structured data. Segmentation is required for the text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.
A major part of the research is focused on semantic similarity methods. Computing the similarity between parts of a text presumes that a decrease in similarity marks a topical border between two text segments. This approach was introduced by Hearst BIBREF22 and was used by Choi BIBREF25 and Heinonen BIBREF26 as well.
Another approach takes word frequencies and presumes a border according to the different keywords extracted. Reynar BIBREF27 authored a graphical, statistics-based method called dotplotting. Similar techniques were used by Ye BIBREF28 and Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features, including pre-trained models, for automatic legal text segmentation. Li BIBREF31 included a neural network in his method to segment Chinese legal texts.
Šavelka and Ashley BIBREF32 similarly introduced a machine-learning-based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates, especially in recognizing the Introduction and Analysis parts of the decisions.
Language specific work includes the model presented by Harašta et al. BIBREF33. This work focuses on segmentation of the Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further within this paper in Section SECREF3.
Methodology
In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we processed the court decisions contained in the CzCDC 1.0 dataset with an NLP pipeline consisting of the segmentation model introduced in BIBREF33 and parts of the reference recognition model presented in BIBREF13. The process is described in this section.
Methodology ::: Dataset and models ::: CzCDC 1.0 dataset
Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. The authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99.5% of all decisions of the Constitutional Court, and 99.9% of all decisions of the Supreme Administrative Court. As such, it is the best currently available dataset of Czech top-tier court decisions.
Methodology ::: Dataset and models ::: Reference recognition model
Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields – a random field model that is globally conditioned on an observation sequence O, with the states of the model corresponding to the event labels E. The authors used first-order conditional random fields and trained a separate model for each type of smaller unit.
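To make the setup concrete, the sketch below shows how a first-order CRF tagger of this kind can be trained with the open-source sklearn_crfsuite package. The feature set, the BIO-style labels and the single toy sentence are illustrative assumptions, not the authors' published configuration.

```python
import sklearn_crfsuite

def token_features(tokens, i):
    # Simple per-token features; the real model is likely to use a richer set.
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_upper": tok.isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

# Each training sentence is a token list with BIO labels marking court identifiers,
# e.g. "sp. zn. 22 Cdo 2045/2012" -> B-ID I-ID I-ID I-ID I-ID (toy example).
train_sents = [
    (["viz", "rozsudek", "sp.", "zn.", "22", "Cdo", "2045/2012"],
     ["O", "O", "B-ID", "I-ID", "I-ID", "I-ID", "I-ID"]),
]

X_train = [[token_features(toks, i) for i in range(len(toks))] for toks, _ in train_sents]
y_train = [labels for _, labels in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)  # first-order CRF
crf.fit(X_train, y_train)
print(crf.predict(X_train[:1]))
```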
Methodology ::: Dataset and models ::: Text segmentation model
In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields, with a separate model trained for each segment type.
Methodology ::: Pipeline
In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that the training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline that arguably achieves the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.
As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the subsequent text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to the subsequent reference recognition and extraction. A textual segment recognised as the court argumentation is then processed further.
As the second step, the parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of lesser value for our task. Deploying only the recognition of court identifiers also allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.
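The two-stage arrangement can be expressed as a short driver function. The callables segment_decision and recognize_court_identifiers below stand in for the trained segmentation and reference recognition models; they are hypothetical names used only for this sketch, not the published APIs.

```python
def extract_references(decision_text, segment_decision, recognize_court_identifiers):
    """Run reference extraction only on the Argumentation segment of a decision.

    segment_decision(text) -> list of (segment_label, segment_text)
    recognize_court_identifiers(text) -> list of identifier strings
    Both callables are placeholders for the trained CRF models.
    """
    references = []
    for label, segment_text in segment_decision(decision_text):
        if label != "Argumentation":
            continue  # Header, History, Footer etc. are skipped on purpose
        references.extend(recognize_court_identifiers(segment_text))
    return references
```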
At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.
Further processing included:
control and repair of incompletely identified court identifiers (manual);
identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);
standardisation of different types of court identifiers (rule-based, manual; an illustrative rule sketch follows this list);
parsing of identifiers with court decisions available in CzCDC 1.0.
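A minimal sketch of the rule-based classification and standardisation steps is shown below. The regular expressions are simplified, illustrative approximations of Czech docket-number formats; the actual rule set handled many more registry marks and formatting variants.

```python
import re

# Simplified, illustrative patterns for Czech docket numbers of the three apex courts.
PATTERNS = {
    "Constitutional Court": re.compile(r"\b(?:Pl|I{1,3}|IV)\.\s*ÚS\s*\d+/\d{2,4}\b"),
    "Supreme Administrative Court": re.compile(r"\b\d+\s*(?:As|Afs|Ads|Azs|Ans)\s*\d+/\d{4}\b"),
    "Supreme Court": re.compile(r"\b\d+\s*(?:Cdo|Odo|Tdo|Cz|ICdo|NSČR)\s*\d+/\d{4}\b"),
}

def classify_and_standardise(identifier):
    """Return (court, normalised identifier), or (None, identifier) if unmatched."""
    for court, pattern in PATTERNS.items():
        match = pattern.search(identifier)
        if match:
            # Collapse whitespace so variants such as "22  Cdo 2045/2012" merge.
            return court, re.sub(r"\s+", " ", match.group(0)).strip()
    return None, identifier

print(classify_and_standardise("sp. zn. II. ÚS 2216/14"))
```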
Results
Overall, through the process described in Section SECREF3, we retrieved three datasets of extracted references – one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.
These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number covers references to all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. It was therefore necessary to classify these into references referring to decisions of the Supreme Court, the Supreme Administrative Court, the Constitutional Court, and others. These groups then underwent a standardisation – or, more precisely, a resolution – of the different court identifiers used by the Czech courts. The numbers of references resulting from this step are shown in Table TABREF16.
Following this step, we linked the court identifiers with the court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. The numbers of references resulting from this step are shown in Table TABREF17.
Discussion
This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.
As we admitted in the evaluation in Section SECREF9, the models included in our NLP pipeline are far from perfect. Overall, we were able to achieve a reasonable recall and precision rate, which was further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to parse all court identifiers to the documents they refer to. Therefore, future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods, etc.
That being said, the presented dataset is currently the only resource of its kind focusing on Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct studies involving network analysis and similar techniques requiring a large amount of citation data.
Conclusion
In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset.
Acknowledgment
J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain. | it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification. |
fa2a384a23f5d0fe114ef6a39dced139bddac20e | fa2a384a23f5d0fe114ef6a39dced139bddac20e_0 | Q: How big is the dataset?
Text: Introduction
Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for common law courts and for their counterparts in countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light on the behavior of specific judges through document analysis or allowing complex studies into the changing nature of courts in transforming countries.
That being said, it is still difficult to create sufficiently large citation datasets to allow complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually, so one has to reach for automation of the task. However, a study of the court decisions revealed many different ways in which courts cite even their own decisions, not to mention the decisions of other courts. This great diversity in citations led us to use natural language processing for the recognition and extraction of citation data from the court decisions of the Czech apex courts.
In this paper, we describe the tool ultimately used for the extraction of references from the court decisions, together with the subsequent manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the areas of legal citation analysis (Section SECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we combined these models into the NLP pipeline. Section SECREF4 presents the results in terms of the evaluation of the pipeline's performance, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses the limitations of our work and outlines possible future development. Section SECREF6 concludes this paper.
Related work ::: Legal Citation Analysis
Legal citation analysis is an emerging phenomenon in the fields of legal theory and empirical legal research. It employs tools provided by the field of network analysis.
In spite of the long-term use of citations in the legal domain (e.g. the use of Shepard's Citations since 1873), interest in network citation analysis increased significantly when Fowler et al. published two pivotal works on the case law citations of the Supreme Court of the United States BIBREF0, BIBREF1. The authors used citation data and network analysis to test hypotheses about the function of the stare decisis doctrine and other issues of legal precedent. In the continental legal system, this work was followed by Winkels and de Ruyter BIBREF2, who adopted an approach similar to Fowler's for the court decisions of the Dutch Supreme Court. Similar methods were later used by Derlén and Lindholm BIBREF3, BIBREF4 and Panagis and Šadl BIBREF5 for the citation data of the Court of Justice of the European Union, and by Olsen and Küçüksu for the citation data of the European Court of Human Rights BIBREF6.
Additionally, a minor part of the research in legal network analysis has resulted in practical tools designed to help lawyers conduct case law research. Kuppevelt and van Dijck built prototypes employing these techniques in the Netherlands BIBREF7. Görög and Weisz introduced Justeus, a new legal information retrieval system based on a large database of legal sources and partly on network analysis methods BIBREF8.
Related work ::: Reference Recognition
The area of reference recognition already contains a large amount of work. It is concerned with recognizing text spans in documents that refer to other documents. As such, it is a classical topic within the AI & Law literature.
The extraction of references from the Italian legislation based on regular expressions was reported by Palmirani et al. BIBREF9. The main goal was to bring references under a set of common standards to ensure the interoperability between different legal information systems.
De Maat et al. BIBREF10 focused on the automated detection of references to legal acts in the Dutch language. Their approach consisted of a grammar covering increasingly complex citation patterns.
Opijnen BIBREF11 aimed for reference recognition and reference standardization using regular expressions accounting for multiple variants of the same reference and multiple vendor-specific identifiers.
The language-specific work by Kríž et al. BIBREF12 focused on detecting and classifying references to other court decisions and legal acts. The authors used statistical recognition (HMM and Perceptron algorithms) and reported an F1-measure over 90% averaged over all entities. It is the state of the art in the automatic recognition of references in Czech court decisions. Unfortunately, it allows only for the detection of docket numbers and is unable to recognize court-specific or vendor-specific identifiers in the court decisions.
Other language-specific work includes our previous reference recognition model presented in BIBREF13. The prediction model is based on conditional random fields and allows the recognition of different constituents which then establish both explicit and implicit case-law and doctrinal references. Parts of this model were used in the pipeline described further in Section SECREF3.
Related work ::: Data Availability
Large-scale quantitative and qualitative studies are often hindered by the unavailability of court data. Access to court decisions is obstructed in different ways. In some countries, court decisions are not available at all, while in others they are accessible only through legal information systems, which are often proprietary. This effectively restricts access to court decisions as bulk data. Many researchers have already approached this issue, either by making selected data available for computational linguistics studies or by publishing datasets of digitized decisions for various purposes. A non-exhaustive list of publicly available corpora includes the British Law Report Corpus BIBREF14, the Corpus of US Supreme Court Opinions BIBREF15, the HOLJ corpus BIBREF16, the Corpus of Historical English Law Reports, the Corpus de Sentencias Penales BIBREF17, the Juristisches Referenzkorpus BIBREF18 and many others.
Language-specific work in this area is represented by the publicly available Czech Court Decisions Corpus (CzCDC 1.0) BIBREF19. This corpus contains the majority of the decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court, hence allowing a large-scale extraction of references that yields representative results. The CzCDC 1.0 was used as the dataset for the extraction of references described further in Section SECREF3. Unfortunately, despite containing 237,723 court decisions issued between 1st January 1993 and 30th September 2018, it is not complete. This fact is reflected in the analysis of the results.
Related work ::: Document Segmentation
A large volume of legal information is available in unstructured form, which makes processing these data a challenging task – both for human lawyers and for computers. Schweighofer BIBREF20 called for generic tools allowing document segmentation to ease the processing of unstructured data by giving them some structure.
Topic-based segmentation often focuses on identifying the specific sentences that form the borderlines between different textual segments.
Automatic segmentation is not a goal in itself – it serves as a prerequisite for further tasks requiring structured data. Segmentation is required for text summarization BIBREF21, BIBREF22, keyword extraction BIBREF23, textual information retrieval BIBREF24, and other applications requiring input in the form of structured data.
A major part of the research focuses on semantic similarity methods. Computing the similarity between parts of a text presumes that a drop in similarity marks a topical border between two text segments. This approach was introduced by Hearst BIBREF22 and was also used by Choi BIBREF25 and Heinonen BIBREF26.
Another approach takes word frequencies and presumes a border according to the different keywords extracted. Reynar BIBREF27 authored a graphical method based on statistics called dotplotting. Similar techniques were used by Ye BIBREF28 and Saravanan BIBREF29. Bommarito et al. BIBREF30 introduced a Python library combining different features, including pre-trained models, for automatic legal text segmentation. Li BIBREF31 incorporated a neural network into his method to segment Chinese legal texts.
Šavelka and Ashley BIBREF32 similarly introduced a machine learning based approach for the segmentation of US court decision texts into seven different parts. The authors reached high success rates, especially in recognizing the Introduction and Analysis parts of the decisions.
Language-specific work includes the model presented by Harašta et al. BIBREF33, which focuses on the segmentation of Czech court decisions into pre-defined topical segments. Parts of this segmentation model were used in the pipeline described further in Section SECREF3.
Methodology
In this paper, we present and describe the citation dataset of the Czech top-tier courts. To obtain this dataset, we processed the court decisions contained in the CzCDC 1.0 dataset with an NLP pipeline consisting of the segmentation model introduced in BIBREF33 and parts of the reference recognition model presented in BIBREF13. The process is described in this section.
Methodology ::: Dataset and models ::: CzCDC 1.0 dataset
Novotná and Harašta BIBREF19 prepared a dataset of the court decisions of the Czech Supreme Court, the Supreme Administrative Court and the Constitutional Court. The dataset contains 237,723 decisions published between 1st January 1993 and 30th September 2018. These decisions are organised into three sub-corpora. The sub-corpus of the Supreme Court contains 111,977 decisions, the sub-corpus of the Supreme Administrative Court contains 52,660 decisions and the sub-corpus of the Constitutional Court contains 73,086 decisions. The authors in BIBREF19 assessed that the CzCDC currently contains approximately 91% of all decisions of the Supreme Court, 99.5% of all decisions of the Constitutional Court, and 99.9% of all decisions of the Supreme Administrative Court. As such, it is the best currently available dataset of Czech top-tier court decisions.
Methodology ::: Dataset and models ::: Reference recognition model
Harašta and Šavelka BIBREF13 introduced a reference recognition model trained specifically for the Czech top-tier courts. Moreover, the authors made their training data available in BIBREF34. Given the lack of a single citation standard, references in this work consist of smaller units, because these were identified as more uniform and therefore better suited for automatic detection. The model was trained using conditional random fields – a random field model that is globally conditioned on an observation sequence O, with the states of the model corresponding to the event labels E. The authors used first-order conditional random fields and trained a separate model for each type of smaller unit.
Methodology ::: Dataset and models ::: Text segmentation model
In Harašta et al. BIBREF33, the authors introduced a model for the automatic segmentation of Czech court decisions into pre-defined multi-paragraph parts. These segments include the Header (introduction of the given case), History (procedural history prior to the apex court proceeding), Submission/Rejoinder (petition of the plaintiff and response of the defendant), Argumentation (argumentation of the court hearing the case), Footer (legally required information, such as information about further proceedings), Dissent and Footnotes. The model for automatic segmentation of the text was trained using conditional random fields, with a separate model trained for each segment type.
Methodology ::: Pipeline
In order to obtain the citation data of the Czech apex courts, it was necessary to recognize and extract the references from the CzCDC 1.0. Given that the training data for both the reference recognition model BIBREF13, BIBREF34 and the text segmentation model BIBREF33 are publicly available, we were able to conduct extensive error analysis and put together a pipeline that arguably achieves the maximum efficiency in the task. The pipeline described in this part is graphically represented in Figure FIGREF10.
As the first step, every document in the CzCDC 1.0 was segmented using the text segmentation model. This allowed us to treat different parts of the processed court documents differently in the subsequent text processing. Specifically, it allowed us to subject only a specific part of a court decision, in this case the court argumentation, to the subsequent reference recognition and extraction. A textual segment recognised as the court argumentation is then processed further.
As the second step, the parts recognised by the text segmentation model as court argumentation were processed using the reference recognition model. After carefully studying the evaluation of the model's performance in BIBREF13, we decided to use only part of the said model. Specifically, we employed the recognition of court identifiers, as we consider the rest of the smaller units introduced by Harašta and Šavelka of lesser value for our task. Deploying only the recognition of court identifiers also allowed us to avoid the problematic parsing of smaller textual units into references. The text spans recognised as identifiers of court decisions are then processed further.
At this point, it is necessary to evaluate the performance of the above mentioned part of the pipeline before proceeding further. The evaluation of the performance is summarised in Table TABREF11. It shows that organising the two models into the pipeline boosted the performance of the reference recognition model, leading to a higher F1 measure in the initial recognition of the text spans and their classification.
Further processing included:
control and repair of incompletely identified court identifiers (manual);
identification and sorting of identifiers as belonging to Supreme Court, Supreme Administrative Court or Constitutional Court (rule-based, manual);
standardisation of different types of court identifiers (rule-based, manual);
parsing of identifiers with court decisions available in CzCDC 1.0.
Results
Overall, through the process described in Section SECREF3, we retrieved three datasets of extracted references – one dataset per apex court. These datasets consist of individual pairs containing the identification of the decision from which the reference was retrieved and the identification of the referred document. As we only extracted references to other judicial decisions, we obtained 471,319 references from Supreme Court decisions, 167,237 references from Supreme Administrative Court decisions and 264,463 references from Constitutional Court decisions. These are the numbers of text spans identified as references prior to the further processing described in Section SECREF3.
These references include all identifiers extracted from the court decisions contained in the CzCDC 1.0. Therefore, this number covers references to all other court decisions, including lower courts, the Court of Justice of the European Union, the European Court of Human Rights, decisions of other public authorities etc. It was therefore necessary to classify these into references referring to decisions of the Supreme Court, the Supreme Administrative Court, the Constitutional Court, and others. These groups then underwent a standardisation – or, more precisely, a resolution – of the different court identifiers used by the Czech courts. The numbers of references resulting from this step are shown in Table TABREF16.
Following this step, we linked the court identifiers with the court decisions contained in the CzCDC 1.0. Given that the CzCDC 1.0 does not contain all the decisions of the respective courts, we were not able to parse all the references. The numbers of references resulting from this step are shown in Table TABREF17.
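In code, this linking step reduces to a dictionary lookup from standardised docket numbers to CzCDC document identifiers. The sketch below assumes such a mapping has already been built from the corpus metadata; the function and variable names are illustrative.

```python
def link_references(extracted, docket_to_czcdc):
    """Turn (citing decision, cited docket) pairs into citation edges.

    extracted: iterable of (citing_czcdc_id, standardised_docket_number)
    docket_to_czcdc: dict mapping standardised docket numbers to CzCDC ids
    Returns the parsed edges plus the references that could not be linked
    because the cited decision is missing from CzCDC 1.0.
    """
    edges, unlinked = [], []
    for citing_id, docket in extracted:
        cited_id = docket_to_czcdc.get(docket)
        if cited_id is None:
            unlinked.append((citing_id, docket))
        else:
            edges.append((citing_id, cited_id))
    return edges, unlinked
```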
Discussion
This paper introduced the first dataset of citation data of the three Czech apex courts. Understandably, there are some pitfalls and limitations to our approach.
As we admitted in the evaluation in Section SECREF9, the models included in our NLP pipeline are far from perfect. Overall, we were able to achieve a reasonable recall and precision rate, which was further enhanced by several rounds of manual processing of the resulting data. However, it is safe to say that we did not manage to extract all the references. Similarly, because the CzCDC 1.0 dataset we used does not contain all the decisions of the respective courts, we were not able to parse all court identifiers to the documents they refer to. Therefore, future work in this area may include further development of the resources we used. The CzCDC 1.0 would benefit from the inclusion of more documents of the Supreme Court, the reference recognition model would benefit from more refined training methods, etc.
That being said, the presented dataset is currently the only resource of its kind focusing on Czech court decisions that is freely available to research teams. This significantly reduces the costs necessary to conduct studies involving network analysis and similar techniques requiring a large amount of citation data.
Conclusion
In this paper, we have described the process of the creation of the first dataset of citation data of the three Czech apex courts. The dataset is publicly available for download at https://github.com/czech-case-law-relevance/czech-court-citations-dataset.
Acknowledgment
J.H., and T.N. gratefully acknowledge the support from the Czech Science Foundation under grant no. GA-17-20645S. T.N. also acknowledges the institutional support of the Masaryk University. This paper was presented at CEILI Workshop on Legal Data Analysis held in conjunction with Jurix 2019 in Madrid, Spain. | 903019 references |
53712f0ce764633dbb034e550bb6604f15c0cacd | 53712f0ce764633dbb034e550bb6604f15c0cacd_0 | Q: Do they evaluate only on English datasets?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities, including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population and is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by different veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among returning veterans. However, recent surveys on evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments administered by professionals, which is itself a significant symptom of war-returning veterans with PTSD. Several existing studies showed that twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time, before the condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language-model-based sentiment extraction from posted texts, which failed to obtain the acceptance and trust of clinicians due to their lack of explainability.
In the context of the above research problem, we aim to answer the following research questions:
Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys by analyzing the twitter posts of war veterans?
If so, what sort of analysis and approach is needed to develop such an XAI model to detect the prevalence and intensity of PTSD among war veterans, using only social media (twitter) analysis, where users freely share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both the detection and the intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below:
The novelty of LAXARY lies in the proposed clinical-survey-based PTSD Linguistic Dictionary, whose words/aspects represent the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.
LAXARY includes a modified LIWC model that calculates the possible score of each survey question using the PTSD Linguistic Dictionary in order to fill out the PTSD assessment surveys. This provides a practical way not only to determine fine-grained discrimination of the physiological and psychological health markers of PTSD without incurring expensive and laborious in-situ laboratory testing or surveys, but also to obtain the trust of clinicians, who expect to see traditional survey results of the PTSD assessment.
Finally, we evaluate the accuracy of the LAXARY model's performance and the reliability-validity of the generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted on twitter, LAXARY can fill up surveys with very high accuracy towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using the twitter posts of war veterans; (ii) design real surveys from the popular symptom-based mental disease assessment surveys; (iii) define a single category and create a PTSD Linguistic Dictionary for each survey question, with multiple aspects/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores, and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to their contributions to achieving separation among categories associated with different $\alpha $-scores and $s$-scores, and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected-features-based classification for filling up surveys based on the classified categories, i.e. a PTSD assessment that is trustworthy within the psychiatry community.
Related Works
Twitter-activity-based mental health assessment has been of utmost importance to Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used a character n-gram language model (CLM) based s-score measure, setting up some user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) a unigram language model (ULM); (ii) a character n-gram language model (CLM); and (iii) one built from LIWC category $\alpha $-scores, and found that the last one gives higher accuracy than the others. BIBREF11 used two types of $s$-scores, taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns (via LIWC), is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used some generic dictionary of human sentiment (positive/negative) word sets as category words to estimate mental health, but very few of them addressed the problem of the explainability of their solution in order to obtain the trust of clinicians. Islam et al. proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification BIBREF14, which fails to gain the trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop the LAXARY model: we first investigate clinically validated survey tools, which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions, and use these as dictionary words (in terms of the first person singular number pronoun aspect) for the next-level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to the sentiment category scores of naive LIWC) that is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of these tools, the most popular and well accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. Other scales are used in the risky behavior analysis of an individual's daily activities, such as The Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess PTSD among war veterans and considers the rest of them as irrelevant to PTSD. The details of the Dryhootch-chosen survey scales are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by Dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if an individual's weekly DOSPERT score goes over 28, he is in a critical situation in terms of the risk-taking symptoms of PTSD. Dryhootch defines the intensity of PTSD in four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS):
High risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools, i.e. DOSPERT, BSSS and VIAS, then he/she is in a high risk situation which needs immediate mental support to avoid catastrophic effects on the individual's health or the lives of the people around them.
Moderate risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in a moderate risk situation which needs close observation and peer mentoring to avoid risk progression.
Low risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If an individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
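Expressed as code, the Dryhootch intensity rule is simply a count of how many of the three weekly scores exceed their tool-specific thresholds. Only the DOSPERT threshold of 28 is stated above, so the remaining cut-offs are left as user-supplied parameters in this sketch.

```python
def ptsd_intensity(dospert, bsss, vias, thresholds=(28, None, None)):
    """Map three weekly survey scores to the four Dryhootch risk levels.

    thresholds: (DOSPERT, BSSS, VIAS) cut-offs; only the DOSPERT value of 28
    is given in the text, the remaining two must be supplied by the user.
    """
    t_dospert, t_bsss, t_vias = thresholds
    exceeded = sum(
        score > cutoff
        for score, cutoff in ((dospert, t_dospert), (bsss, t_bsss), (vias, t_vias))
        if cutoff is not None
    )
    return ["No PTSD", "Low risk PTSD", "Moderate risk PTSD", "High risk PTSD"][exceeded]

print(ptsd_intensity(31, 10, 12, thresholds=(28, 14, 16)))  # result depends on the chosen cut-offs
```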
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use automated regular-expression-based searching to find potential veterans with PTSD on twitter, and then refine the list manually. First, we select different keywords to search for twitter users of different categories. For example, to search for self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet post. To find veterans, we mostly visit different twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits" etc. We define the inclusion criteria as follows: a twitter user is part of this study if he/she is described as a veteran in the introduction and has at least 25 tweets in the last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets, which are manually reviewed to determine if they indicate a genuine statement of a diagnosis of PTSD. Next, we select the username that authored each of these tweets and retrieve the last week's tweets via the Twitter API. We then filter out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in the last week. After filtering (as above), in total 2,423 users remain, whose tweets are used as negative examples, yielding one entire week's twitter posts of 2,728 users, of whom 305 users are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (305 users are self-claimed PTSD sufferers and the rest were randomly chosen from the previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by any of the three surveys, and the remaining 118 users were diagnosed with no PTSD. Among the clinically diagnosed PTSD sufferers, 17 were not self-identified before. However, 7 of the self-identified PTSD sufferers were assessed as having no PTSD by the PTSD assessment tools. The response rates of PTSD and no-PTSD users are 27% and 12%. In summary, we collected one week of tweets from 2,728 veterans, of whom 305 users claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we have a dataset of 210 veteran twitter users, among whom 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD is estimated as non-existent, light, moderate or high based on how many surveys support the existence of PTSD among the participants, according to the Dryhootch manual BIBREF18, BIBREF19.
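The exact pattern used in the study is not published, so the following is only an indicative sketch of the kind of regular expression that can spot self-identified diagnosis statements.

```python
import re

# Illustrative pattern for self-identified PTSD diagnosis statements (an assumption,
# not the study's actual regular expression).
SELF_DIAGNOSIS = re.compile(
    r"\bi\s+(?:was|am|have\s+been|got|recently\s+got)\s+diagnosed\s+with\s+ptsd\b",
    re.IGNORECASE,
)

def is_self_identified(tweet_text):
    return SELF_DIAGNOSIS.search(tweet_text) is not None

print(is_self_identified("I was diagnosed with PTSD after my second deployment."))  # True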
Twitter-based PTSD Detection ::: Pre-processing
We download all twitter posts of the 210 users, who are war veterans and include the clinically diagnosed PTSD sufferers, which resulted in a total of 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work, worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or a job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non-work-related Tweets) on a daily basis, and create a text file for each week for each group.
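The work/non-work split described above amounts to a prefix match on "work" and "job"; a minimal version is sketched below as an illustration, not the authors' exact implementation.

```python
import re

WORK_PATTERN = re.compile(r"\b(?:work\w*|job\w*)\b", re.IGNORECASE)

def split_by_work(tweets):
    """Split tweets into work-related and non-work-related groups."""
    work, non_work = [], []
    for tweet in tweets:
        (work if WORK_PATTERN.search(tweet) else non_work).append(tweet)
    return work, non_work

work, other = split_by_work([
    "Back to work. Projects are firing back up.",
    "Baseball season is finally over.",
])
print(len(work), len(other))  # 1 1
```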
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use the PTSD classification algorithm proposed by Coppersmith et al. to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) LIWC-based categorical models on top of the prior ones. The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for the analysis of mental health in Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets of no-PTSD users. Each test tweet $t$ is scored by comparing the probabilities from each pair of LMs, yielding the $s$-score $s(t) = \frac{P(t \mid m^{+})}{P(t \mid m^{-})}$, where $m^{+}$ and $m^{-}$ denote the corresponding positive and negative language models.
A threshold of 1 for the $s$-score divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contains a URL, as these often pertain to events external to the user.
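A toy version of the character n-gram s-score is sketched below using a unigram model over character trigrams with add-one smoothing; the actual study used properly trained language models, so treat this purely as an illustration of the ratio defined above.

```python
from collections import Counter

def char_ngrams(text, n=3):
    text = f"^{text.lower()}$"
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class CharLM:
    """Unigram model over character n-grams with add-one smoothing (toy CLM)."""
    def __init__(self, texts, n=3):
        self.n = n
        self.counts = Counter(g for t in texts for g in char_ngrams(t, n))
        self.total = sum(self.counts.values())

    def prob(self, text):
        p = 1.0
        vocab = len(self.counts) + 1
        for g in char_ngrams(text, self.n):
            p *= (self.counts[g] + 1) / (self.total + vocab)
        return p

def s_score(tweet, lm_pos, lm_neg):
    return lm_pos.prob(tweet) / lm_neg.prob(tweet)

lm_pos = CharLM(["nightmares again, no sleep"])
lm_neg = CharLM(["great day at the ballpark"])
print(s_score("no sleep again", lm_pos, lm_neg) > 1)  # True for this toy data
```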
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
LAXARY: Explainable PTSD Detection Model
The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary. Prior works show that linguistic-dictionary-based text analysis has been very effective in twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary in order to explain automatic PTSD assessment and establish trustworthiness with clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use the LIWC-developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. There are several steps to use this dictionary, which are stated as follows:
Pronoun selection: At first we have to define the pronouns of the target sentiment. Here we use first person singular number pronouns (i.e., I, me, mine etc.), which means we only count those sentences or segments that are related to the first person singular, i.e., to the person himself/herself.
Category selection: We have to define the categories of each word set so that we can analyze the categories' as well as the dimensions' text analysis scores. We chose three categories based on the three different surveys: 1) the DOSPERT scale; 2) the BSSS scale; and 3) the VIAS scale.
Dimension selection: We have to define the word sets (also called dimensions) for each category. We chose one dimension for each of the questions under each category to reflect the real survey system evaluation. Our chosen categories are stated in Fig FIGREF20.
Score calculation $\alpha $-score: $\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word, whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts.
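The shape of such a dictionary and the first-person filtering described in the steps above can be illustrated with a small Python structure. The category names follow the three surveys, while the dimension names and word stems are invented placeholders rather than the actual dictionary entries.

```python
import re

# Toy fragment of a WordStat-style dictionary: category -> dimension -> word stems.
PTSD_DICTIONARY = {
    "DOSPERT": {"gambling": ["bet", "casino", "poker"], "health_safety": ["drunk", "speeding"]},
    "BSSS":    {"thrill": ["adrenaline", "reckless"]},
    "VIAS":    {"self_control": ["control", "calm"]},
}

FIRST_PERSON = re.compile(r"\b(i|me|my|mine|myself)\b", re.IGNORECASE)

def dimension_counts(text):
    """Count dictionary hits, restricted to first-person-singular sentences."""
    counts = {(c, d): 0 for c, dims in PTSD_DICTIONARY.items() for d in dims}
    for sentence in re.split(r"[.!?]+", text):
        if not FIRST_PERSON.search(sentence):
            continue  # only segments about the author himself/herself are scored
        words = sentence.lower().split()
        for category, dims in PTSD_DICTIONARY.items():
            for dim, stems in dims.items():
                counts[(category, dim)] += sum(w.startswith(s) for s in stems for w in words)
    return counts

print(dimension_counts("I was speeding home drunk. He bet on poker."))
```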
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties, such as reliability and validity, as per the American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$ (number of text files) $\times $ $J$ (number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises the percentage values of each word/stem calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file, where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' tweets, which generated a 23,562-response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirms the high reliability of the PTSD-survey-based categories created by our PTSD Dictionary. After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664, p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.
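For completeness, the standard Cronbach's alpha computation on such an N x J response matrix can be written in a few lines of numpy. This is the textbook formula applied to a made-up binary matrix, not the authors' exact implementation.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an N x J matrix (rows = text files, cols = dictionary items)."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Binary "used / not used" matrix for 4 users and 3 dictionary words (toy data).
matrix = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 1]]
print(round(cronbach_alpha(matrix), 3))
```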
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use a method similar to LIWC's to extract $\alpha $-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have 16 $\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we call the scaling score ($s$-score). The $s$-score is calculated from the $\alpha $-scores. The purpose of using the $s$-score is to assign exact scores to each dimension and category so that we can apply the same method used in the real weekly survey system. The idea is that we divide each category according to its corresponding scale factor (i.e., for the DOSPERT, BSSS and VIAS scales) into 8, 3 and 5 scaling factors respectively, as used in the real survey system. Then we set the $s$-score from the scaling factors based on the $\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-scores of the dimensions to calculate the cumulative $s$-score of particular categories, which is displayed in Fig FIGREF22. Finally, we have 32 features in total, among them 16 $\alpha $-scores and 16 $s$-scores, one of each for every category (i.e. every question). We add the $\alpha $- and $s$-scores together and scale them according to their corresponding survey score scales using min-max standardization. The final output is a 16-valued matrix which represents the scores for each question from the three different Dryhootch surveys. We use the output to fill up each survey and estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
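A compact, assumption-laden sketch of this scaling step is given below: α-scores are first binned into the survey's own scale (8, 3 or 5 points depending on the tool) and the combined scores are then min-max normalised. The binning rule itself is a simplification of the algorithm in Fig FIGREF23, not a reproduction of it.

```python
SCALE_FACTORS = {"DOSPERT": 8, "BSSS": 3, "VIAS": 5}

def s_score(alpha_score, survey, alpha_max=1.0):
    """Bin an alpha-score into the survey's own 1..K response scale (simplified rule)."""
    k = SCALE_FACTORS[survey]
    bucket = int(alpha_score / alpha_max * k)
    return min(max(bucket, 1), k)

def min_max(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

alpha_scores = {"DOSPERT": 0.62, "BSSS": 0.35, "VIAS": 0.80}
combined = [a + s_score(a, survey) for survey, a in alpha_scores.items()]
print(min_max(combined))
```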
Experimental Evaluation
To validate the performance of the LAXARY framework, we first divide the entire 210 users' twitter posts into a training and a test dataset. We then develop the PTSD Linguistic Dictionary from the twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
Experimental Evaluation ::: Results
To provide initial results, we take 50% of the users' last week's data (the week they responded as having PTSD) to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill up surveys on the remaining 50% of the dataset. This training-test segmentation followed the 50% distribution of PTSD and no PTSD in the original dataset. Our final survey-based classification results showed an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given that we have four intensity levels, no PTSD, low risk PTSD, moderate risk PTSD and high risk PTSD, with scores of 0, 1, 2 and 3 respectively. Table TABREF29 shows the details of our classification experiment, which demonstrates the very good accuracy of our classification. To show the improvement achieved by our method, we also implemented the method proposed by Coppersmith et al. and achieved an 86% overall accuracy in detecting PTSD users BIBREF11, following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the method proposed by Coppersmith et al. Here we can see the improvement of our proposed method as well as the importance of the $s$-score estimation. We also illustrate the importance of the $\alpha $-score and $s$-score in Fig FIGREF30, which shows that if we change the number of training samples (%), LAXARY outperforms the model proposed by Coppersmith et al. under any condition. In terms of intensity, the method of Coppersmith et al. provides no estimate at all, whereas LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by providing the survey details filled out by the LAXARY model. Table TABREF29 shows the detailed accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how the classification accuracy changes over the training sample sizes for each survey, showing that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week the diagnosis of PTSD was made), there are no significant patterns of PTSD detection.
Challenges and Future Work
LAXARY is a highly ambitious model that aims to fill out clinically validated survey tools using only twitter posts. Unlike previous twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model which delivers better classification accuracy and intensity estimation of PTSD and can more easily gain the trust of clinicians. The central challenge of LAXARY is to search for twitter users through the twitter search engine and label them manually for analysis. While developing the PTSD Linguistic Dictionary, although we followed exactly the same development idea as the LIWC WordStat dictionary and tested reliability and validity, our dictionary has not yet been validated by domain experts, as PTSD detection is a more sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching for veterans on twitter using our selection and inclusion criteria, it was extremely difficult to manually find evidence of the self-claimed PTSD sufferers. Although we have shown extremely promising initial findings about the representation of a blackbox model using clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in the future with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In the future, we aim to collect more data and to run not only nationwide but also international data collection to turn our innovation into a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only twitter data, we aim to develop Linguistic Dictionaries for other mental health issues too. Moreover, we will apply our proposed method to other types of mental illness, such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As we know, the accuracy of a particular social media analysis depends mostly on the dataset. We aim to collect more data, engaging more researchers to establish a set of mental-illness-specific Linguistic Databases and evaluation techniques to solidify the generalizability of our proposed method.
Conclusion
To provide better care for trauma patients, it is important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition spirals out of control and causes catastrophic harm to society, to the people around the sufferers, or to the sufferers themselves. Although psychiatrists have developed several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, diagnosis usually happens at a severe stage of the illness, by which point some irreversible damage to the sufferer's mental health may already have occurred. On the other hand, existing twitter-based methods are not trusted by clinicians because they lack explainability. In this paper, we proposed LAXARY, a novel method for filling out PTSD assessment surveys using weekly twitter posts. Because clinical surveys are a trusted and understandable instrument, we believe this method can gain the trust of clinicians for the early detection of PTSD. Moreover, the LAXARY model, which is the first of its kind, can be used to build a Linguistic Dictionary for any mental disorder, providing a generalized and trustworthy mental health assessment framework. | Unanswerable
0bffc3d82d02910d4816c16b390125e5df55fd01 | 0bffc3d82d02910d4816c16b390125e5df55fd01_0 | Q: Do the authors mention any possible confounds in this study?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high-risk activities, including interpersonal violence, attempted suicide, suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health care and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just as a psychiatric symptom cluster, but in terms of the specific high-risk behaviors associated with it, as these can be addressed directly through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD affects between 15-20% of the veteran population and is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by veterans' peer support organizations use a set of surveys for weekly local assessment of PTSD intensity among returning veterans. However, recent evidence-based care surveys have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments administered by professionals, which is itself a significant symptom among war-returning veterans with PTSD. Several existing studies have shown that the twitter posts of war veterans can be a significant indicator of their mental health and can be used to predict PTSD in time, before the condition gets out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied either on blackbox machine learning methods or on language-model-based sentiment extraction from posted texts, and failed to gain the acceptance and trust of clinicians due to their lack of explainability.
In the context of the above research problem, we aim to answer the following research questions:
Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys by analyzing the twitter posts of war veterans?
If so, what kind of analysis and approach are needed to develop such an explainable AI (XAI) model to detect the prevalence and intensity of PTSD among war veterans using only social media (twitter) analysis, where users freely share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below:
The novelty of LAXARY lies in the creation of a clinical-survey-based PTSD Linguistic Dictionary whose words/aspects capture the instantaneous perturbation of twitter-based sentiment as a specific pattern and help calculate a likely score for each survey question.
LAXARY includes a modified LIWC model that uses the PTSD Linguistic Dictionary to calculate a likely score for each survey question and thereby fill out the PTSD assessment surveys. This provides a practical way not only to obtain fine-grained discrimination of the physiological and psychological health markers of PTSD without expensive and laborious in-situ laboratory testing or surveys, but also to gain the trust of clinicians, who expect to see traditional survey results for a PTSD assessment.
Finally, we evaluate the accuracy of the LAXARY model and the reliability and validity of the generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly twitter messages, LAXARY can fill out the surveys accurately enough to identify PTSD ($\approx 96\%$ accuracy) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using the twitter posts of war veterans; (ii) design real surveys based on the popular symptom-based mental disease assessment surveys; (iii) define a single category per survey and create a PTSD Linguistic Dictionary with one entry per survey question and multiple aspects/words per question; (iv) calculate $\alpha$-scores for each category and dimension based on linguistic inquiry and word count over the aspect/word-based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on its $\alpha$-scores, and an $s$-score for each category based on the $s$-scores of its dimensions; (vi) rank features according to their contribution to separating categories with different $\alpha$-scores and $s$-scores, and select the feature sets that minimize the overlap among categories for the target classifier (SGD); and finally (vii) estimate the quality of the selected-feature classification for filling out the surveys from the classified categories, i.e., a PTSD assessment that is trustworthy within the psychiatry community.
Related Works
Twitter-activity-based mental health assessment has been of great importance to Natural Language Processing (NLP) researchers and social media analysts for years. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used an n-gram language model (CLM) based $s$-score measure, setting up user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) a unigram language model (ULM); (ii) a character n-gram language model (CLM); and (iii) one based on LIWC category $\alpha$-scores, and found that the last gives higher accuracy than the others. BIBREF11 used two types of $s$-scores, taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar disorder BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion words and first person pronouns, and a decrease in third person pronouns, is observed via LIWC, as well as many manifestations of findings from the literature in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used generic dictionaries of human sentiment (positive/negative) word sets as category words to estimate mental health, but very few addressed the explainability of their solutions, which is needed to obtain the trust of clinicians. Islam et al. BIBREF14 proposed an explainable topic modeling framework that ranks different mental health features using Local Interpretable Model-Agnostic Explanations and visualizes them to understand the features involved in mental health status classification, but it fails to win the trust of clinicians because it is not interpretable in clinical terms. In this paper, we develop the LAXARY model: we start from clinically validated survey tools, which are trusted methods of PTSD assessment among clinicians, build our category sets from the survey questions, and use these as dictionary words, restricted to the first person singular pronoun aspect, for the next-level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (analogous to the sentiment category scores of standard LIWC) that is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
Many clinically validated PTSD assessment tools are used both to detect the prevalence of PTSD and to measure its intensity among sufferers. Among them, the most popular and well accepted is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and the expected benefits of the activities judged in Part I. Further scales used to analyze risky behavior in individuals' daily activities include the Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chose 5, 6 and 5 questions respectively from the above survey systems to assess PTSD among war veterans and considers the remaining questions irrelevant to PTSD. The details of the Dryhootch-chosen survey scales are given in Table TABREF13, and Table TABREF14 shows a sample DOSPERT scale demographic chosen by Dryhootch. The thresholds in Table TABREF13 define the risky behavior limits; for example, if an individual's weekly DOSPERT score goes over 28, he or she shows critical risk-taking symptoms of PTSD. Dryhootch defines four PTSD intensity categories based on the weekly results of all three clinical survey tools (DOSPERT, BSSS and VIAS), as listed below and sketched in the code after the list:
High risk PTSD: If a veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools (DOSPERT, BSSS and VIAS), he/she is in a high-risk situation and needs immediate mental health support to avoid catastrophic effects on his/her own health or on the lives of surrounding people.
Moderate risk PTSD: If a veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, he/she is in a moderate-risk situation and needs close observation and peer mentoring to keep the risk from progressing.
Low risk PTSD: If a veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, he/she has mild symptoms of PTSD.
No PTSD: If a veteran's weekly PTSD assessment scores stay below the threshold for all three PTSD assessment tools, he/she has no PTSD.
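The categorization rule above can be written down directly, as in the minimal sketch below. The DOSPERT threshold of 28 is taken from the text; the BSSS and VIAS thresholds are placeholders standing in for the values in Table TABREF13, which are not reproduced here.

```python
# Dryhootch intensity rule: count how many of the three weekly survey scores
# exceed their thresholds. BSSS/VIAS thresholds below are illustrative placeholders.
THRESHOLDS = {"DOSPERT": 28, "BSSS": 14, "VIAS": 16}

LEVELS = {0: "No PTSD", 1: "Low Risk PTSD", 2: "Moderate Risk PTSD", 3: "High Risk PTSD"}

def ptsd_intensity(weekly_scores):
    """Map one veteran's weekly survey scores to a Dryhootch risk level."""
    exceeded = sum(weekly_scores[name] > limit for name, limit in THRESHOLDS.items())
    return LEVELS[exceeded]

print(ptsd_intensity({"DOSPERT": 31, "BSSS": 12, "VIAS": 19}))  # "Moderate Risk PTSD"
```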
Twitter-based PTSD Detection
To develop an explainable model, we first need a twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use automated regular-expression-based search to find potential veterans with PTSD on twitter, and then refine the list manually. First, we select different keywords to search for twitter users of different categories. For example, to find self-reported, diagnosed PTSD sufferers, we select PTSD-related keywords such as post trauma, post traumatic disorder, and PTSD, and use a regular expression to search for statements in which the user self-identifies as having been diagnosed with PTSD; Table TABREF27 shows an example of such a self-identified tweet. To find veterans, we mainly visit the twitter accounts of veterans' organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", and "Veterans Benefits @VAVetBenefits". We define the following inclusion criteria: a twitter user is part of this study if he/she describes himself/herself as a veteran in the profile introduction and has posted at least 25 tweets in the last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to have been diagnosed with PTSD in their twitter posts. We find 685 matching tweets, which are manually reviewed to determine whether they indicate a genuine statement of a PTSD diagnosis. Next, we take the username that authored each of these tweets and retrieve that user's last week of tweets via the Twitter API. We then filter out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language identification system). This filtering leaves us with 305 users as positive examples. We repeat this process for a group of randomly selected users: we randomly select 3,000 twitter users who are veterans according to their profile introduction and have at least 25 tweets in the last week. After the same filtering, 2,423 users remain, whose tweets are used as negative examples, yielding a full week of twitter posts from 2,728 users, of whom 305 are self-reported PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (the 305 self-reported PTSD sufferers and the rest randomly chosen from the other 2,423 users) and received 210 completed responses. Among these responses, 92 users were diagnosed with PTSD by at least one of the three surveys and the remaining 118 users were diagnosed with no PTSD. Among the clinically diagnosed PTSD sufferers, 17 had not self-identified before; conversely, 7 of the self-identified PTSD sufferers were assessed as having no PTSD by the assessment tools. The response rates of the PTSD and no-PTSD users were 27% and 12%, respectively. In summary, we collected one week of tweets from 2,728 veterans, of whom 305 claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we obtained a dataset of 210 veteran twitter users, of whom 92 were assessed with PTSD and 118 with no PTSD using the clinically validated surveys. The severity of PTSD is estimated as non-existent, light, moderate or high based on how many surveys support the existence of PTSD for the participant, following the Dryhootch manual BIBREF18, BIBREF19.
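As an illustration of the self-identification screening described above, the sketch below shows one possible regular expression; the actual pattern used in the study is not given in the text, so both the pattern and the helper name are assumptions.

```python
import re

# Illustrative pattern for statements like "I was diagnosed with PTSD ...".
SELF_ID = re.compile(
    r"\b(i|i've|i have|i was|i am)\b.{0,40}\bdiagnosed with\b.{0,20}\b(ptsd|post[- ]?traumatic)",
    re.IGNORECASE,
)

def looks_self_identified(tweet_text):
    return SELF_ID.search(tweet_text) is not None

print(looks_self_identified("I was diagnosed with PTSD after my last deployment"))  # True
print(looks_self_identified("PTSD awareness month starts today"))                   # False
```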
Twitter-based PTSD Detection ::: Pre-processing
We download all twitter posts of the 210 users, war veterans who completed the clinical PTSD assessment, which results in a total of 12,385 tweets. Fig FIGREF16 shows the monthly average number of tweets for each of the 210 veteran twitter users. We categorize these tweets into two groups: tweets related to work and tweets not related to work. That is, only the tweets that use a form of the word "work*" (e.g. work, worked, working, worker, etc.) or "job*" (e.g. job, jobs, jobless, etc.) are identified as work-related tweets, with the remainder categorized as non-work-related tweets. This categorization method increases the likelihood that most tweets in the work group are indeed talking about work or a job; for instance, "Back to work. Projects are firing back up and moving ahead now that baseball is done." This categorization results in 456 work-related tweets, about 5.4% of all tweets written in English (from 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of tweets (overall tweets, work-related tweets, and non-work-related tweets) on a daily basis, and create a text file for each week for each group.
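The work/non-work split described above reduces to a pattern match on word forms starting with "work" or "job". The sketch below is a minimal illustration; the function name and example tweets are ours.

```python
import re

# A tweet counts as work-related if it contains a word starting with "work" or "job".
WORK_PATTERN = re.compile(r"\b(work\w*|job\w*)\b", re.IGNORECASE)

def split_by_work(tweets):
    work, non_work = [], []
    for tweet in tweets:
        (work if WORK_PATTERN.search(tweet) else non_work).append(tweet)
    return work, non_work

work, other = split_by_work([
    "Back to work. Projects are firing back up.",
    "Great game last night.",
    "Jobless claims are up again",
])
print(len(work), len(other))  # 2 1
```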
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use the PTSD classification algorithm proposed by Coppersmith et al. BIBREF11 to develop our blackbox baseline model. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) LIWC-based categorical models on top of the prior ones. The LMs have been shown to be effective for Twitter classification tasks BIBREF9, and LIWC has previously been used for the analysis of mental health on Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) on the tweets of PTSD users, and another pair ($clm^{-}$ and $ulm^{-}$) on the tweets of No PTSD users. Each test tweet $t$ is then scored by comparing the probabilities assigned by the two language models; we call this score the $s$-score.
A threshold of 1 on the $s$-score divides tweets into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category, and these proportions are used as a feature vector in a log-linear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contains a URL, as these often pertain to events external to the user.
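The display equation defining the $s$-score does not survive in this text. The sketch below is a reconstruction under the assumption, suggested by the threshold of 1, that the $s$-score is the ratio of the probabilities assigned by the positive and negative language models; the function names and example log-probabilities are illustrative.

```python
import math

# Hedged reconstruction of the s-score: assumed to be the ratio of the two
# language-model probabilities (a threshold of 1 then separates the classes).
# Working with log-probabilities avoids underflow on long tweets.

def s_score(logprob_pos, logprob_neg):
    """logprob_pos = log P(tweet | PTSD LM), logprob_neg = log P(tweet | No-PTSD LM)."""
    return math.exp(logprob_pos - logprob_neg)

def classify_tweet(logprob_pos, logprob_neg, threshold=1.0):
    return "PTSD" if s_score(logprob_pos, logprob_neg) > threshold else "No PTSD"

# Illustrative log-probabilities only.
print(classify_tweet(-42.3, -45.1))  # ratio exp(2.8) > 1 -> "PTSD"
```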
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine whether there are differences in the language usage of PTSD users. We apply the LIWC battery and examine the distribution of words in their language. Each tweet is tokenized by splitting on whitespace. For each user, for a subset of the LIWC categories, we measure the proportion of tweets that contain at least one word from that category. Specifically, we examine the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns are used significantly less often by PTSD users, while third person pronouns and words about anxiety are used significantly more often.
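A minimal sketch of the per-user proportion features described above follows; the category word lists are illustrative placeholders, not LIWC's proprietary lists.

```python
# For each category, the fraction of a user's tweets containing at least one
# word from that category (whitespace tokenization, as in the text).

CATEGORY_WORDS = {
    "anxiety":      {"worried", "nervous", "anxious", "afraid"},
    "third_person": {"he", "she", "they", "them", "him", "her"},
}

def category_proportions(tweets, categories=CATEGORY_WORDS):
    features = {}
    for name, words in categories.items():
        hits = sum(any(tok in words for tok in tweet.lower().split()) for tweet in tweets)
        features[name] = hits / len(tweets) if tweets else 0.0
    return features

print(category_proportions(["I feel anxious today", "They moved back home"]))
# {'anxiety': 0.5, 'third_person': 0.5}
```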
LAXARY: Explainable PTSD Detection Model
The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary. Prior work shows that linguistic-dictionary-based text analysis is highly effective for twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary in order to explain automatic PTSD assessment and establish trustworthiness with clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use the WordStat dictionary format developed for LIWC for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words, while words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 shows a sample WordStat dictionary. Using this dictionary involves the following steps (a minimal code sketch follows the list):
Pronoun selection: First, we define the pronouns of the target sentiment. Here we use first person singular pronouns (i.e., I, me, mine, etc.), which means we only count sentences or segments that refer to the first person singular, i.e., to the person himself/herself.
Category selection: We define the categories of each word set so that we can analyze text analysis scores for both categories and dimensions. We choose three categories based on the three different surveys: 1) the DOSPERT scale; 2) the BSSS scale; and 3) the VIAS scale.
Dimension selection: We define the word sets (also called dimensions) for each category. We choose one dimension for each question under each category to reflect the evaluation used in the real survey system. Our chosen categories are stated in Fig FIGREF20.
Score calculation ($\alpha$-score): $\alpha$-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed from the occurrence and non-occurrence of each dictionary word, whereas the raw or uncorrected alphas are based on the percentage of use of each category word within the texts.
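The sketch below illustrates the dictionary structure implied by these steps: one category per survey, one dimension (word set) per survey question, and counting restricted to first-person-singular segments. All word lists and dimension names are invented placeholders, not entries from the actual PTSD Linguistic Dictionary.

```python
# Placeholder dictionary: category -> dimension (one per survey question) -> word set.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

PTSD_DICTIONARY = {
    "DOSPERT": {
        "gambling_q1": {"bet", "gamble", "casino"},
        "health_q2": {"drink", "drunk", "binge"},
    },
    "BSSS": {
        "support_q1": {"alone", "nobody", "isolated"},
    },
}

def is_first_person(segment):
    return any(tok in FIRST_PERSON for tok in segment.lower().split())

def dimension_hits(segments, dictionary=PTSD_DICTIONARY):
    """Count, per dimension, how many first-person segments use a dictionary word."""
    personal = [s for s in segments if is_first_person(s)]
    counts = {}
    for category, dimensions in dictionary.items():
        for dim, words in dimensions.items():
            counts[(category, dim)] = sum(
                any(tok in words for tok in s.lower().split()) for s in personal
            )
    return counts

print(dimension_hits(["I had a drink alone last night", "Weather looks rough"]))
```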
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After creating the PTSD Linguistic Dictionary, we empirically evaluate its psychometric properties, such as reliability and validity, following the American standards for educational and psychological testing guidelines BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated from each text file's response to each word item, which forms an $N$ (number of text files) $\times$ $J$ (number of words or stems in the dictionary) data matrix. There are two ways to quantify such responses: using percentage data (the uncorrected method) or using present-or-not data (the binary method) BIBREF23. For the uncorrected method, the data matrix contains the percentage value of each word/stem calculated from each text file. For the binary method, the data matrix records whether or not a word was used in a text file, where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix of the word percentages. We assess reliability on our selected 210 users' tweets, which generate a 23,562-entry response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 with the uncorrected method and .96 with the binary method, which confirms the high reliability of the survey-based categories created by our PTSD Dictionary. After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity requires evidence that two measures designed to assess different constructs are not too strongly related. In theory, the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results reveal that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664, p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.
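For reference, the binary-method reliability check described above can be sketched with the standard Cronbach's alpha formula over a 0/1 response matrix; the tiny example matrix below is illustrative only.

```python
from statistics import pvariance

# Cronbach's alpha (binary method): rows are text files (one per user-week),
# columns are dictionary words, cells are 1 if the word occurs in that file.
#   alpha = K/(K-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(matrix):
    """matrix: list of rows, each row a list of 0/1 item responses."""
    k = len(matrix[0])  # number of word items
    item_vars = [pvariance([row[j] for row in matrix]) for j in range(k)]
    total_var = pvariance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Tiny illustrative matrix (4 files x 3 word items).
print(round(cronbach_alpha([[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 1]]), 3))  # ~0.63
```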
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use the same method as LIWC to extract $\alpha$-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23, giving 16 $\alpha$-scores in total. We also propose a new type of feature, which we call the scaling score ($s$-score), calculated from the $\alpha$-scores. The purpose of the $s$-score is to assign an exact score to each dimension and category so that we can apply the same scoring method used in the real weekly survey system. The idea is to map each category onto its corresponding scale, dividing the DOSPERT, BSSS and VIAS scales into 8, 3 and 5 scaling factors respectively, as in the real survey system, and then to set the $s$-score from the scaling factor corresponding to the $\alpha$-score of the question's dimension. The algorithm is stated in Fig FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension and then add up the $s$-scores of the dimensions to obtain the cumulative $s$-score of each category, which is displayed in Fig FIGREF22. In total we obtain 32 features: 16 $\alpha$-scores and 16 $s$-scores, one of each per category (i.e., per question). We add the $\alpha$- and $s$-scores together and scale them to the corresponding survey score scales using min-max standardization. The final output is a 16-valued vector representing the score of each question from the three Dryhootch surveys. We use this output to fill out each survey and to estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
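Because the exact $s$-score algorithm is given in Fig FIGREF23, which is not reproduced in this text, the sketch below shows only one plausible reading of the scaling step: an $\alpha$-score is binned onto the survey's own response scale, and min-max standardization rescales the combined features onto the survey score range.

```python
# One plausible reading of the alpha-score -> s-score step (the exact algorithm
# is in Fig FIGREF23): bin an alpha-score in [0, 1] onto the survey's response
# scale (8 levels for DOSPERT, 3 for BSSS, 5 for VIAS), then min-max rescale.

SCALE_FACTORS = {"DOSPERT": 8, "BSSS": 3, "VIAS": 5}

def alpha_to_s_score(alpha, survey):
    """Map an alpha-score in [0, 1] to the survey's 1..N response scale."""
    n = SCALE_FACTORS[survey]
    return min(n, int(alpha * n) + 1)

def min_max_rescale(values, lo, hi):
    """Min-max standardize a feature vector onto a survey score range [lo, hi]."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0
    return [lo + (v - v_min) * (hi - lo) / span for v in values]

print(alpha_to_s_score(0.62, "DOSPERT"))       # 5 on the 8-point scale
print(min_max_rescale([0.2, 0.5, 0.9], 1, 7))  # rescaled onto a 1..7 range
```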
Experimental Evaluation
To validate the performance of the LAXARY framework, we divide the twitter posts of all 210 users into a training and a test dataset. We then develop the PTSD Linguistic Dictionary from the twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
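A minimal sketch of a 50/50 split that preserves class balance, assuming the 92/118 class counts reported in the data collection section; the seed and helper name are ours.

```python
import random

# Split PTSD and No-PTSD users separately so both halves keep the class balance.
def stratified_split(users, labels, seed=0):
    rng = random.Random(seed)
    train, test = [], []
    for cls in set(labels):
        members = [u for u, y in zip(users, labels) if y == cls]
        rng.shuffle(members)
        half = len(members) // 2
        train += members[:half]
        test += members[half:]
    return train, test

train, test = stratified_split(list(range(210)), [1] * 92 + [0] * 118)
print(len(train), len(test))  # 105 105
```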
Experimental Evaluation ::: Results
To provide initial results, we use 50% of the users' data from their last week (the week in which they reported having PTSD) to develop the PTSD Linguistic Dictionary, and apply the LAXARY framework to fill out the surveys for the remaining 50%. This training-test split preserves the 50% distribution of PTSD and No PTSD users in the original dataset. Our final survey-based classification achieves an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given the four intensity levels No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with scores of 0, 1, 2 and 3 respectively. Table TABREF29 shows the details of our classification experiment, which confirm its high accuracy. For comparison, we also implemented the method proposed by Coppersmith et al. BIBREF11 under the same training-test split and obtained an overall accuracy of 86% in detecting PTSD users. Fig FIGREF28 compares LAXARY with the method of Coppersmith et al. and shows both the advantage of our approach and the importance of $s$-score estimation. We further illustrate the importance of the $\alpha$-score and $s$-score in Fig FIGREF30, which shows that as the percentage of training samples varies, LAXARY outperforms the model of Coppersmith et al. under every condition. In terms of intensity, the method of Coppersmith et al. provides no estimate at all, whereas LAXARY provides accurate intensity estimates for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by presenting the surveys filled out by the LAXARY model. Table TABREF29 reports the accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how classification accuracy changes with training sample size for each survey, indicating that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we use earlier weeks (instead of only the week in which PTSD was diagnosed), no significant patterns of PTSD detection emerge.
Challenges and Future Work
LAXARY is an ambitious model that aims to fill out clinically validated survey tools using only twitter posts. Unlike previous twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model that offers both better classification accuracy and an intensity estimate for PTSD, and can therefore more easily earn the trust of clinicians. The central challenge of LAXARY is to find suitable twitter users through the twitter search engine and to label them manually for analysis. Although we developed the PTSD Linguistic Dictionary following the same methodology as the LIWC WordStat dictionary and tested its reliability and validity, it has not yet been validated by domain experts, and PTSD detection is a more sensitive issue than stress or depression detection. Moreover, given the difficulty of finding veterans on twitter under our selection and inclusion criteria, it was extremely hard to manually verify the evidence behind self-claimed PTSD sufferers. Although our initial findings on translating a blackbox model into clinically trusted tools are promising, data from only 210 users is not enough to build a fully trustworthy model, and further clinical validation with practicing clinicians is needed to firmly establish the PTSD assessments produced by LAXARY. In future work, we aim to collect more data and to run not only nationwide but also international data collection campaigns to turn this work into a deployable tool. Since we achieved promising results in detecting PTSD and its intensity using only twitter data, we also aim to develop Linguistic Dictionaries for other mental health issues and to apply our method to other conditions such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As the accuracy of any social media analysis depends largely on the dataset, we plan to collect more data and engage more researchers to establish a set of illness-specific Linguistic Dictionaries and evaluation techniques that solidify the generalizability of our proposed method.
Conclusion
To provide better care for trauma patients, it is important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition spirals out of control and causes catastrophic harm to society, to the people around the sufferers, or to the sufferers themselves. Although psychiatrists have developed several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, diagnosis usually happens at a severe stage of the illness, by which point some irreversible damage to the sufferer's mental health may already have occurred. On the other hand, existing twitter-based methods are not trusted by clinicians because they lack explainability. In this paper, we proposed LAXARY, a novel method for filling out PTSD assessment surveys using weekly twitter posts. Because clinical surveys are a trusted and understandable instrument, we believe this method can gain the trust of clinicians for the early detection of PTSD. Moreover, the LAXARY model, which is the first of its kind, can be used to build a Linguistic Dictionary for any mental disorder, providing a generalized and trustworthy mental health assessment framework. | No
bdd8368debcb1bdad14c454aaf96695ac5186b09 | bdd8368debcb1bdad14c454aaf96695ac5186b09_0 | Q: How is the intensity of the PTSD established?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed though behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population which is typically chronic and treatment resistant BIBREF0. The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent advanced evidence-based care for PTSD sufferers surveys have showed that veterans, suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD. Several existing researches showed that, twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability.
In the context of the above research problem, we aim to answer the following research questions
Given clinicians have trust on clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter posts analysis of war-veterans?
If possible, what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among war-veterans only using the social media (twitter) analysis where users are free to share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below,
The novelty of LAXARY lies on the proposed clinical surveys-based PTSD Linguistic dictionary creation with words/aspects which represents the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.
LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment.
Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) Develop PTSD Detection System using twitter posts of war-veterans(ii) design real surveys from the popular symptoms based mental disease assessment surveys; (iii) define single category and create PTSD Linguistic Dictionary for each survey question and multiple aspect/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to the contributions of achieving separation among categories associated with different $\alpha $-scores and $s$-scores; and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of selected features-based classification for filling up surveys based on classified categories i.e. PTSD assessment which is trustworthy among the psychiatry community.
Related Works
Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used n-gram language model (CLM) based s-score measure setting up some user centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and 3) one from the LIWC categories $\alpha $-scores and found that last one gives more accuracy than other ones. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns, (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used some random dictionary related to the human sentiment (positive/negative) word sets as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians. Islam et. al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )
High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.
Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.
Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19.
Twitter-based PTSD Detection ::: Pre-processing
We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group.
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use Coppersmith proposed PTSD classification algorithm to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92,-118) to train three classifiers: (i) unigram language model (ULM) examining individual whole words, (ii) character n-gram language model (CLM), and (iii) LIWC based categorical models above all of the prior ones. The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for analysis of mental health in Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets from No PTSD users. Each test tweet $t$ is scored by comparing probabilities from each LM called $s-score$
A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user.
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
LAXARY: Explainable PTSD Detection Model
The heart of LAXARY framework is the construction of PTSD Linguistic Dictionary. Prior works show that linguistic dictionary based text analysis has been much effective in twitter based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind that develops its own linguistic dictionary to explain automatic PTSD assessment to confirm trustworthiness to clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use LIWC developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file will be referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. There are several steps to use this dictionary which are stated as follows:
Pronoun selection: At first we have to define the pronouns of the target sentiment. Here we used first person singular number pronouns (i.e., I, me, mine etc.) that means we only count those sentences or segments which are only related to first person singular number i.e., related to the person himself.
Category selection: We have to define the categories of each word set thus we can analyze the categories as well as dimensions' text analysis scores. We chose three categories based on the three different surveys: 1) DOSPERT scale; 2) BSSS scale; and 3) VIAS scale.
Dimension selection: We have to define the word sets (also called dimension) for each category. We chose one dimension for each of the questions under each category to reflect real survey system evaluation. Our chosen categories are state in Fig FIGREF20.
Score calculation $\alpha $-score: $\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts.
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties such as reliability and validity as per American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$(number of text files) $\times $ $J$(number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises percentage values of each word/stem are calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on its inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' Tweets which further generated a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirm the high reliability of our PTSD Dictionary created PTSD survey based categories. After assessing the reliability of the PTSD Linguistic dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminate validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664,p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity for our newly created PTSD Linguistic dictionary.
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use the exact similar method of LIWC to extract $\alpha $-scores for each dimension and categories except we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have total 16 $\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we called scaling-score ($s$-score). $s$-score is calculated from $\alpha $-scores. The purpose of using $s$-score is to put exact scores of each of the dimension and category thus we can apply the same method used in real weekly survey system. The idea is, we divide each category into their corresponding scale factor (i.e., for DOSPERT scale, BSSS scale and VIAS scales) and divide them into 8, 3 and 5 scaling factors which are used in real survey system. Then we set the $s$-score from the scaling factors from the $\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-score of the dimensions to calculate cumulative $s$-score of particular categories which is displayed in Fig FIGREF22. Finally, we have total 32 features among them 16 are $\alpha $-scores and 16 are $s$-scores for each category (i.e. each question). We add both of $\alpha $ and $s$ scores together and scale according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16 valued matrix which represent the score for each questions from three different Dryhootch surveys. We use the output to fill up each survey, estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
Experimental Evaluation
To validate the performance of the LAXARY framework, we first divide the entire set of 210 users' twitter posts into a training and a test dataset. We then build the PTSD Linguistic Dictionary from the twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
Experimental Evaluation ::: Results
To provide initial results, we take 50% of the users' last week's data (the week in which they reported having PTSD) to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill up the surveys on the remaining 50% of the dataset. This training-test segmentation preserved the 50% distribution of PTSD and No PTSD users from the original dataset. Our final survey-based classification shows an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given the four intensity levels No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with scores of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment, which demonstrate the high accuracy of our classification. To compare against prior work, we also implemented the method proposed by Coppersmith et al. BIBREF11 and achieved an 86% overall accuracy in detecting PTSD users, following the same training-test distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the method of Coppersmith et al.; it shows both the superiority of our proposed method and the importance of the $s$-score estimation. We further illustrate the importance of the $\alpha $-score and $s$-score in Fig FIGREF30, which shows that as the percentage of training samples changes, LAXARY outperforms the Coppersmith et al. model under every condition. In terms of intensity, the Coppersmith et al. method provides no estimate at all, whereas LAXARY provides highly accurate intensity estimates for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by presenting the surveys filled out by the LAXARY model. Table TABREF29 shows the detailed accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how classification accuracy changes with training sample size for each survey, indicating that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week in which the PTSD diagnosis was made), there are no significant patterns of PTSD detection.
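A minimal sketch of the two headline metrics, assuming binary PTSD labels and the 0-3 intensity coding defined above; the listed predictions are hypothetical.

```python
from sklearn.metrics import accuracy_score, mean_squared_error

# Hypothetical predictions: y_* are binary PTSD labels, i_* are intensity levels
# 0 = No PTSD, 1 = Low Risk, 2 = Moderate Risk, 3 = High Risk PTSD.
y_true, y_pred = [1, 0, 1, 1, 0], [1, 0, 1, 0, 0]
i_true, i_pred = [3, 0, 2, 1, 0], [3, 0, 1, 1, 0]

detection_accuracy = accuracy_score(y_true, y_pred)   # the paper reports ~96% on its test split
intensity_mse = mean_squared_error(i_true, i_pred)    # the paper reports ~1.2
print(detection_accuracy, intensity_mse)
```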
Challenges and Future Work
LAXARY is a highly ambitious model that aims to fill up clinically validated survey tools using only twitter posts. Unlike previous twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model that offers better classification accuracy as well as intensity estimation of PTSD, and can more easily earn the trust of clinicians. The central challenge of LAXARY is to find twitter users through the twitter search engine and manually label them for analysis. While developing the PTSD Linguistic Dictionary, although we followed exactly the same development approach as the LIWC WordStat dictionary and tested reliability and validity, our dictionary has not yet been validated by domain experts, and PTSD detection is a more sensitive issue than stress or depression detection. Moreover, given the extreme challenges of finding veterans on twitter using our selection and inclusion criteria, it was extremely difficult to manually find evidence for the self-claimed PTSD sufferers. Although we have shown very promising initial findings on representing a blackbox model through clinically trusted tools, data from only 210 users is not enough to build a fully trustworthy model. Further clinical validation must also be carried out with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In future work, we aim to collect more data and run not only nationwide but also international data collection to turn our innovation into a real tool. Since we achieved promising results in detecting PTSD and its intensity using only twitter data, we also aim to develop Linguistic Dictionaries for other mental health issues, and to apply our proposed method to other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As the accuracy of any social media analysis depends heavily on the dataset, we aim to collect more data, engaging more researchers, to establish a set of mental-illness-specific Linguistic Dictionaries and evaluation techniques that solidify the generalizability of our proposed method.
Conclusion
To promote better care for trauma patients, it is important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition goes out of control and results in catastrophic impacts on society, the people around them, or the sufferers themselves. Although psychiatrists have developed several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, the diagnosis most often happens at a severe stage of illness, by which time some irreversible damage to the sufferer's mental health may already have occurred. On the other hand, due to their lack of explainability, existing twitter-based methods are not trusted by clinicians. In this paper, we proposed LAXARY, a novel method of filling up PTSD assessment surveys using weekly twitter posts. As clinical surveys are a trusted and understandable method, we believe that this approach will be able to gain the trust of clinicians for early detection of PTSD. Moreover, our proposed LAXARY model, which is the first of its kind, can be used to develop a Linguistic Dictionary for any type of mental disorder, providing a generalized and trustworthy mental health assessment framework of any kind. | Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error. |
bdd8368debcb1bdad14c454aaf96695ac5186b09 | bdd8368debcb1bdad14c454aaf96695ac5186b09_1 | Q: How is the intensity of the PTSD established?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities, including interpersonal violence, suicide attempts, suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health care and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population and is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by different veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among returning veterans. However, recent surveys on advanced evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments by professionals, which is itself another significant symptom of war-returning veterans with PTSD. Several existing studies have shown that the twitter posts of war veterans can be a significant indicator of their mental health and can be utilized to predict PTSD sufferers in time, before the condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied either on blackbox machine learning methods or on language-model-based sentiment extraction from posted texts, which failed to obtain the acceptance and trust of clinicians due to their lack of explainability.
In the context of the above research problem, we aim to answer the following research questions
Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter post analysis of war-veterans?
If so, what sort of analysis and approach are needed to develop such an XAI model to detect the prevalence and intensity of PTSD among war-veterans using only social media (twitter) analysis, where users are free to share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below,
The novelty of LAXARY lies in the proposed clinical-survey-based PTSD Linguistic Dictionary creation, with words/aspects that represent the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.
LAXARY includes a modified LIWC model that uses the PTSD Linguistic Dictionary to calculate the possible scores of each survey question and fill out the PTSD assessment surveys. This provides a practical way not only to determine fine-grained physiological and psychological health markers of PTSD without incurring expensive and laborious in-situ laboratory testing or surveys, but also to obtain the trust of clinicians, who expect to see traditional survey results for PTSD assessment.
Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using the twitter posts of war-veterans; (ii) design real surveys from popular symptom-based mental disease assessment surveys; (iii) define a single category and create a PTSD Linguistic Dictionary for each survey question, with multiple aspects/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspect/word-based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores, and $s$-scores for each category based on the $s$-scores of its dimensions; (vi) rank features according to their contribution to achieving separation among the categories associated with different $\alpha $-scores and $s$-scores, and select the feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected-features-based classification for filling up the surveys based on the classified categories, i.e., a PTSD assessment that is trustworthy within the psychiatry community.
Related Works
Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used n-gram language model (CLM) based s-score measure setting up some user centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and 3) one from the LIWC categories $\alpha $-scores and found that last one gives more accuracy than other ones. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns, (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used some random dictionary related to the human sentiment (positive/negative) word sets as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians. Islam et. al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and to measure its intensity among sufferers. Among these tools, the most popular and well accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and the expected benefits of the activities judged in Part I. Further scales are used in risky behavior analysis of individuals' daily activities, such as the Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess PTSD among war veterans, and considers the remaining questions irrelevant to PTSD. The details of the Dryhootch-chosen survey scales are stated in Table TABREF13, and Table TABREF14 shows a sample DOSPERT scale demographic chosen by Dryhootch. The thresholds (in Table TABREF13) are used to calculate the risky behavior limits; for example, if an individual's weekly DOSPERT score goes over 28, he or she is in a critical situation in terms of the risk-taking symptoms of PTSD. Dryhootch defines the intensity of PTSD in four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS); a minimal code sketch of this mapping follows the list below.
High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.
Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.
Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
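A minimal sketch of this risk stratification is given below. The DOSPERT threshold of 28 comes from the text; the BSSS and VIAS thresholds are placeholders standing in for the values in Table TABREF13.

```python
# DOSPERT threshold of 28 is given in the text; BSSS and VIAS values are placeholders.
THRESHOLDS = {"DOSPERT": 28, "BSSS": 14, "VIAS": 20}

RISK_LABELS = {0: "No PTSD", 1: "Low Risk PTSD", 2: "Moderate Risk PTSD", 3: "High Risk PTSD"}

def ptsd_risk(weekly_scores):
    """Count how many of the three weekly survey scores exceed their threshold."""
    exceeded = sum(1 for survey, score in weekly_scores.items() if score > THRESHOLDS[survey])
    return RISK_LABELS[exceeded]

print(ptsd_risk({"DOSPERT": 31, "BSSS": 10, "VIAS": 22}))   # -> "Moderate Risk PTSD"
```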
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use automated regular-expression-based searching to find potential veterans with PTSD on twitter, and then refine the list manually. First, we select different keywords to search for twitter users of different categories. For example, to find self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, such as post trauma, post traumatic disorder, and PTSD. We use a regular expression to search for statements in which the user self-identifies as having been diagnosed with PTSD; Table TABREF27 shows an example of such a self-identified tweet. To find veterans, we mostly visit the twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", and "Veterans Benefits @VAVetBenefits". We define the following inclusion criteria: a twitter user is part of this study if he/she describes himself/herself as a veteran in the introduction and has at least 25 tweets in the last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to have been diagnosed with PTSD in their twitter posts. We find 685 matching tweets, which are manually reviewed to determine whether they indicate a genuine statement of a PTSD diagnosis. Next, we select the username that authored each of these tweets and retrieve the last week's tweets via the Twitter API. We then filter out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering leaves us with 305 users as positive examples. We repeat this process for a group of randomly selected users: we randomly select 3,000 twitter users who are veterans per their introduction and have at least 25 tweets in the last week. After filtering (as above), 2,423 users remain, whose tweets are used as negative examples, yielding the entire week's twitter posts of 2,728 users, of whom 305 are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (the 305 self-claimed PTSD sufferers plus users randomly chosen from the remaining 2,423) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by at least one of the three surveys, and the remaining 118 users were diagnosed with no PTSD. Among the clinically diagnosed PTSD sufferers, 17 had not self-identified before; conversely, 7 of the self-identified PTSD sufferers were assessed as having no PTSD by the assessment tools. The response rates of PTSD and no-PTSD users are 27% and 12% respectively. In summary, we collected one week of tweets from 2,728 veterans, of whom 305 claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we have a dataset of 210 veteran twitter users, of whom 92 are assessed with PTSD and 118 are diagnosed with no PTSD using clinically validated surveys. The severity of PTSD is estimated as non-existent, light, moderate or high based on how many surveys support the existence of PTSD for a participant, according to the Dryhootch manual BIBREF18, BIBREF19.
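The paper does not publish its exact regular expression; the pattern below is a hypothetical example of the kind of self-identified-diagnosis search described above.

```python
import re

# Hypothetical pattern in the spirit of the search described above.
SELF_DIAGNOSIS = re.compile(
    r"\bI\s+(?:was|am|have\s+been|got)\s+diagnosed\s+with\s+"
    r"(?:ptsd|post[-\s]?traumatic\s+stress(?:\s+disorder)?)\b",
    re.IGNORECASE,
)

def is_self_identified(tweet: str) -> bool:
    return SELF_DIAGNOSIS.search(tweet) is not None

print(is_self_identified("After two deployments I was diagnosed with PTSD last year."))  # True
```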
Twitter-based PTSD Detection ::: Pre-processing
We download all twitter posts of the 210 war-veteran users who completed the clinical surveys, including the clinically diagnosed PTSD sufferers, which results in a total of 12,385 tweets. Fig FIGREF16 shows the monthly average number of tweets of each of the 210 veteran twitter users. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g., work, worked, working, worker, etc.) or “job*” (e.g., job, jobs, jobless, etc.) are identified as work-related Tweets, with the remainder categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or a job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e., overall Tweets, work-related Tweets, and non-work-related Tweets) on a daily basis, and create a text file for each week for each group.
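A minimal sketch of the work/non-work categorization and weekly grouping described above; the tweet triples and the week encoding are illustrative assumptions, and writing the per-week text files is omitted.

```python
import re
from collections import defaultdict

WORK_PATTERN = re.compile(r"\b(?:work\w*|job\w*)\b", re.IGNORECASE)   # matches work*, job*

def categorize(tweet: str) -> str:
    return "work" if WORK_PATTERN.search(tweet) else "non_work"

def weekly_groups(tweets):
    """Group (user_id, iso_week, text) triples by user, week and work/non-work category."""
    groups = defaultdict(list)
    for user, week, text in tweets:
        groups[(user, week, categorize(text))].append(text)
    return groups

demo = [("u1", "2019-W32", "Back to work. Projects are firing back up."),
        ("u1", "2019-W32", "Great game last night!")]
print({k: len(v) for k, v in weekly_groups(demo).items()})
```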
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use the PTSD classification algorithm proposed by Coppersmith et al. to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) a model based on LIWC categories, in addition to the prior ones. The LMs have been shown to be effective for Twitter classification tasks BIBREF9, and LIWC has previously been used for the analysis of mental health on Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) on the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) on the tweets of No PTSD users. Each test tweet $t$ is then scored by comparing the probabilities from each LM, a quantity called the $s$-score.
A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user.
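The $s$-score equation itself is not reproduced in this text. Assuming, following the Coppersmith-style setup cited above, that it is the ratio of the PTSD language model's probability of a tweet to the control model's probability, a sketch of the scoring and thresholding step might look as follows (the two language models are passed in as log-probability functions):

```python
import math
from typing import Callable

def s_score(tweet: str,
            logprob_ptsd: Callable[[str], float],
            logprob_ctrl: Callable[[str], float]) -> float:
    """Assumed form of the score: P_ptsd(tweet) / P_ctrl(tweet)."""
    return math.exp(logprob_ptsd(tweet) - logprob_ctrl(tweet))

def classify(tweet, logprob_ptsd, logprob_ctrl, threshold=1.0):
    """A score above the threshold of 1 is taken as the positive (PTSD) class."""
    return "PTSD" if s_score(tweet, logprob_ptsd, logprob_ctrl) > threshold else "No PTSD"
```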
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
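A minimal sketch of the per-user LIWC proportion features described above. The tiny word lists stand in for the real LIWC categories, which are not reproduced here.

```python
# For each selected category, the proportion of a user's tweets containing
# at least one word from that category.
LIWC = {
    "anxiety": {"worried", "afraid", "nervous", "panic"},
    "anger": {"hate", "angry", "furious"},
    "i": {"i", "me", "my", "mine"},
}

def liwc_proportions(tweets):
    feats = {}
    for cat, words in LIWC.items():
        hits = sum(1 for t in tweets if words & set(t.lower().split()))
        feats[cat] = hits / len(tweets) if tweets else 0.0
    return feats

print(liwc_proportions(["I am so worried today", "great day at the park"]))
```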
LAXARY: Explainable PTSD Detection Model
The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary. Prior work shows that linguistic-dictionary-based text analysis is highly effective in twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary in order to explain automatic PTSD assessment and establish trustworthiness with clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use the WordStat dictionary format developed for LIWC for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 shows a sample WordStat dictionary. There are several steps in using this dictionary, stated as follows (a minimal sketch of how such a dictionary can be applied follows the list):
Pronoun selection: First we define the pronouns of the target sentiment. Here we use first person singular pronouns (i.e., I, me, mine, etc.), which means we only count those sentences or segments that relate to the first person singular, i.e., to the writer himself or herself.
Category selection: We define the categories of each word set so that we can analyze the text analysis scores of both the categories and their dimensions. We chose three categories based on the three different surveys: 1) DOSPERT scale; 2) BSSS scale; and 3) VIAS scale.
Dimension selection: We define the word sets (also called dimensions) for each category. We choose one dimension for each of the questions under each category to reflect the real survey system evaluation. Our chosen categories are stated in Fig FIGREF20.
Score calculation $\alpha $-score: $\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts.
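The sketch below illustrates how a WordStat-style custom dictionary could be applied with the pronoun restriction from the first step: only sentences containing a first-person singular pronoun are counted, and per-dimension usage percentages are returned. The word sets are placeholders, and the actual $\alpha $-score is the Cronbach's alpha computed in the next subsection from these percentages.

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

# Placeholder dictionary: one word set ("dimension") per survey question.
PTSD_DICT = {
    "DOSPERT_q1": {"gamble", "bet", "risk"},
    "BSSS_q1": {"thrill", "impulsive"},
    "VIAS_q1": {"hopeless", "worthless"},
}

def category_percentages(text: str) -> dict:
    """Percentage of dictionary words per dimension, counted only in
    sentences that contain a first-person singular pronoun."""
    usage = {dim: 0 for dim in PTSD_DICT}
    n_tokens = 0
    for sentence in re.split(r"[.!?]+", text):
        tokens = sentence.lower().split()
        if not (set(tokens) & FIRST_PERSON):
            continue                                   # pronoun-selection step
        n_tokens += len(tokens)
        for dim, words in PTSD_DICT.items():
            usage[dim] += sum(1 for tok in tokens if tok in words)
    return {d: 100.0 * c / n_tokens if n_tokens else 0.0 for d, c in usage.items()}

print(category_percentages("I took a stupid bet again. The weather was fine."))
```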
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties such as reliability and validity as per American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$(number of text files) $\times $ $J$(number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises percentage values of each word/stem are calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on its inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' Tweets which further generated a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirm the high reliability of our PTSD Dictionary created PTSD survey based categories. After assessing the reliability of the PTSD Linguistic dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminate validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664,p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity for our newly created PTSD Linguistic dictionary.
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use the exact similar method of LIWC to extract $\alpha $-scores for each dimension and categories except we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have total 16 $\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we called scaling-score ($s$-score). $s$-score is calculated from $\alpha $-scores. The purpose of using $s$-score is to put exact scores of each of the dimension and category thus we can apply the same method used in real weekly survey system. The idea is, we divide each category into their corresponding scale factor (i.e., for DOSPERT scale, BSSS scale and VIAS scales) and divide them into 8, 3 and 5 scaling factors which are used in real survey system. Then we set the $s$-score from the scaling factors from the $\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-score of the dimensions to calculate cumulative $s$-score of particular categories which is displayed in Fig FIGREF22. Finally, we have total 32 features among them 16 are $\alpha $-scores and 16 are $s$-scores for each category (i.e. each question). We add both of $\alpha $ and $s$ scores together and scale according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16 valued matrix which represent the score for each questions from three different Dryhootch surveys. We use the output to fill up each survey, estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
Experimental Evaluation
To validate the performance of LAXARY framework, we first divide the entire 210 users' twitter posts into training and test dataset. Then, we first developed PTSD Linguistic Dictionary from the twitter posts from training dataset and apply LAXARY framework on test dataset.
Experimental Evaluation ::: Results
To provide an initial results, we take 50% of users' last week's (the week they responded of having PTSD) data to develop PTSD Linguistic dictionary and apply LAXARY framework to fill up surveys on rest of 50% dataset. The distribution of this training-test dataset segmentation followed a 50% distribution of PTSD and No PTSD from the original dataset. Our final survey based classification results showed an accuracy of 96% in detecting PTSD and mean squared error of 1.2 in estimating its intensity given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment which provide the very good accuracy of our classification. To compare the outperformance of our method, we also implemented Coppersmith et. al. proposed method and achieved an 86% overall accuracy of detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparisons between LAXARY and Coppersmith et. al. proposed method. Here we can see, the outperformance of our proposed method as well as the importance of $s-score$ estimation. We also illustrates the importance of $\alpha -score$ and $S-score$ in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY models outperforms Coppersmith et. al. proposed model under any condition. In terms of intensity, Coppersmith et. al. totally fails to provide any idea however LAXARY provides extremely accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31) which can be explained simply providing LAXARY model filled out survey details. Table TABREF29 shows the details of accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows the classification accuracy changes over the training sample sizes for each survey which shows that DOSPERT scale outperform other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week diagnosis of PTSD was taken), there are no significant patterns of PTSD detection.
Challenges and Future Work
LAXARY is a highly ambitious model that targets to fill up clinically validated survey tools using only twitter posts. Unlike the previous twitter based mental health assessment tools, LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment and can easily obtain the trust of clinicians. The central challenge of LAXARY is to search twitter users from twitter search engine and manually label them for analysis. While developing PTSD Linguistic Dictionary, although we followed exactly same development idea of LIWC WordStat dictionary and tested reliability and validity, our dictionary was not still validated by domain experts as PTSD detection is highly sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching veterans in twitter using our selection and inclusion criteria, it was extremely difficult to manually find the evidence of the self-claimed PTSD sufferers. Although, we have shown extremely promising initial findings about the representation of a blackbox model into clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in future with real clinicians to firmly validate LAXARY model provided PTSD assessment outcomes. In future, we aim to collect more data and run not only nationwide but also international-wide data collection to establish our innovation into a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only twitter data, we aim to develop Linguistic Dictionary for other mental health issues too. Moreover, we will apply our proposed method in other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD) etc. As we know, accuracy of particular social media analysis depends on the dataset mostly. We aim to collect more data engaging more researchers to establish a set of mental illness specific Linguistic Database and evaluation technique to solidify the genralizability of our proposed method.
Conclusion
To promote better comfort to the trauma patients, it is really important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time before going out of control that may result catastrophic impacts on society, people around or even sufferers themselves. Although, psychiatrists invented several clinical diagnosis tools (i.e., surveys) by assessing symptoms, signs and impairment associated with PTSD, most of the times, the process of diagnosis happens at the severe stage of illness which may have already caused some irreversible damages of mental health of the sufferers. On the other hand, due to lack of explainability, existing twitter based methods are not trusted by the clinicians. In this paper, we proposed, LAXARY, a novel method of filling up PTSD assessment surveys using weekly twitter posts. As the clinical surveys are trusted and understandable method, we believe that this method will be able to gain trust of clinicians towards early detection of PTSD. Moreover, our proposed LAXARY model, which is first of its kind, can be used to develop any type of mental disorder Linguistic Dictionary providing a generalized and trustworthy mental health assessment framework of any kind. | defined into four categories from high risk, moderate risk, to low risk |
3334f50fe1796ce0df9dd58540e9c08be5856c23 | 3334f50fe1796ce0df9dd58540e9c08be5856c23_0 | Q: How is LIWC incorporated into this system?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed though behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population which is typically chronic and treatment resistant BIBREF0. The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent advanced evidence-based care for PTSD sufferers surveys have showed that veterans, suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD. Several existing researches showed that, twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability.
In the context of the above research problem, we aim to answer the following research questions
Given clinicians have trust on clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter posts analysis of war-veterans?
If possible, what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among war-veterans only using the social media (twitter) analysis where users are free to share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below,
The novelty of LAXARY lies on the proposed clinical surveys-based PTSD Linguistic dictionary creation with words/aspects which represents the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.
LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment.
Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) Develop PTSD Detection System using twitter posts of war-veterans(ii) design real surveys from the popular symptoms based mental disease assessment surveys; (iii) define single category and create PTSD Linguistic Dictionary for each survey question and multiple aspect/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspects/words based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to the contributions of achieving separation among categories associated with different $\alpha $-scores and $s$-scores; and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of selected features-based classification for filling up surveys based on classified categories i.e. PTSD assessment which is trustworthy among the psychiatry community.
Related Works
Twitter activity based mental health assessment has been utmost importance to the Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used n-gram language model (CLM) based s-score measure setting up some user centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and 3) one from the LIWC categories $\alpha $-scores and found that last one gives more accuracy than other ones. BIBREF11 used two types of $s$-scores taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first person pronouns, and a decrease in third person pronouns, (via LIWC) is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used some random dictionary related to the human sentiment (positive/negative) word sets as category words to estimate the mental health but very few of them addressed the problem of explainability of their solution to obtain trust of clinicians. Islam et. al proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification using the BIBREF14 which fails to provide trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop LAXARY model where first we start investigating clinically validated survey tools which are trustworthy methods of PTSD assessment among clinicians, build our category sets based on the survey questions and use these as dictionary words in terms of first person singular number pronouns aspect for next level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are being used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of individual's daily activities such as, The Berlin Social Support Scales (BSSS) BIBREF16 and Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess the PTSD among war veterans and consider rest of them as irrelevant to PTSD. The details of dryhootch chosen survey scale are stated in Table TABREF13. Table!TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in critical situation in terms of risk taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )
High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.
Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.
Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use an automated regular expression based searching to find potential veterans with PTSD in twitter, and then refine the list manually. First, we select different keywords to search twitter users of different categories. For example, to search self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD for example, post trauma, post traumatic disorder, PTSD etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet posts. To search veterans, we mostly visit to different twitter accounts of veterans organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits" etc. We define an inclusion criteria as follows: one twitter user will be part of this study if he/she describes himself/herself as a veteran in the introduction and have at least 25 tweets in last week. After choosing the initial twitter users, we search for self-identified PTSD sufferers who claim to be diagnosed with PTSD in their twitter posts. We find 685 matching tweets which are manually reviewed to determine if they indicate a genuine statement of a diagnosis for PTSD. Next, we select the username that authored each of these tweets and retrieve last week's tweets via the Twitter API. We then filtered out users with less than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system.) This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 twitter users who are veterans as per their introduction and have at least 25 tweets in last one week. After filtering (as above) in total 2,423 users remain, whose tweets are used as negative examples developing a 2,728 user's entire weeks' twitter posts where 305 users are self-claimed PTSD sufferers. We distributed Dryhootch chosen surveys among 1,200 users (305 users are self claimed PTSD sufferers and rest of them are randomly chosen from previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed as PTSD by any of the three surveys and rest of the 118 users are diagnosed with NO PTSD. Among the clinically diagnosed PTSD sufferers, 17 of them were not self-identified before. However, 7 of the self-identified PTSD sufferers are assessed with no PTSD by PTSD assessment tools. The response rates of PTSD and NO PTSD users are 27% and 12%. In summary, we have collected one week of tweets from 2,728 veterans where 305 users claimed to have diagnosed with PTSD. After distributing Dryhootch surveys, we have a dataset of 210 veteran twitter users among them 92 users are assessed with PTSD and 118 users are diagnosed with no PTSD using clinically validated surveys. The severity of the PTSD are estimated as Non-existent, light, moderate and high PTSD based on how many surveys support the existence of PTSD among the participants according to dryhootch manual BIBREF18, BIBREF19.
Twitter-based PTSD Detection ::: Pre-processing
We download 210 users' all twitter posts who are war veterans and clinically diagnosed with PTSD sufferers as well which resulted a total 12,385 tweets. Fig FIGREF16 shows each of the 210 veteran twitter users' monthly average tweets. We categorize these Tweets into two groups: Tweets related to work and Tweets not related to work. That is, only the Tweets that use a form of the word “work*” (e.g. work,worked, working, worker, etc.) or “job*” (e.g. job, jobs, jobless, etc.) are identified as work-related Tweets, with the remaining categorized as non-work-related Tweets. This categorization method increases the likelihood that most Tweets in the work group are indeed talking about work or job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related Tweets, about 5.4% of all Tweets written in English (and 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of Tweets (i.e. overall Tweets, work-related Tweets, and non work-related Tweets) on a daily basis, and create a text file for each week for each group.
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use Coppersmith proposed PTSD classification algorithm to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92,-118) to train three classifiers: (i) unigram language model (ULM) examining individual whole words, (ii) character n-gram language model (CLM), and (iii) LIWC based categorical models above all of the prior ones. The LMs have been shown effective for Twitter classification tasks BIBREF9 and LIWC has been previously used for analysis of mental health in Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets from No PTSD users. Each test tweet $t$ is scored by comparing probabilities from each LM called $s-score$
A threshold of 1 for $s-score$ divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a loglinear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contained a URL, as these often pertain to events external to the user.
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
LAXARY: Explainable PTSD Detection Model
The heart of LAXARY framework is the construction of PTSD Linguistic Dictionary. Prior works show that linguistic dictionary based text analysis has been much effective in twitter based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind that develops its own linguistic dictionary to explain automatic PTSD assessment to confirm trustworthiness to clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use LIWC developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file will be referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 is a sample WordStat dictionary. There are several steps to use this dictionary which are stated as follows:
Pronoun selection: At first we have to define the pronouns of the target sentiment. Here we used first person singular number pronouns (i.e., I, me, mine etc.) that means we only count those sentences or segments which are only related to first person singular number i.e., related to the person himself.
Category selection: We have to define the categories of each word set thus we can analyze the categories as well as dimensions' text analysis scores. We chose three categories based on the three different surveys: 1) DOSPERT scale; 2) BSSS scale; and 3) VIAS scale.
Dimension selection: We define the word sets (also called dimensions) for each category. We chose one dimension for each question under each category to reflect the real survey-based evaluation. Our chosen categories are stated in Fig FIGREF20.
Score calculation ($\alpha$-score): $\alpha$-scores refer to Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the occurrence/non-occurrence of each dictionary word, whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within the texts.
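Purely for illustration, the structure produced by these steps can be pictured as nested word sets, one category per survey and one dimension per question; every word below is a placeholder, not an entry of the actual PTSD Linguistic Dictionary shown in Fig FIGREF8 and Fig FIGREF20.

# Placeholder sketch: one category per survey, one dimension (word set) per survey question.
ptsd_dictionary = {
    "DOSPERT": {
        "q1": {"risk", "reckless", "unsafe"},        # hypothetical words
        "q2": {"gamble", "bet", "thrill"},
    },
    "BSSS": {
        "q1": {"alone", "nobody", "unsupported"},
    },
    "VIAS": {
        "q1": {"control", "discipline", "habit"},
    },
}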
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties, such as reliability and validity, as per the American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$ (number of text files) $\times$ $J$ (number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises the percentage values of each word/stem calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file, where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' tweets, which generated a 23,562-response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 based on the uncorrected method and .96 based on the binary method, which confirms the high reliability of the PTSD-survey-based categories created with our PTSD Dictionary. After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results revealed that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664, p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.
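A minimal NumPy sketch of the reliability computation on the $N \times J$ response matrix described above, using the standard Cronbach's alpha formula; the toy matrix and variable names are hypothetical.

import numpy as np

def cronbach_alpha(matrix):
    """matrix: N x J array (rows = text files/users, columns = dictionary words)."""
    m = np.asarray(matrix, dtype=float)
    j = m.shape[1]
    item_vars = m.var(axis=0, ddof=1).sum()          # sum of per-item variances
    total_var = m.sum(axis=1).var(ddof=1)            # variance of the total scores
    return (j / (j - 1)) * (1 - item_vars / total_var)

# Binary method: cells hold 1 if the word occurs in the text file, 0 otherwise.
binary_matrix = np.array([[1, 1, 1],
                          [1, 1, 0],
                          [0, 0, 0],
                          [0, 0, 1]])
print(cronbach_alpha(binary_matrix))
# Uncorrected method: the same formula applied to a matrix of word/stem percentages,
# e.g. cronbach_alpha(percentage_matrix).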
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use the same method as LIWC to extract $\alpha$-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23. This gives us 16 $\alpha$-scores in total. In addition, we propose a new type of feature, which we call the scaling score ($s$-score), calculated from the $\alpha$-scores. The purpose of the $s$-score is to assign an exact score to each dimension and category so that we can apply the same method used in the real weekly survey system. The idea is to divide each category according to its corresponding scale (i.e., the DOSPERT, BSSS, and VIAS scales use the 8, 3, and 5 scaling factors of the real survey system) and then derive the $s$-score from the scaling factors and the $\alpha$-scores of the corresponding question dimension. The algorithm is stated in Figure FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension and then add up the $s$-scores of the dimensions to obtain the cumulative $s$-score of the corresponding category, which is displayed in Fig FIGREF22. Finally, we have 32 features in total, of which 16 are $\alpha$-scores and 16 are $s$-scores (one pair per category, i.e., per question). We add the $\alpha$- and $s$-scores together and scale them to their corresponding survey score ranges using min-max standardization. The final output is a 16-valued matrix representing the score for each question from the three different Dryhootch surveys. We use this output to fill out each survey and estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
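The exact $s$-score algorithm is given in Figure FIGREF23, which is not reproduced here; the sketch below is only one possible reading of the description above, in which each dimension's $\alpha$-score is mapped onto its survey's scaling factor, dimension $s$-scores are summed per category, and the combined $\alpha$ + $s$ scores are min-max standardized to the survey's score range. The function names and the linear mapping are our assumptions.

SCALE_FACTORS = {"DOSPERT": 8, "BSSS": 3, "VIAS": 5}   # scaling factors used in the real surveys

def s_scores_from_alphas(alpha_scores, survey):
    """alpha_scores: dict dimension -> alpha in [0, 1]; returns per-dimension s-scores."""
    k = SCALE_FACTORS[survey]
    return {dim: round(a * k) for dim, a in alpha_scores.items()}   # assumption: linear mapping

def category_score(alpha_scores, survey, score_min, score_max):
    s = s_scores_from_alphas(alpha_scores, survey)
    raw = sum(alpha_scores.values()) + sum(s.values())              # add alpha and s scores
    hi = len(alpha_scores) * (1 + SCALE_FACTORS[survey])            # maximum possible raw value
    return score_min + raw / hi * (score_max - score_min)           # min-max standardization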
Experimental Evaluation
To validate the performance of the LAXARY framework, we first divide the Twitter posts of all 210 users into training and test datasets. We then develop the PTSD Linguistic Dictionary from the Twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
Experimental Evaluation ::: Results
To provide initial results, we take 50% of the users' last week's data (the week in which they reported having PTSD) to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill out the surveys on the remaining 50% of the dataset. This training-test segmentation followed a 50% distribution of PTSD and No PTSD users from the original dataset. Our final survey-based classification results showed an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given the four intensity levels No PTSD, Low Risk PTSD, Moderate Risk PTSD, and High Risk PTSD with scores of 0, 1, 2, and 3, respectively. Table TABREF29 shows the classification details of our experiment, which demonstrate the very good accuracy of our classification. For comparison, we also implemented the method proposed by Coppersmith et al. BIBREF11 and achieved an 86% overall accuracy in detecting PTSD users, following the same training-test distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the Coppersmith et al. method; it shows that our proposed method outperforms theirs and highlights the importance of $s$-score estimation. We also illustrate the importance of the $\alpha$-score and $s$-score in Fig FIGREF30, which shows that as we change the number of training samples (%), the LAXARY models outperform the Coppersmith et al. model under every condition. In terms of intensity, the Coppersmith et al. method provides no estimate at all, whereas LAXARY provides highly accurate intensity estimates for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by presenting the survey details filled out by the LAXARY model. Table TABREF29 shows the detailed accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how classification accuracy changes with the training sample size for each survey, indicating that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week in which the PTSD diagnosis was made), there are no significant patterns of PTSD detection.
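As a concrete illustration of how the two reported numbers are computed, the sketch below uses hypothetical gold and predicted intensity labels; the 0-3 label encoding follows the text, while the example values and the use of scikit-learn are our own choices.

from sklearn.metrics import accuracy_score, mean_squared_error

# Hypothetical labels: 0 = No PTSD, 1 = Low, 2 = Moderate, 3 = High Risk PTSD.
y_true = [0, 3, 2, 0, 1, 3]
y_pred = [0, 3, 1, 0, 1, 2]
detect_true = [int(y > 0) for y in y_true]   # collapse to PTSD vs No PTSD for detection accuracy
detect_pred = [int(y > 0) for y in y_pred]
print("PTSD detection accuracy:", accuracy_score(detect_true, detect_pred))
print("Intensity estimation MSE:", mean_squared_error(y_true, y_pred))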
Challenges and Future Work
LAXARY is a highly ambitious model that aims to fill out clinically validated survey tools using only Twitter posts. Unlike previous Twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model that can deliver better classification accuracy and intensity estimation of PTSD, and can more easily gain the trust of clinicians. The central challenge of LAXARY is to find Twitter users via the Twitter search engine and manually label them for analysis. While developing the PTSD Linguistic Dictionary, although we followed exactly the same development approach as the LIWC WordStat dictionary and tested its reliability and validity, our dictionary has not yet been validated by domain experts, as PTSD detection is a more sensitive issue than stress or depression detection. Moreover, given the extreme challenges of finding veterans on Twitter under our selection and inclusion criteria, it was extremely difficult to manually find evidence for the self-claimed PTSD sufferers. Although we have shown extremely promising initial findings on representing a blackbox model through clinically trusted tools, data from only 210 users is not enough to build a trustworthy model. More clinical validation must also be done in the future with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In the future, we aim to collect more data and conduct not only nationwide but also international data collection to turn our innovation into a real tool. Beyond that, since we achieved promising results in detecting PTSD and its intensity using only Twitter data, we aim to develop linguistic dictionaries for other mental health issues as well, applying our proposed method to other types of mental illness such as depression, bipolar disorder, suicidal ideation, and seasonal affective disorder (SAD). As the accuracy of any social media analysis depends largely on the dataset, we aim to collect more data and engage more researchers to establish a set of mental-illness-specific linguistic databases and evaluation techniques to solidify the generalizability of our proposed method.
Conclusion
To promote better comfort for trauma patients, it is very important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition goes out of control and causes catastrophic impacts on society, the people around them, or the sufferers themselves. Although psychiatrists have devised several clinical diagnostic tools (i.e., surveys) that assess the symptoms, signs, and impairment associated with PTSD, most of the time the diagnosis happens at a severe stage of the illness, which may already have caused irreversible damage to the sufferer's mental health. On the other hand, due to their lack of explainability, existing Twitter-based methods are not trusted by clinicians. In this paper, we proposed LAXARY, a novel method for filling out PTSD assessment surveys using weekly Twitter posts. As clinical surveys are a trusted and understandable method, we believe that this approach will be able to gain clinicians' trust for early detection of PTSD. Moreover, our proposed LAXARY model, which is the first of its kind, can be used to develop a linguistic dictionary for any type of mental disorder, providing a generalized and trustworthy mental health assessment framework. | For each user, we calculate the proportion of tweets scored positively by each LIWC category. |
3334f50fe1796ce0df9dd58540e9c08be5856c23 | 3334f50fe1796ce0df9dd58540e9c08be5856c23_1 | Q: How is LIWC incorporated into this system?
| to calculate the possible scores of each survey question using PTSD Linguistic Dictionary |
7081b6909cb87b58a7b85017a2278275be58bf60 | 7081b6909cb87b58a7b85017a2278275be58bf60_0 | Q: How many twitter users are surveyed using the clinically validated survey?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high-risk activities, including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health care, and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veterans Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high-risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population and is typically chronic and treatment resistant BIBREF0. The PTSD patient support programs organized by various veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among returning veterans. However, recent surveys on advanced evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments with professionals, which is itself another significant symptom of war-returning veterans with PTSD. Several existing studies have shown that the Twitter posts of war veterans can be a significant indicator of their mental health and can be utilized to predict PTSD sufferers in time, before the condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language-model-based sentiment extraction from posted texts, which failed to obtain the acceptance and trust of clinicians due to their lack of explainability.
In the context of the above research problem, we aim to answer the following research questions:
Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using analysis of war veterans' Twitter posts?
If so, what kind of analysis and approach are needed to develop such an XAI model to detect the prevalence and intensity of PTSD among war veterans, using only social media (Twitter) analysis, where users are free to share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and intensity estimation for clinicians.
The key contributions of our work are summarized below:
The novelty of LAXARY lies in the proposed clinical-survey-based creation of a PTSD Linguistic Dictionary, whose words/aspects represent the instantaneous perturbation of Twitter-based sentiments as a specific pattern and help calculate the possible score of each survey question.
LAXARY includes a modified LIWC model that uses the PTSD Linguistic Dictionary to calculate the possible score of each survey question and fill out the PTSD assessment surveys. This provides a practical way not only to determine fine-grained discrimination of the physiological and psychological health markers of PTSD without incurring expensive and laborious in-situ laboratory testing or surveys, but also to obtain the trust of clinicians, who expect to see traditional survey results for PTSD assessment.
Finally, we evaluate the accuracy of the LAXARY model and the reliability and validity of the generated PTSD Linguistic Dictionary using real Twitter users' posts. Our results show that, given normal weekly messages posted on Twitter, LAXARY can fill out the surveys with very high accuracy, identifying PTSD ($\approx 96\%$ accuracy) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using the Twitter posts of war veterans; (ii) design real surveys from popular symptom-based mental disease assessment surveys; (iii) define a single category and create the PTSD Linguistic Dictionary for each survey question, with multiple aspects/words for each question; (iv) calculate $\alpha$-scores for each category and dimension based on linguistic inquiry and word count as well as the aspect/word-based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha$-scores, and $s$-scores for each category based on the $s$-scores of its dimensions; (vi) rank features according to their contribution to achieving separation among the categories associated with different $\alpha$-scores and $s$-scores, and select the feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected-feature-based classification for filling out the surveys based on the classified categories, i.e., a PTSD assessment that is trustworthy within the psychiatric community.
Related Works
Twitter-activity-based mental health assessment has been of utmost importance to natural language processing (NLP) researchers and social media analysts for years. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used an n-gram language model (CLM) based $s$-score measure, setting up user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) a unigram language model (ULM); (ii) a character n-gram language model (CLM); and (iii) one based on LIWC category $\alpha$-scores, and found that the last one gives higher accuracy than the others. BIBREF11 used two types of $s$-scores, taking the ratio of the negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self-narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar disorder BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first-person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first-person pronouns, and a decrease in third-person pronouns (via LIWC), is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used some dictionary of human sentiment (positive/negative) word sets as category words to estimate mental health, but very few of them addressed the explainability of their solutions in order to obtain the trust of clinicians. Islam et al. BIBREF14 proposed an explainable topic modeling framework that ranks different mental health features using Local Interpretable Model-Agnostic Explanations and visualizes them to understand the features involved in mental health status classification, but it fails to gain the trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop the LAXARY model: we first investigate clinically validated survey tools, which are trusted methods of PTSD assessment among clinicians, build our category sets based on the survey questions, and use these as dictionary words (restricted to the first-person singular pronoun aspect) for the next-level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to the sentiment category scores of naive LIWC) that is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of these tools, the most popular and well-accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. Other scales used in the analysis of risky behavior in individuals' daily activities include The Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6, and 5 questions respectively from the above-mentioned survey systems to assess PTSD among war veterans and considers the rest of them irrelevant to PTSD. The details of the Dryhootch-chosen survey scales are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by Dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if an individual's weekly DOSPERT score goes over 28, he or she is in a critical situation in terms of the risk-taking symptoms of PTSD. Dryhootch divides the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS, and VIAS):
High risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools, i.e., DOSPERT, BSSS, and VIAS, then he/she is in a high-risk situation that needs immediate mental support to avoid catastrophic effects on the individual's health or the lives of the people around them.
Moderate risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in a moderate-risk situation that needs close observation and peer mentoring to avoid risk progression.
Low risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If an individual veteran's weekly PTSD assessment scores stay below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
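Since the four intensity levels are defined purely by how many of the three weekly scores exceed their thresholds, the rule above can be written as a small function; the DOSPERT threshold of 28 comes from the text, while the BSSS and VIAS thresholds in the example are placeholders standing in for the values in Table TABREF13.

INTENSITY = {0: "No PTSD", 1: "Low Risk PTSD", 2: "Moderate Risk PTSD", 3: "High Risk PTSD"}

def weekly_intensity(scores, thresholds):
    """scores / thresholds: dicts keyed by survey name (DOSPERT, BSSS, VIAS)."""
    flagged = sum(1 for name in scores if scores[name] > thresholds[name])  # tools exceeding their limit
    return flagged, INTENSITY[flagged]

# Example call; only the DOSPERT threshold (28) is taken from the text.
print(weekly_intensity({"DOSPERT": 30, "BSSS": 10, "VIAS": 12},
                       {"DOSPERT": 28, "BSSS": 14, "VIAS": 15}))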
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop a Twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use automated regular-expression-based searching to find potential veterans with PTSD on Twitter and then refine the list manually. First, we select different keywords to search for Twitter users of different categories. For example, to find self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, for example, post trauma, post traumatic disorder, PTSD, etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet post. To find veterans, we mostly visit the Twitter accounts of veterans' organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits", etc. We define the inclusion criteria as follows: a Twitter user is part of this study if he/she describes himself/herself as a veteran in the introduction and has at least 25 tweets in the last week. After choosing the initial Twitter users, we search for self-identified PTSD sufferers who claim to have been diagnosed with PTSD in their Twitter posts. We find 685 matching tweets, which are manually reviewed to determine whether they indicate a genuine statement of a PTSD diagnosis. Next, we select the username that authored each of these tweets and retrieve the last week's tweets via the Twitter API. We then filter out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering left us with 305 users as positive examples. We repeated this process for a group of randomly selected users. We randomly selected 3,000 Twitter users who are veterans as per their introduction and have at least 25 tweets in the last week. After filtering (as above), 2,423 users remain in total, whose tweets are used as negative examples, yielding an entire week of Twitter posts from 2,728 users, of whom 305 are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (the 305 self-claimed PTSD sufferers and the rest randomly chosen from the previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by any of the three surveys, and the remaining 118 users were diagnosed with no PTSD. Among the clinically diagnosed PTSD sufferers, 17 had not self-identified before. However, 7 of the self-identified PTSD sufferers were assessed as having no PTSD by the PTSD assessment tools. The response rates of the PTSD and No PTSD users are 27% and 12%. In summary, we collected one week of tweets from 2,728 veterans, of whom 305 claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we have a dataset of 210 veteran Twitter users, among whom 92 are assessed with PTSD and 118 are diagnosed with no PTSD using clinically validated surveys. The severity of PTSD is estimated as non-existent, light, moderate, or high based on how many surveys support the existence of PTSD among the participants, according to the Dryhootch manual BIBREF18, BIBREF19.
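The paper does not publish its actual search pattern, so the expression below is only an illustration of the kind of regular expression that captures self-identified diagnosis statements.

import re

# Illustrative pattern only; the actual regular expression used in the study is not given.
SELF_ID = re.compile(
    r"\bI\b.{0,40}\b(?:was|am|been|got)\b.{0,20}\bdiagnos\w*\b.{0,40}\bptsd\b",
    re.IGNORECASE,
)

def is_self_identified(tweet):
    return bool(SELF_ID.search(tweet))

print(is_self_identified("Two years ago I was diagnosed with PTSD after my deployment."))  # True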
Twitter-based PTSD Detection ::: Pre-processing
We download all Twitter posts of the 210 users, who are war veterans and have been clinically assessed as described above, which results in a total of 12,385 tweets. Fig FIGREF16 shows the monthly average number of tweets for each of the 210 veteran Twitter users. We categorize these tweets into two groups: tweets related to work and tweets not related to work. That is, only the tweets that use a form of the word “work*” (e.g., work, worked, working, worker, etc.) or “job*” (e.g., job, jobs, jobless, etc.) are identified as work-related tweets, with the remaining tweets categorized as non-work-related. This categorization method increases the likelihood that most tweets in the work group are indeed talking about work or a job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related tweets, about 5.4% of all tweets written in English (from 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of tweets (i.e., overall tweets, work-related tweets, and non-work-related tweets) on a daily basis, and create a text file for each week for each group.
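A small sketch of the keyword-based split described above; the regular expression is an assumption that approximates the "work*" / "job*" rule, not the authors' code.

```python
import re

# Any token starting with "work" or "job" marks a tweet as work-related.
WORK_RE = re.compile(r"\b(work\w*|job\w*)\b", re.IGNORECASE)

def split_work_tweets(tweets):
    """Split a list of tweet strings into (work-related, non-work-related) groups."""
    work, non_work = [], []
    for t in tweets:
        (work if WORK_RE.search(t) else non_work).append(t)
    return work, non_work
```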
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use the PTSD classification algorithm proposed by Coppersmith et al. to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) LIWC-based categorical models on top of all of the prior ones. The LMs have been shown to be effective for Twitter classification tasks BIBREF9, and LIWC has previously been used for the analysis of mental health on Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets of no-PTSD users. Each test tweet $t$ is then scored by comparing probabilities from each LM, yielding a score called the $s$-score.
A threshold of 1 for the $s$-score divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a log-linear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contains a URL, as these often pertain to events external to the user.
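The $s$-score compares how likely a tweet is under the PTSD-trained and the no-PTSD-trained language models. The sketch below uses a simple add-one-smoothed unigram model as a stand-in; the exact formulation in BIBREF11 and the character n-gram variant are not reproduced here.

```python
import math
from collections import Counter

class UnigramLM:
    """Add-one-smoothed unigram language model (illustrative stand-in for ULM/CLM)."""
    def __init__(self, tweets):
        self.counts = Counter(tok for t in tweets for tok in t.lower().split())
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts) + 1  # +1 slot for unseen tokens

    def log_prob(self, tweet):
        return sum(
            math.log((self.counts[tok] + 1) / (self.total + self.vocab))
            for tok in tweet.lower().split()
        )

def s_score(tweet, lm_pos, lm_neg):
    """Ratio of the two model probabilities; > 1 means the PTSD model fits better."""
    return math.exp(lm_pos.log_prob(tweet) - lm_neg.log_prob(tweet))

# Usage sketch:
# lm_pos, lm_neg = UnigramLM(ptsd_tweets), UnigramLM(no_ptsd_tweets)
# label = "positive" if s_score(t, lm_pos, lm_neg) > 1 else "negative"
```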
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
LAXARY: Explainable PTSD Detection Model
The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary. Prior works show that linguistic-dictionary-based text analysis has been highly effective in Twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary to explain automatic PTSD assessment and establish trustworthiness with clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use the LIWC-developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 shows a sample WordStat dictionary. The steps for using this dictionary are as follows:
Pronoun selection: First, we have to define the pronouns of the target sentiment. Here we use first-person singular pronouns (i.e., I, me, mine, etc.), which means we only count those sentences or segments that relate to the first-person singular, i.e., to the person himself/herself.
Category selection: We have to define the categories of each word set so that we can analyze the text analysis scores of both the categories and their dimensions. We chose three categories based on the three different surveys: 1) the DOSPERT scale; 2) the BSSS scale; and 3) the VIAS scale.
Dimension selection: We have to define the word sets (also called dimensions) for each category. We chose one dimension for each of the questions under each category to reflect the real survey system evaluation. Our chosen categories are stated in Fig FIGREF20.
Score calculation $\alpha $-score: $\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts.
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties, such as reliability and validity, as per the American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$ (number of text files) $\times $ $J$ (number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises the percentage values of each word/stem calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file, where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' tweets, which further generate a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirms the high reliability of the PTSD-survey-based categories created by our PTSD Dictionary. After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results reveal that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664, p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.
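A compact sketch of the reliability computation on the user-by-word response matrix; the binary and uncorrected variants differ only in how the matrix cells are populated. This is a generic Cronbach's alpha implementation under those assumptions, not the authors' code.

```python
import numpy as np

def cronbach_alpha(matrix) -> float:
    """Cronbach's alpha for an N (text files) x J (dictionary words) response matrix.
    For the binary method the cells are 0/1 presence flags; for the uncorrected
    method they are per-file usage percentages."""
    m = np.asarray(matrix, dtype=float)
    j = m.shape[1]
    item_variances = m.var(axis=0, ddof=1).sum()   # sum of per-word variances
    total_variance = m.sum(axis=1).var(ddof=1)     # variance of per-file totals
    return (j / (j - 1)) * (1 - item_variances / total_variance)
```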
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use exactly the same method as LIWC to extract $\alpha $-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have 16 $\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we call the scaling score ($s$-score). The $s$-score is calculated from the $\alpha $-scores. The purpose of using the $s$-score is to assign exact scores to each dimension and category so that we can apply the same method used in the real weekly survey system. The idea is that we divide each category according to its corresponding scale factor (i.e., 8, 3 and 5 scaling factors for the DOSPERT, BSSS and VIAS scales, respectively), as used in the real survey system. Then we set the $s$-score from the scaling factors of the $\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Fig FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-scores of the dimensions to calculate the cumulative $s$-score of a particular category, which is displayed in Fig FIGREF22. Finally, we have 32 features in total, of which 16 are $\alpha $-scores and 16 are $s$-scores for each category (i.e., each question). We add the $\alpha $- and $s$-scores together and scale them according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16-valued matrix which represents the score for each question from the three different Dryhootch surveys. We use the output to fill up each survey and estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
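The mapping from $\alpha $-scores to survey-style answers can be sketched as below. The per-survey scale factors (8, 3 and 5) follow the text, while the min-max step and the binning logic are simplified assumptions about the algorithm in Fig FIGREF23.

```python
import numpy as np

SCALE_POINTS = {"DOSPERT": 8, "BSSS": 3, "VIAS": 5}  # scaling factors from the text

def alpha_to_s_score(alpha: float, survey: str,
                     alpha_min: float = 0.0, alpha_max: float = 1.0) -> int:
    """Min-max standardize a dimension's alpha-score and bin it onto the survey's
    discrete answer scale (e.g., 1..8 for DOSPERT). Simplified stand-in only."""
    k = SCALE_POINTS[survey]
    x = np.clip((alpha - alpha_min) / (alpha_max - alpha_min + 1e-9), 0.0, 1.0)
    return 1 + int(round(x * (k - 1)))

# Usage sketch: the cumulative category score is the sum over its dimensions.
# category_s_score = sum(alpha_to_s_score(a, "DOSPERT") for a in dimension_alphas)
```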
Experimental Evaluation
To validate the performance of the LAXARY framework, we first divide the entire set of 210 users' Twitter posts into training and test datasets. We then develop the PTSD Linguistic Dictionary from the Twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
Experimental Evaluation ::: Results
To provide initial results, we take 50% of the users' last week's data (the week they reported having PTSD) to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill up surveys on the remaining 50% of the dataset. The distribution of this training-test segmentation followed a 50% distribution of PTSD and no PTSD from the original dataset. Our final survey-based classification results show an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given that we have four intensity levels, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD, with scores of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment, which demonstrate the good accuracy of our classification. To show that our method outperforms prior work, we also implemented the method proposed by Coppersmith et al. and achieved an 86% overall accuracy in detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the method proposed by Coppersmith et al. Here we can see the outperformance of our proposed method as well as the importance of $s$-score estimation. We also illustrate the importance of the $\alpha $-score and $s$-score in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY outperforms the Coppersmith et al. model under any condition. In terms of intensity, the Coppersmith et al. method provides no estimate at all, whereas LAXARY provides accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by providing the survey details filled out by the LAXARY model. Table TABREF29 shows the details of the accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how the classification accuracy changes over the training sample sizes for each survey, which shows that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week in which the PTSD diagnosis was reported), there are no significant patterns of PTSD detection.
Challenges and Future Work
LAXARY is a highly ambitious model that aims to fill up clinically validated survey tools using only Twitter posts. Unlike previous Twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment, and can more easily obtain the trust of clinicians. The central challenge of LAXARY is to search for Twitter users via the Twitter search engine and manually label them for analysis. While developing the PTSD Linguistic Dictionary, although we followed exactly the same development idea as the LIWC WordStat dictionary and tested reliability and validity, our dictionary has not yet been validated by domain experts, as PTSD detection is a more sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching for veterans on Twitter using our selection and inclusion criteria, it was extremely difficult to manually find evidence for the self-claimed PTSD sufferers. Although we have shown extremely promising initial findings about representing a blackbox model through clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in the future with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In the future, we aim to collect more data and conduct not only nationwide but also international data collection to establish our innovation as a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only Twitter data, we aim to develop Linguistic Dictionaries for other mental health issues too. Moreover, we will apply our proposed method to other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As we know, the accuracy of a particular social media analysis depends mostly on the dataset. We aim to collect more data, engaging more researchers to establish a set of mental-illness-specific linguistic databases and evaluation techniques to solidify the generalizability of our proposed method.
Conclusion
To promote better comfort for trauma patients, it is very important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition goes out of control and results in catastrophic impacts on society, the people around them, or even the sufferers themselves. Although psychiatrists have invented several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, most of the time the process of diagnosis happens at a severe stage of the illness, which may have already caused some irreversible damage to the mental health of the sufferers. On the other hand, due to a lack of explainability, existing Twitter-based methods are not trusted by clinicians. In this paper, we proposed LAXARY, a novel method of filling up PTSD assessment surveys using weekly Twitter posts. As clinical surveys are a trusted and understandable method, we believe that this method will be able to gain the trust of clinicians towards early detection of PTSD. Moreover, our proposed LAXARY model, which is the first of its kind, can be used to develop a Linguistic Dictionary for any type of mental disorder, providing a generalized and trustworthy mental health assessment framework of any kind. | 210
1870f871a5bcea418c44f81f352897a2f53d0971 | 1870f871a5bcea418c44f81f352897a2f53d0971_0 | Q: Which clinically validated survey tools are used?
Text: Introduction
Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high-risk activities, including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health care and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veterans Administration's National Center for PTSD (NCPTSD) suggests reconceptualizing PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high-risk behaviors associated with it, as these may be directly addressed through behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD, which is typically chronic and treatment resistant, impacts between 15-20% of the veteran population BIBREF0. The PTSD patient support programs organized by different veteran peer support organizations use a set of surveys for local weekly assessment to detect the intensity of PTSD among returning veterans. However, recent surveys on advanced evidence-based care for PTSD sufferers have shown that veterans suffering from chronic PTSD are reluctant to participate in assessments with professionals, which is itself another significant symptom among war-returning veterans with PTSD. Several existing studies have shown that the Twitter posts of war veterans can be a significant indicator of their mental health and can be utilized to predict PTSD sufferers in time, before the condition goes out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. However, all of the proposed methods relied on either blackbox machine learning methods or language-model-based sentiment extraction from posted texts, which failed to obtain the acceptance and trust of clinicians due to their lack of explainability.
In the context of the above research problem, we aim to answer the following research questions
Given that clinicians trust clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using Twitter post analysis of war veterans?
If so, what sort of analysis and approach are needed to develop such an XAI model to detect the prevalence and intensity of PTSD among war veterans, using only social media (Twitter) analysis, where users freely share their everyday mental and social conditions?
How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD?
In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians.
The key contributions of our work are summarized below,
The novelty of LAXARY lies in the proposed clinical-survey-based PTSD Linguistic Dictionary creation, with words/aspects that represent the instantaneous perturbation of Twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question.
LAXARY includes a modified LIWC model that calculates the possible scores of each survey question using the PTSD Linguistic Dictionary to fill out the PTSD assessment surveys, which provides a practical way not only to determine a fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also to obtain the trust of clinicians, who expect to see traditional survey results for PTSD assessment.
Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error).
Overview
Fig. FIGREF7 shows a schematic representation of our proposed model. It consists of the following logical steps: (i) develop a PTSD detection system using the Twitter posts of war veterans; (ii) design real surveys from the popular symptom-based mental disease assessment surveys; (iii) define a single category and create the PTSD Linguistic Dictionary for each survey question, with multiple aspects/words for each question; (iv) calculate $\alpha $-scores for each category and dimension based on linguistic inquiry and word count as well as the aspect/word-based dictionary; (v) calculate scaling scores ($s$-scores) for each dimension based on the $\alpha $-scores, and $s$-scores of each category based on the $s$-scores of its dimensions; (vi) rank features according to their contribution to achieving separation among categories associated with different $\alpha $-scores and $s$-scores, and select feature sets that minimize the overlap among categories as associated with the target classifier (SGD); and finally (vii) estimate the quality of the selected-features-based classification for filling up surveys based on classified categories, i.e., a PTSD assessment which is trustworthy among the psychiatry community.
Related Works
Twitter-activity-based mental health assessment has been of utmost importance to Natural Language Processing (NLP) researchers and social media analysts for decades. Several studies have turned to social media data to study mental health, since it provides an unbiased collection of a person's language and behavior, which has been shown to be useful in diagnosing conditions. BIBREF9 used an n-gram language model (CLM) based $s$-score measure, setting up some user-centric emotional word sets. BIBREF10 used positive and negative PTSD data to train three classifiers: (i) one unigram language model (ULM); (ii) one character n-gram language model (CLM); and (iii) one from the LIWC categories' $\alpha $-scores, and found that the last one gives higher accuracy than the others. BIBREF11 used two types of $s$-scores, taking the ratio of negative and positive language models. Differences in language use have been observed in the personal writing of students who score highly on depression scales BIBREF2, forum posts for depression BIBREF3, self narratives for PTSD (BIBREF4, BIBREF5), and chat rooms for bipolar disorder BIBREF6. Specifically in social media, differences have previously been observed between depressed and control groups (as assessed by internet-administered batteries) via LIWC: depressed users more frequently use first-person pronouns (BIBREF7) and more frequently use negative emotion words and anger words on Twitter, but show no differences in positive emotion word usage (BIBREF8). Similarly, an increase in negative emotion and first-person pronouns, and a decrease in third-person pronouns (via LIWC), is observed, as well as many manifestations of literature findings in the pattern of life of depressed users (e.g., social engagement, demographics) (BIBREF12). Differences in language use in social media via LIWC have also been observed between PTSD and control groups (BIBREF13).
All of the prior works used generic dictionaries of human sentiment (positive/negative) word sets as category words to estimate mental health, but very few of them addressed the explainability of their solution in order to obtain the trust of clinicians. Islam et al. proposed an explainable topic modeling framework to rank different mental health features using Local Interpretable Model-Agnostic Explanations and visualize them to understand the features involved in mental health status classification BIBREF14, which fails to gain the trust of clinicians due to its lack of interpretability in clinical terms. In this paper, we develop the LAXARY model, where we first investigate clinically validated survey tools, which are trustworthy methods of PTSD assessment among clinicians, then build our category sets based on the survey questions and use these as dictionary words (under the first-person singular pronoun aspect) for the next-level LIWC algorithm. Finally, we develop a modified LIWC algorithm to estimate survey scores (similar to the sentiment category scores of naive LIWC) which is both explainable and trustworthy to clinicians.
Demographics of Clinically Validated PTSD Assessment Tools
There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. There are more scales that are used in risky behavior analysis of an individual's daily activities, such as The Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess PTSD among war veterans and considers the rest of them irrelevant to PTSD. The details of the Dryhootch-chosen survey scales are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by Dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he/she is in a critical situation in terms of risk-taking symptoms of PTSD. Dryhootch defines the intensity of PTSD in four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS); a short sketch of this categorization rule is given after the list:
High risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools, i.e., DOSPERT, BSSS and VIAS, then he/she is in a high risk situation which needs immediate mental support to avoid a catastrophic effect on the individual's health or the lives of surrounding people.
Moderate risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in a moderate risk situation which needs close observation and peer mentoring to avoid risk progression.
Low risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If an individual veteran's weekly PTSD assessment scores remain below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
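A small sketch of the Dryhootch intensity rule described above. Only the DOSPERT threshold of 28 is taken from the text; the other threshold values in the usage comment are placeholders.

```python
def ptsd_intensity(weekly_scores: dict, thresholds: dict) -> str:
    """Map one veteran's weekly survey scores (DOSPERT, BSSS, VIAS) to the four
    Dryhootch risk categories by counting how many tools exceed their threshold."""
    exceeded = sum(
        1 for tool, score in weekly_scores.items() if score > thresholds[tool]
    )
    return ["No PTSD", "Low risk PTSD", "Moderate risk PTSD", "High risk PTSD"][exceeded]

# Usage sketch (DOSPERT threshold of 28 from the text; BSSS/VIAS values are placeholders):
# ptsd_intensity({"DOSPERT": 31, "BSSS": 10, "VIAS": 12},
#                {"DOSPERT": 28, "BSSS": 9, "VIAS": 14})
```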
Twitter-based PTSD Detection
To develop an explainable model, we first need to develop a Twitter-based PTSD detection algorithm. In this section, we describe the data collection and the development of our core LAXARY model.
Twitter-based PTSD Detection ::: Data Collection
We use automated regular-expression-based searching to find potential veterans with PTSD on Twitter, and then refine the list manually. First, we select different keywords to search for Twitter users of different categories. For example, to search for self-claimed diagnosed PTSD sufferers, we select keywords related to PTSD, for example, post trauma, post traumatic disorder, PTSD, etc. We use a regular expression to search for statements where the user self-identifies as being diagnosed with PTSD. For example, Table TABREF27 shows a self-identified tweet post. To search for veterans, we mostly visit different Twitter accounts of veterans' organizations such as "MA Women Veterans @WomenVeterans", "Illinois Veterans @ILVetsAffairs", "Veterans Benefits @VAVetBenefits", etc. We define the inclusion criteria as follows: a Twitter user becomes part of this study if he/she describes himself/herself as a veteran in the introduction and has at least 25 tweets in the last week. After choosing the initial Twitter users, we search for self-identified PTSD sufferers who claim to have been diagnosed with PTSD in their Twitter posts. We find 685 matching tweets, which are manually reviewed to determine whether they indicate a genuine statement of a PTSD diagnosis. Next, we select the username that authored each of these tweets and retrieve that user's last week of tweets via the Twitter API. We then filter out users with fewer than 25 tweets and those whose tweets were not at least 75% in English (measured using an automated language ID system). This filtering leaves us with 305 users as positive examples. We repeat this process for a group of randomly selected users. We randomly select 3,000 Twitter users who are veterans as per their introduction and have at least 25 tweets in the last week. After filtering (as above), 2,423 users remain in total, whose tweets are used as negative examples, yielding an entire week of Twitter posts from 2,728 users, of whom 305 are self-claimed PTSD sufferers. We distributed the Dryhootch-chosen surveys among 1,200 users (the 305 self-claimed PTSD sufferers and the rest randomly chosen from the previous 2,423 users) and received 210 successful responses. Among these responses, 92 users were diagnosed with PTSD by at least one of the three surveys, and the remaining 118 users were diagnosed with no PTSD. Among the clinically diagnosed PTSD sufferers, 17 had not self-identified before. However, 7 of the self-identified PTSD sufferers were assessed with no PTSD by the PTSD assessment tools. The response rates of the PTSD and no-PTSD users are 27% and 12%, respectively. In summary, we have collected one week of tweets from 2,728 veterans, of whom 305 claimed to have been diagnosed with PTSD. After distributing the Dryhootch surveys, we have a dataset of 210 veteran Twitter users, among whom 92 are assessed with PTSD and 118 are diagnosed with no PTSD using clinically validated surveys. The severity of PTSD is estimated as non-existent, light, moderate or high based on how many surveys support the existence of PTSD among the participants, according to the Dryhootch manual BIBREF18, BIBREF19.
Twitter-based PTSD Detection ::: Pre-processing
We download all Twitter posts of the 210 users, who are war veterans and have been clinically assessed as described above, which results in a total of 12,385 tweets. Fig FIGREF16 shows the monthly average number of tweets for each of the 210 veteran Twitter users. We categorize these tweets into two groups: tweets related to work and tweets not related to work. That is, only the tweets that use a form of the word “work*” (e.g., work, worked, working, worker, etc.) or “job*” (e.g., job, jobs, jobless, etc.) are identified as work-related tweets, with the remaining tweets categorized as non-work-related. This categorization method increases the likelihood that most tweets in the work group are indeed talking about work or a job; for instance, “Back to work. Projects are firing back up and moving ahead now that baseball is done.” This categorization results in 456 work-related tweets, about 5.4% of all tweets written in English (from 75 unique Twitter users). To conduct weekly-level analysis, we consider three categorizations of tweets (i.e., overall tweets, work-related tweets, and non-work-related tweets) on a daily basis, and create a text file for each week for each group.
Twitter-based PTSD Detection ::: PTSD Detection Baseline Model
We use the PTSD classification algorithm proposed by Coppersmith et al. to develop our baseline blackbox model BIBREF11. We utilize our positive and negative PTSD data (+92, -118) to train three classifiers: (i) a unigram language model (ULM) examining individual whole words, (ii) a character n-gram language model (CLM), and (iii) LIWC-based categorical models on top of all of the prior ones. The LMs have been shown to be effective for Twitter classification tasks BIBREF9, and LIWC has previously been used for the analysis of mental health on Twitter BIBREF10. The language models measure the probability that a word (ULM) or a string of characters (CLM) was generated by the same underlying process as the training data. We first train one of each language model ($clm^{+}$ and $ulm^{+}$) from the tweets of PTSD users, and another model ($clm^{-}$ and $ulm^{-}$) from the tweets of no-PTSD users. Each test tweet $t$ is then scored by comparing probabilities from each LM, yielding a score called the $s$-score.
A threshold of 1 for the $s$-score divides scores into positive and negative classes. In a multi-class setting, the algorithm minimizes the cross entropy, selecting the model with the highest probability. For each user, we calculate the proportion of tweets scored positively by each LIWC category. These proportions are used as a feature vector in a log-linear regression model BIBREF20. Prior to training, we preprocess the text of each tweet: we replace all usernames with a single token (USER), lowercase all text, and remove extraneous whitespace. We also exclude any tweet that contains a URL, as these often pertain to events external to the user.
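A sketch of turning per-category tweet proportions into a user-level feature vector for the regression step. scikit-learn's LogisticRegression is used here as a stand-in for the log-linear model of BIBREF20, and the category word sets are assumed to be available as plain Python sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def user_features(tweets, liwc_categories):
    """For one user, the proportion of tweets containing at least one word from
    each LIWC category. `liwc_categories` maps category name -> set of words."""
    feats = []
    for words in liwc_categories.values():
        hits = sum(1 for t in tweets if words & set(t.lower().split()))
        feats.append(hits / max(len(tweets), 1))
    return np.array(feats)

def train_user_classifier(users_tweets, labels, liwc_categories):
    """Fit a logistic-regression stand-in on the per-user proportion features."""
    X = np.vstack([user_features(tws, liwc_categories) for tws in users_tweets])
    return LogisticRegression(max_iter=1000).fit(X, np.asarray(labels))
```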
We conduct a LIWC analysis of the PTSD and non-PTSD tweets to determine if there are differences in the language usage of PTSD users. We applied the LIWC battery and examined the distribution of words in their language. Each tweet was tokenized by separating on whitespace. For each user, for a subset of the LIWC categories, we measured the proportion of tweets that contained at least one word from that category. Specifically, we examined the following nine categories: first, second and third person pronouns, swear, anger, positive emotion, negative emotion, death, and anxiety words. Second person pronouns were used significantly less often by PTSD users, while third person pronouns and words about anxiety were used significantly more often.
LAXARY: Explainable PTSD Detection Model
The heart of the LAXARY framework is the construction of the PTSD Linguistic Dictionary. Prior works show that linguistic-dictionary-based text analysis has been highly effective in Twitter-based sentiment analysis BIBREF21, BIBREF22. Our work is the first of its kind to develop its own linguistic dictionary to explain automatic PTSD assessment and establish trustworthiness with clinicians.
LAXARY: Explainable PTSD Detection Model ::: PTSD Linguistic Dictionary Creation
We use the LIWC-developed WordStat dictionary format for our text analysis BIBREF23. The LIWC application relies on an internal default dictionary that defines which words should be counted in the target text files. To avoid confusion in the subsequent discussion, text words that are read and analyzed by WordStat are referred to as target words. Words in the WordStat dictionary file are referred to as dictionary words. Groups of dictionary words that tap a particular domain (e.g., negative emotion words) are variously referred to as subdictionaries or word categories. Fig FIGREF8 shows a sample WordStat dictionary. The steps for using this dictionary are as follows:
Pronoun selection: First, we have to define the pronouns of the target sentiment. Here we use first-person singular pronouns (i.e., I, me, mine, etc.), which means we only count those sentences or segments that relate to the first-person singular, i.e., to the person himself/herself.
Category selection: We have to define the categories of each word set so that we can analyze the text analysis scores of both the categories and their dimensions. We chose three categories based on the three different surveys: 1) the DOSPERT scale; 2) the BSSS scale; and 3) the VIAS scale.
Dimension selection: We have to define the word sets (also called dimensions) for each category. We chose one dimension for each of the questions under each category to reflect the real survey system evaluation. Our chosen categories are stated in Fig FIGREF20.
Score calculation $\alpha $-score: $\alpha $-scores refer to the Cronbach's alphas for the internal reliability of the specific words within each category. The binary alphas are computed on the ratio of occurrence and non-occurrence of each dictionary word whereas the raw or uncorrected alphas are based on the percentage of use of each of the category words within texts.
LAXARY: Explainable PTSD Detection Model ::: Psychometric Validation of PTSD Linguistic Dictionary
After the PTSD Linguistic Dictionary has been created, we empirically evaluate its psychometric properties, such as reliability and validity, as per the American Standards for educational and psychological testing guideline BIBREF24. In psychometrics, reliability is most commonly evaluated by Cronbach's alpha, which assesses internal consistency based on inter-correlations and the number of measured items. In the text analysis scenario, each word in our PTSD Linguistic Dictionary is considered an item, and reliability is calculated based on each text file's response to each word item, which forms an $N$ (number of text files) $\times $ $J$ (number of words or stems in a dictionary) data matrix. There are two ways to quantify such responses: using percentage data (uncorrected method), or using "present or not" data (binary method) BIBREF23. For the uncorrected method, the data matrix comprises the percentage values of each word/stem calculated from each text file. For the binary method, the data matrix quantifies whether or not a word was used in a text file, where "1" represents yes and "0" represents no. Once the data matrix is created, it is used to calculate Cronbach's alpha based on the inter-correlation matrix among the word percentages. We assess reliability based on our selected 210 users' tweets, which further generate a 23,562 response matrix after running the PTSD Linguistic Dictionary for each user. The response matrix yields a reliability of .89 based on the uncorrected method, and .96 based on the binary method, which confirms the high reliability of the PTSD-survey-based categories created by our PTSD Dictionary. After assessing the reliability of the PTSD Linguistic Dictionary, we focus on the two most common forms of construct validity: convergent validity and discriminant validity BIBREF25. Convergent validity provides evidence that two measures designed to assess the same construct are indeed related; discriminant validity involves evidence that two measures designed to assess different constructs are not too strongly related. In theory, we expect that the PTSD Linguistic Dictionary should be positively correlated with other negative PTSD constructs to show convergent validity, and not strongly correlated with positive PTSD constructs to show discriminant validity. To test these two types of validity, we use the same 210 users' tweets used for the reliability assessment. The results reveal that the PTSD Linguistic Dictionary is indeed positively correlated with negative construct dictionaries, including the overall negative PTSD dictionary (r=3.664, p$<$.001). Table TABREF25 shows all 16 categorical dictionaries. These results provide strong support for the measurement validity of our newly created PTSD Linguistic Dictionary.
LAXARY: Explainable PTSD Detection Model ::: Feature Extraction and Survey Score Estimation
We use exactly the same method as LIWC to extract $\alpha $-scores for each dimension and category, except that we use our generated PTSD Linguistic Dictionary for the task BIBREF23. Thus we have 16 $\alpha $-scores in total. Meanwhile, we propose a new type of feature in this regard, which we call the scaling score ($s$-score). The $s$-score is calculated from the $\alpha $-scores. The purpose of using the $s$-score is to assign exact scores to each dimension and category so that we can apply the same method used in the real weekly survey system. The idea is that we divide each category according to its corresponding scale factor (i.e., 8, 3 and 5 scaling factors for the DOSPERT, BSSS and VIAS scales, respectively), as used in the real survey system. Then we set the $s$-score from the scaling factors of the $\alpha $-scores of the corresponding dimension of the questions. The algorithm is stated in Fig FIGREF23. Following Fig FIGREF23, we calculate the $s$-score for each dimension. Then we add up all the $s$-scores of the dimensions to calculate the cumulative $s$-score of a particular category, which is displayed in Fig FIGREF22. Finally, we have 32 features in total, of which 16 are $\alpha $-scores and 16 are $s$-scores for each category (i.e., each question). We add the $\alpha $- and $s$-scores together and scale them according to their corresponding survey score scales using min-max standardization. Then, the final output is a 16-valued matrix which represents the score for each question from the three different Dryhootch surveys. We use the output to fill up each survey and estimate the prevalence of PTSD and its intensity based on each tool's respective evaluation metric.
Experimental Evaluation
To validate the performance of the LAXARY framework, we first divide the entire set of 210 users' Twitter posts into training and test datasets. We then develop the PTSD Linguistic Dictionary from the Twitter posts in the training dataset and apply the LAXARY framework to the test dataset.
Experimental Evaluation ::: Results
To provide initial results, we take 50% of the users' last week's data (the week they reported having PTSD) to develop the PTSD Linguistic Dictionary and apply the LAXARY framework to fill up surveys on the remaining 50% of the dataset. The distribution of this training-test segmentation followed a 50% distribution of PTSD and no PTSD from the original dataset. Our final survey-based classification results show an accuracy of 96% in detecting PTSD and a mean squared error of 1.2 in estimating its intensity, given that we have four intensity levels, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD, with scores of 0, 1, 2 and 3 respectively. Table TABREF29 shows the classification details of our experiment, which demonstrate the good accuracy of our classification. To show that our method outperforms prior work, we also implemented the method proposed by Coppersmith et al. and achieved an 86% overall accuracy in detecting PTSD users BIBREF11 following the same training-test dataset distribution. Fig FIGREF28 illustrates the comparison between LAXARY and the method proposed by Coppersmith et al. Here we can see the outperformance of our proposed method as well as the importance of $s$-score estimation. We also illustrate the importance of the $\alpha $-score and $s$-score in Fig FIGREF30. Fig FIGREF30 illustrates that if we change the number of training samples (%), LAXARY outperforms the Coppersmith et al. model under any condition. In terms of intensity, the Coppersmith et al. method provides no estimate at all, whereas LAXARY provides accurate measures of intensity estimation for PTSD sufferers (as shown in Fig FIGREF31), which can be explained simply by providing the survey details filled out by the LAXARY model. Table TABREF29 shows the details of the accuracies of both PTSD detection and intensity estimation. Fig FIGREF32 shows how the classification accuracy changes over the training sample sizes for each survey, which shows that the DOSPERT scale outperforms the other surveys. Fig FIGREF33 shows that if we take previous weeks (instead of only the week in which the PTSD diagnosis was reported), there are no significant patterns of PTSD detection.
Challenges and Future Work
LAXARY is a highly ambitious model that aims to fill up clinically validated survey tools using only Twitter posts. Unlike previous Twitter-based mental health assessment tools, LAXARY provides a clinically interpretable model which can provide better classification accuracy and intensity of PTSD assessment, and can more easily obtain the trust of clinicians. The central challenge of LAXARY is to search for Twitter users via the Twitter search engine and manually label them for analysis. While developing the PTSD Linguistic Dictionary, although we followed exactly the same development idea as the LIWC WordStat dictionary and tested reliability and validity, our dictionary has not yet been validated by domain experts, as PTSD detection is a more sensitive issue than stress/depression detection. Moreover, given the extreme challenges of searching for veterans on Twitter using our selection and inclusion criteria, it was extremely difficult to manually find evidence for the self-claimed PTSD sufferers. Although we have shown extremely promising initial findings about representing a blackbox model through clinically trusted tools, using only 210 users' data is not enough to come up with a trustworthy model. Moreover, more clinical validation must be done in the future with real clinicians to firmly validate the PTSD assessment outcomes provided by the LAXARY model. In the future, we aim to collect more data and conduct not only nationwide but also international data collection to establish our innovation as a real tool. Apart from that, as we achieved promising results in detecting PTSD and its intensity using only Twitter data, we aim to develop Linguistic Dictionaries for other mental health issues too. Moreover, we will apply our proposed method to other types of mental illness such as depression, bipolar disorder, suicidal ideation and seasonal affective disorder (SAD). As we know, the accuracy of a particular social media analysis depends mostly on the dataset. We aim to collect more data, engaging more researchers to establish a set of mental-illness-specific linguistic databases and evaluation techniques to solidify the generalizability of our proposed method.
Conclusion
To promote better comfort for trauma patients, it is very important to detect Post Traumatic Stress Disorder (PTSD) sufferers in time, before the condition goes out of control and results in catastrophic impacts on society, the people around them, or even the sufferers themselves. Although psychiatrists have invented several clinical diagnosis tools (i.e., surveys) that assess the symptoms, signs and impairment associated with PTSD, most of the time the process of diagnosis happens at a severe stage of the illness, which may have already caused some irreversible damage to the mental health of the sufferers. On the other hand, due to a lack of explainability, existing Twitter-based methods are not trusted by clinicians. In this paper, we proposed LAXARY, a novel method of filling up PTSD assessment surveys using weekly Twitter posts. As clinical surveys are a trusted and understandable method, we believe that this method will be able to gain the trust of clinicians towards early detection of PTSD. Moreover, our proposed LAXARY model, which is the first of its kind, can be used to develop a Linguistic Dictionary for any type of mental disorder, providing a generalized and trustworthy mental health assessment framework of any kind. | DOSPERT, BSSS and VIAS
ce6201435cc1196ad72b742db92abd709e0f9e8d | ce6201435cc1196ad72b742db92abd709e0f9e8d_0 | Q: Did they experiment with the dataset?
Text: Introduction
Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.
Named entity recognition (NER) is a fundamental step in text mining system development to facilitate COVID-19 studies. There is a critical need for NER methods that can quickly adapt to all the COVID-19-related new types without much human effort for training data annotation. We created the CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of CORD-19-NER dataset construction. We also show some NER annotation results in this dataset.
CORD-19-NER Dataset ::: Corpus
The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.
Corpus Tokenization. The raw corpus is a combination of the “title", “abstract" and “full-text" from the CORD-19 corpus. We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0. Then we do the second round of tokenization with Spacy on the phrase-replaced corpus. We have observed that keeping the AutoPhrase results will significantly improve the distantly- and weakly-supervised NER performance.
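A sketch of the second tokenization pass is given below. It assumes the phrase-mining step has already joined multi-word phrases in the text (for example, with underscores), which is an assumption about the intermediate format rather than something documented here; the model choice and disabled components are likewise illustrative.

```python
import spacy

# Load once; keep the parser so `doc.sents` provides sentence boundaries.
nlp = spacy.load("en_core_web_sm", disable=["ner"])

def tokenize_document(phrase_replaced_text: str):
    """Sentence-split and tokenize a phrase-replaced document into
    [sent_id, sent_tokens] pairs, mirroring the `sents` field of the corpus file."""
    doc = nlp(phrase_replaced_text)
    return [[i, [tok.text for tok in sent]] for i, sent in enumerate(doc.sents)]
```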
Key Items. The tokenized corpus includes the following items:
doc_id: the line number (0-29499) in “all_sources_metadata_2020-03-13.csv" in the CORD-19 corpus (2020-03-13).
sents: [sent_id, sent_tokens], tokenized sentences and words as described above.
source: CZI (1236 records), PMC (27337), bioRxiv (566) and medRxiv (361).
doi: populated for all BioRxiv/MedRxiv paper records and most of the other records (26357 non null).
pmcid: populated for all PMC paper records (27337 non null).
pubmed_id: populated for some of the records.
Other keys: publish_time, authors and journal.
The tokenized corpus (CORD-19-corpus.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset.
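A small sketch of consuming the tokenized corpus file using the keys listed above. Whether the file stores one JSON object per line or a single JSON array is an assumption here; adjust the loading step to the released schema.

```python
import json

def iter_corpus(path="CORD-19-corpus.json"):
    """Yield one tokenized document at a time (assumes one JSON object per line;
    switch to a single json.load(...) call if the file is one JSON array)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

for doc in iter_corpus():
    print(doc["doc_id"], doc["source"], len(doc["sents"]))
    break
```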
CORD-19-NER Dataset ::: NER Methods
CORD-19-NER annotation is a combination from four sources with different NER methods:
Pre-trained NER on 18 general entity types from Spacy using the model “en_core_web_sm".
Pre-trained NER on 18 biomedical entity types from SciSpacy using the model “en_ner_bionlp13cg_md".
Knowledge base (KB)-guided NER on 127 biomedical entity types with our distantly-supervised NER methods BIBREF1, BIBREF2. We do not require any human-annotated training data for the NER model training. Instead, we rely on UMLS as the input KB for distant supervision.
Seed-guided NER on 9 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We only require several (10-20) human-input seed entities for each new type. Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity type recognition.
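A minimal sketch of running the first two annotation sources listed above (the pre-trained Spacy and SciSpacy models). The distantly- and weakly-supervised components are not reproducible from off-the-shelf calls and are omitted; running the second model requires the scispacy model package to be installed.

```python
import spacy

general_nlp = spacy.load("en_core_web_sm")       # 18 general entity types
bio_nlp = spacy.load("en_ner_bionlp13cg_md")     # 18 biomedical entity types (scispacy)

def pretrained_entities(text: str):
    """Collect (surface form, type, source) triples from both pre-trained models."""
    ents = [(e.text, e.label_, "spacy") for e in general_nlp(text).ents]
    ents += [(e.text, e.label_, "scispacy") for e in bio_nlp(text).ents]
    return ents

print(pretrained_entities("SARS-CoV-2 was first reported in Wuhan in December 2019."))
```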
The 9 new entity types with examples of their input seed are as follows:
Coronavirus: COVID-19, SARS, MERS, etc.
Viral Protein: Hemagglutinin, GP120, etc.
Livestock: cattle, sheep, pig, etc.
Wildlife: bats, wild animals, wild birds, etc
Evolution: genetic drift, natural selection, mutation rate, etc
Physical Science: atomic charge, Amber force fields, Van der Waals interactions, etc.
Substrate: blood, sputum, urine, etc.
Material: copper, stainless steel, plastic, etc.
Immune Response: adaptive immune response, cell mediated immunity, innate immunity, etc.
We merged all the entity types from the four sources and reorganized them into one entity type hierarchy. Specifically, we align all the types from SciSpacy to UMLS. We also merge some fine-grained UMLS entity types to their more coarse-grained types based on the corpus count. Then we get a final entity type hierarchy with 75 fine-grained entity types used in our annotations. The entity type hierarchy (CORD-19-types.xlsx) can be found in our CORD-19-NER dataset.
Then we conduct named entity annotation with the four NER methods on the 75 fine-grained entity types. After we get the NER annotation results with the four different methods, we merge the results into one file. The conflicts are resolved by giving priority to different entity types annotated by different methods according to their annotation quality. The final entity annotation results (CORD-19-ner.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset.
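A sketch of the priority-based conflict resolution for overlapping annotations. The actual priority order used in the release is not specified here, so `TYPE_PRIORITY` is a placeholder, and only exact-span conflicts are handled in this simplification.

```python
# Placeholder priority: lower value wins when two sources annotate the same span.
TYPE_PRIORITY = {"CORONAVIRUS": 0, "GENE_OR_GENOME": 1, "CHEMICAL": 2, "PERSON": 3}

def merge_annotations(annotations):
    """`annotations` is a list of (start, end, type) spans from the four sources.
    Keep at most one annotation per span, preferring higher-priority types."""
    best = {}
    for start, end, etype in annotations:
        key = (start, end)
        if key not in best or TYPE_PRIORITY.get(etype, 99) < TYPE_PRIORITY.get(best[key], 99):
            best[key] = etype
    return [(s, e, t) for (s, e), t in sorted(best.items())]
```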
Results ::: NER Annotation Results
In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- or weakly-supervised methods achieve high quality in recognizing the new entity types, requiring only several seed examples as input. For example, we recognized “SARS-CoV-2” as the “CORONAVIRUS” type, “bat” and “pangolins” as the “WILDLIFE” type and “Van der Waals forces” as the “PHYSICAL_SCIENCE” type. These NER annotation results help downstream text mining tasks in discovering the origin and the physical nature of the virus. Our NER methods are domain-independent and can be applied to corpora in different domains. In addition, we show another example of NER annotation on the New York Times with our system in Figure FIGREF29.
In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method can identify “SARS-CoV-2” as a coronavirus. In Figure FIGREF30, we can see that our method can identify many more entities, such as “phylogenetic” as an evolution term and “bat” as wildlife. In Figure FIGREF30, we can also see that our method can identify many more entities, such as “racism” as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation.
Results ::: Top-Frequent Entity Summarization
In Table TABREF34, we show some examples of the most frequent entities in the annotated corpus. Specifically, we show entity types including both our new types and some UMLS types that have not been manually annotated before. We find our annotated entities very informative for COVID-19 studies. For example, the most frequent entities for the type “SIGN_OR_SYMPTOM” include “cough” and “respiratory symptoms”, which are the most common symptoms of COVID-19. The most frequent entities for the type “INDIVIDUAL_BEHAVIOR” include “hand hygiene”, “disclosures” and “absenteeism”, which indicates that people focus more on hand cleaning for the COVID-19 issue. Also, the most frequent entities for the type “MACHINE_ACTIVITY” include “machine learning”, “data processing” and “automation”, which indicates that people focus more on automated methods that can process massive data for COVID-19 studies. This type also includes “telecommunication” among the top results, which is quite reasonable under the current COVID-19 situation. More examples can be found in our dataset.
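The frequency summaries described above can be reproduced from the merged annotations with a simple counter; the sketch below assumes the annotations have already been flattened into (entity text, entity type) pairs over the corpus.

```python
from collections import Counter, defaultdict

def top_entities(annotated_pairs, k=5):
    """`annotated_pairs`: iterable of (entity_text, entity_type) pairs.
    Returns the k most frequent surface forms for each entity type."""
    per_type = defaultdict(Counter)
    for text, etype in annotated_pairs:
        per_type[etype][text.lower()] += 1
    return {etype: counts.most_common(k) for etype, counts in per_type.items()}
```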
Conclusion
In the future, we will further improve the CORD-19-NER dataset quality. We will also build text mining systems based on the CORD-19-NER dataset with richer functionalities. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side.
Acknowledgment
Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies. | Yes |
928828544e38fe26c53d81d1b9c70a9fb1cc3feb | 928828544e38fe26c53d81d1b9c70a9fb1cc3feb_0 | Q: What is the size of this dataset?
Text: Introduction
Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified in 2019 in Wuhan, Central China, and has since spread globally, resulting in the 2019–2020 coronavirus pandemic. On March 16th, 2020, researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health released the COVID-19 Open Research Dataset (CORD-19) of scholarly literature about COVID-19, SARS-CoV-2, and the coronavirus group.
Named entity recognition (NER) is a fundamental step in text mining system development to facilitate COVID-19 studies. There is a critical need for NER methods that can quickly adapt to all the new COVID-19-related types without much human effort for training data annotation. We created the CORD-19-NER dataset with comprehensive named entity annotation on the CORD-19 corpus (2020-03-13). This dataset covers 75 fine-grained named entity types. CORD-19-NER is automatically generated by combining the annotation results from four sources. In the following sections, we introduce the details of the CORD-19-NER dataset construction and show some NER annotation results from this dataset.
CORD-19-NER Dataset ::: Corpus
The corpus is generated from the 29,500 documents in the CORD-19 corpus (2020-03-13). We first merge all the meta-data (all_sources_metadata_2020-03-13.csv) with their corresponding full-text papers. Then we create a tokenized corpus (CORD-19-corpus.json) for further NER annotations.
Corpus Tokenization. The raw corpus is a combination of the “title", “abstract" and “full-text" fields from the CORD-19 corpus. We first conduct automatic phrase mining on the raw corpus using AutoPhrase BIBREF0. Then we perform a second round of tokenization with Spacy on the phrase-replaced corpus. We have observed that keeping the AutoPhrase results significantly improves the distantly- and weakly-supervised NER performance.
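As a rough sketch of this two-step tokenization, the snippet below assumes the AutoPhrase output has already been loaded as a plain list of phrases; `replace_with_phrases` is a hypothetical helper (AutoPhrase itself is a separate tool), and only the Spacy call reflects an actual library API.

```python
import spacy

def replace_with_phrases(text, phrases):
    # Join each mined multi-word phrase with underscores so it survives
    # tokenization as a single token, e.g. "machine learning" -> "machine_learning".
    for phrase in sorted(phrases, key=len, reverse=True):
        text = text.replace(phrase, phrase.replace(" ", "_"))
    return text

nlp = spacy.load("en_core_web_sm")

def tokenize_document(doc_text, phrases):
    doc = nlp(replace_with_phrases(doc_text, phrases))
    # One token list per sentence, mirroring the [sent_id, sent_tokens] layout.
    return [[token.text for token in sent] for sent in doc.sents]

sents = tokenize_document("machine learning helps process COVID-19 papers.",
                          ["machine learning"])
```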
Key Items. The tokenized corpus includes the following items:
doc_id: the line number (0-29499) in “all_sources_metadata_2020-03-13.csv" in the CORD-19 corpus (2020-03-13).
sents: [sent_id, sent_tokens], tokenized sentences and words as described above.
source: CZI (1236 records), PMC (27337), bioRxiv (566) and medRxiv (361).
doi: populated for all bioRxiv/medRxiv paper records and most of the other records (26357 non null).
pmcid: populated for all PMC paper records (27337 non null).
pubmed_id: populated for some of the records.
Other keys: publish_time, authors and journal.
The tokenized corpus (CORD-19-corpus.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset.
CORD-19-NER Dataset ::: NER Methods
The CORD-19-NER annotation is a combination of results from four sources with different NER methods:
Pre-trained NER on 18 general entity types from Spacy using the model “en_core_web_sm".
Pre-trained NER on 18 biomedical entity types from SciSpacy using the model “en_ner_bionlp13cg_md".
Knowledge base (KB)-guided NER on 127 biomedical entity types with our distantly-supervised NER methods BIBREF1, BIBREF2. We do not require any human-annotated training data for the NER model training. Instead, we rely on UMLS as the input KB for distant supervision.
Seed-guided NER on 9 new entity types (specifically related to the COVID-19 studies) with our weakly-supervised NER method. We only require a few (10-20) human-input seed entities for each new type. Then we expand the seed entity sets with CatE BIBREF3 and apply our distant NER method for the new entity type recognition (see the sketch after the list of types below).
The 9 new entity types with examples of their input seed are as follows:
Coronavirus: COVID-19, SARS, MERS, etc.
Viral Protein: Hemagglutinin, GP120, etc.
Livestock: cattle, sheep, pig, etc.
Wildlife: bats, wild animals, wild birds, etc.
Evolution: genetic drift, natural selection, mutation rate, etc.
Physical Science: atomic charge, Amber force fields, Van der Waals interactions, etc.
Substrate: blood, sputum, urine, etc.
Material: copper, stainless steel, plastic, etc.
Immune Response: adaptive immune response, cell mediated immunity, innate immunity, etc.
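To make the seed-guided setup concrete, the following is a minimal sketch of how such seed lists can drive a simple dictionary-style tagger. The actual pipeline expands the seeds with CatE and applies a distantly-supervised NER model, so `tag_with_seeds` is only a hypothetical illustration of the seed format, not the method used in the paper.

```python
# Seed dictionary: a few human-provided examples per new entity type.
SEED_ENTITIES = {
    "CORONAVIRUS": ["COVID-19", "SARS", "MERS"],
    "WILDLIFE": ["bats", "wild animals", "wild birds"],
    "PHYSICAL_SCIENCE": ["atomic charge", "Van der Waals interactions"],
}

def tag_with_seeds(tokens, seed_entities):
    """Label token spans that exactly match a (possibly expanded) seed entity."""
    labels = ["O"] * len(tokens)
    for etype, seeds in seed_entities.items():
        for seed in seeds:
            seed_toks = seed.split()
            for i in range(len(tokens) - len(seed_toks) + 1):
                if tokens[i:i + len(seed_toks)] == seed_toks:
                    labels[i] = "B-" + etype
                    for j in range(i + 1, i + len(seed_toks)):
                        labels[j] = "I-" + etype
    return labels

print(tag_with_seeds("bats may transmit MERS".split(), SEED_ENTITIES))
# -> ['B-WILDLIFE', 'O', 'O', 'B-CORONAVIRUS']
```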
We merged all the entity types from the four sources and reorganized them into one entity type hierarchy. Specifically, we align all the types from SciSpacy to UMLS. We also merge some fine-grained UMLS entity types into their more coarse-grained parent types based on their corpus counts. This yields a final entity type hierarchy with 75 fine-grained entity types used in our annotations. The entity type hierarchy (CORD-19-types.xlsx) can be found in our CORD-19-NER dataset.
Then we conduct named entity annotation with the four NER methods on the 75 fine-grained entity types. After obtaining the NER annotation results from the four different methods, we merge the results into one file. Conflicts are resolved by giving priority to the entity types annotated by the methods with higher annotation quality. The final entity annotation results (CORD-19-ner.json) with the file schema and detailed descriptions can be found in our CORD-19-NER dataset.
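To illustrate the merging step, the sketch below resolves overlapping spans by a fixed source priority. The specific ordering and data layout are assumptions for the example, not necessarily the configuration used for CORD-19-NER.

```python
# Priority from highest to lowest; overlaps are kept from the higher-priority source.
SOURCE_PRIORITY = ["seed_guided", "kb_guided", "scispacy", "spacy"]

def merge_annotations(annotations_by_source):
    """annotations_by_source: {source: [(start, end, entity_type), ...]} for one sentence."""
    merged, taken = [], set()
    for source in SOURCE_PRIORITY:
        for start, end, etype in annotations_by_source.get(source, []):
            span = set(range(start, end))
            if span & taken:            # overlaps a higher-priority annotation
                continue
            merged.append((start, end, etype, source))
            taken |= span
    return sorted(merged)

merged = merge_annotations({
    "spacy": [(0, 2, "ORG")],
    "seed_guided": [(0, 2, "CORONAVIRUS")],
})
# -> [(0, 2, 'CORONAVIRUS', 'seed_guided')]
```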
Results ::: NER Annotation Results
In Figure FIGREF28, we show some examples of the annotation results in CORD-19-NER. We can see that our distantly- and weakly-supervised methods achieve high quality in recognizing the new entity types, requiring only a few seed examples as input. For example, we recognized “SARS-CoV-2" as the “CORONAVIRUS" type, “bat" and “pangolins" as the “WILDLIFE" type, and “Van der Waals forces" as the “PHYSICAL_SCIENCE" type. These NER annotation results help downstream text mining tasks discover the origin and the physical nature of the virus. Our NER methods are domain-independent and can be applied to corpora in different domains. In addition, we show another example of NER annotation on a New York Times article with our system in Figure FIGREF29.
In Figure FIGREF30, we show the comparison of our annotation results with existing NER/BioNER systems. In Figure FIGREF30, we can see that only our method identifies “SARS-CoV-2" as a coronavirus. In Figure FIGREF30, we can see that our method identifies many more entities, such as “phylogenetic" as an evolution term and “bat" as a wildlife entity. In Figure FIGREF30, we can also see that our method identifies many more entities, such as “racism" as a social behavior. In summary, our distantly- and weakly-supervised NER methods are reliable for high-quality entity recognition without requiring human effort for training data annotation.
Results ::: Top-Frequent Entity Summarization
In Table TABREF34, we show some examples of the most frequent entities in the annotated corpus. Specifically, we show entity types including both our new types and some UMLS types that have not been manually annotated before. We find our annotated entities very informative for COVID-19 studies. For example, the most frequent entities for the type “SIGN_OR_SYMPTOM" include “cough" and “respiratory symptoms", which are the most common symptoms of COVID-19. The most frequent entities for the type “INDIVIDUAL_BEHAVIOR" include “hand hygiene", “disclosures" and “absenteeism", which indicates that people focus on hand cleaning for the COVID-19 issue. Also, the most frequent entities for the type “MACHINE_ACTIVITY" include “machine learning", “data processing" and “automation", which indicates that people focus on automated methods that can process massive data for COVID-19 studies. This type also includes “telecommunication" among its top results, which is quite reasonable under the current COVID-19 situation. More examples can be found in our dataset.
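As a rough illustration of how such per-type frequency summaries can be computed from the released annotations, here is a small sketch; the flat `(entity_text, entity_type)` pairs are an assumed layout, not the actual CORD-19-ner.json schema.

```python
from collections import Counter

def top_entities(annotations, entity_type, k=5):
    """Return the k most frequent entity strings of a given type."""
    counts = Counter(text for text, etype in annotations if etype == entity_type)
    return counts.most_common(k)

annotations = [("cough", "SIGN_OR_SYMPTOM"), ("cough", "SIGN_OR_SYMPTOM"),
               ("respiratory symptoms", "SIGN_OR_SYMPTOM")]
print(top_entities(annotations, "SIGN_OR_SYMPTOM"))
# -> [('cough', 2), ('respiratory symptoms', 1)]
```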
Conclusion
In the future, we will further improve the CORD-19-NER dataset quality. We will also build text mining systems based on the CORD-19-NER dataset with richer functionalities. We hope this dataset can help the text mining community build downstream applications. We also hope this dataset can bring insights for the COVID-19 studies, both on the biomedical side and on the social side.
Acknowledgment
Research was sponsored in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and SocialSim Program No. W911NF-17-C-0099, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies. | 29,500 documents |
928828544e38fe26c53d81d1b9c70a9fb1cc3feb | 928828544e38fe26c53d81d1b9c70a9fb1cc3feb_1 | Q: What is the size of this dataset?
29,500 documents in the CORD-19 corpus (2020-03-13)
4f243056e63a74d1349488983dc1238228ca76a7 | 4f243056e63a74d1349488983dc1238228ca76a7_0 | Q: Do they list all the named entity types present?
No
d94ac550dfdb9e4bbe04392156065c072b9d75e1 | d94ac550dfdb9e4bbe04392156065c072b9d75e1_0 | Q: Is the method described in this work a clustering-based method?
Text:
1 Skolkovo Institute of Science and Technology, Moscow, Russia
v.logacheva@skoltech.ru
2 Ural Federal University, Yekaterinburg, Russia
3 Universität Hamburg, Hamburg, Germany
4 Universität Mannheim, Mannheim, Germany
5 University of Oslo, Oslo, Norway
6 Higher School of Economics, Moscow, Russia
Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models have been developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and the system are available online.
Keywords: word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering
Introduction
There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, resolving polysemy by choosing the most appropriate meaning of a word in context has long been an important NLP task. It is usually referred to as Word Sense Disambiguation (WSD).
The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.
We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.
We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.
The contributions of our work are the following:
We release word sense inventories associated with fastText embeddings for 158 languages.
We release a system that allows on-the-fly word sense disambiguation for 158 languages.
We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors.
Related Work
There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.
SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. The GlossBERT model BIBREF11, another supervised WSD system, achieves a significant improvement by leveraging gloss information. This model benefits from the sentence-pair classification approach introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.
On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.
The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.
Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-like context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via the Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.
Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.
Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors.
Algorithm for Word Sense Induction
The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours, since they reflect different senses of the word. Table TABREF7 shows the manually sense-labelled most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8. As suggested early on by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such a graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle.
Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm
SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (a semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. two nodes are joined with an edge if the cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.
The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context.
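A rough sketch of this baseline pipeline is given below. Here `emb` and `neighbours` are placeholders for a real embedding model and its nearest-neighbour lookup, and connected components stand in for a proper graph clustering algorithm such as Chinese Whispers, so this is an illustration rather than the exact SenseGram implementation.

```python
import numpy as np
import networkx as nx

def ego_graph(word, emb, neighbours, n=50, threshold=0.5):
    """Ego-graph of a word: connect neighbours whose similarity exceeds a threshold."""
    nodes = neighbours(word, n)
    graph = nx.Graph()
    graph.add_nodes_from(nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if float(np.dot(emb[u], emb[v])) > threshold:  # cosine on unit vectors
                graph.add_edge(u, v)
    return graph

def sense_vectors(graph, emb):
    # Connected components stand in for a proper graph clustering algorithm here.
    return [np.mean([emb[w] for w in component], axis=0)
            for component in nx.connected_components(graph)]
```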
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories
One of the downsides of the algorithm described above is noise in the generated graph, namely unrelated words and wrong connections, which hamper the separation of the graph. Another weak point is the imbalance in the nearest neighbour list: a large part of it may be attributed to the most frequent sense, leaving the other senses under-represented. This can lead to the construction of incorrect sense vectors.
We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:
Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.
Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$}, where $\delta _{i}~=~w-w_{i}$ for each $w_{i}$ in $\mathcal {N}$. The vectors in $\Delta $ contain the components of the sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal {N}$.
Compute a list $\overline{\mathcal {N}}$ = {$\overline{w_{1}}$, $\overline{w_{2}}$, ..., $\overline{w_{N}}$}, such that $\overline{w_{i}}$ is in the top nearest neighbours of $\delta _{i}$ in the embedding space. In other words, $\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\overline{E} = \lbrace (w_{1},\overline{w_{1}}), (w_{2},\overline{w_{2}}), ..., (w_{N},\overline{w_{N}})\rbrace $.
To clarify this, consider the target (ego) word $w = \textit {python}$, its top similar term $w_1 = \textit {Java}$ and the resulting anti-pair $\overline{w_i} = \textit {snake}$ which is the top related term of $\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\overline{w_i})=(\textit {Java}, \textit {snake})$ composed of a pair of semantically dissimilar terms.
Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\overline{E}$, with the following recurrent procedure: $V = V \cup \lbrace w_{i}, \overline{w_{i}}: w_{i} \in \mathcal {N}, \overline{w_{i}} \in \mathcal {N}\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.
Construct the set of edges $E$ as follows. For each $w_{i}~\in ~\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\mathcal {N}^{\prime }_{i} = \lbrace u_{1}, u_{2}, ..., u_{K}\rbrace $ and define $E = \lbrace (w_{i}, u_{j}): w_{i}~\in ~V, u_j~\in ~V, u_{j}~\in ~\mathcal {N}^{\prime }_{i}, u_{j}~\ne ~\overline{w_{i}}\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.
Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{ci}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph — if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.
According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.
After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require to pre-define the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.
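The following condensed sketch puts these steps together. Here `emb` maps words to L2-normalised vectors and `nearest(vec, k)` is a placeholder for a fastText/Faiss-backed search over arbitrary vectors; the anti-pair is taken as the single top neighbour of each $\delta_i$ for simplicity, and the clustering loop is a bare-bones version of Chinese Whispers rather than a tuned implementation.

```python
import random

def egvi_clusters(word, emb, nearest, n=50, k=50, iters=20):
    N = [w for w in nearest(emb[word], n + 1) if w != word][:n]      # nearest neighbours of the ego word
    anti = {w_i: nearest(emb[word] - emb[w_i], 1)[0] for w_i in N}   # anti-pair of each neighbour
    V = set()
    for w_i in N:
        if anti[w_i] in N:                # keep a neighbour and its anti-pair only if both are in N
            V.update((w_i, anti[w_i]))
    E = {v: set() for v in V}
    for w_i in V:
        for u in nearest(emb[w_i], k):
            if u in E and u != w_i and u != anti[w_i]:   # never connect a word to its anti-pair
                E[w_i].add(u)
                E[u].add(w_i)
    # Chinese Whispers: repeatedly let every node adopt the majority label of its neighbours.
    labels = {v: i for i, v in enumerate(V)}
    nodes = list(V)
    for _ in range(iters):
        random.shuffle(nodes)
        for v in nodes:
            if E[v]:
                counts = {}
                for u in E[v]:
                    counts[labels[u]] = counts.get(labels[u], 0) + 1
                labels[v] = max(counts, key=counts.get)
    clusters = {}
    for v, lab in labels.items():
        clusters.setdefault(lab, []).append(v)
    return list(clusters.values())
```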
Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19, where all 50 terms are clustered, the method presented in this section sparsifies the graph by removing 13 nodes which were not in the set of the “anti-edges", i.e. pairs of most dissimilar terms, out of these 50 neighbours. Examples of anti-edges, i.e. pairs of most dissimilar terms, for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses
We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among words in this cluster.
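A small sketch of this labelling rule, assuming the induced clusters and the list of anti-edges are already available as plain Python structures:

```python
from collections import Counter

def label_clusters(clusters, anti_edges):
    """Keyword of each cluster = the member that occurs in the most anti-edges."""
    counts = Counter()
    for w, w_anti in anti_edges:
        counts[w] += 1
        counts[w_anti] += 1
    return {max(cluster, key=lambda v: counts[v]): cluster for cluster in clusters}

senses = label_clusters([["Java", "Haskell"], ["snake", "crocodile"]],
                        [("Java", "snake"), ("Haskell", "crocodile")])
# -> {'Java': ['Java', 'Haskell'], 'snake': ['snake', 'crocodile']}
```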
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation
We use the keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as the sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings as in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _{i} \cos (\mathbf {s}_i, \textbf {c}_w)$.
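A minimal sketch of this disambiguation rule, assuming `emb` maps words to L2-normalised vectors and `sense_keywords[word]` lists the keyword of each induced sense of the target word:

```python
import numpy as np

def disambiguate(word, context_words, emb, sense_keywords):
    """Pick the sense whose keyword vector is closest to the averaged context vector."""
    ctx = np.mean([emb[w] for w in context_words if w in emb], axis=0)
    ctx /= np.linalg.norm(ctx)
    scores = [float(np.dot(emb[key], ctx)) for key in sense_keywords[word]]
    best = int(np.argmax(scores))
    return sense_keywords[word][best], scores[best]
```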
System Design
We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context.
System Design ::: Construction of Sense Inventories
To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search for nearest neighbours takes substantial time; therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 GB.
The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements on the hardware. For calculations, we use in parallel 10–20 nodes of the Zhores cluster BIBREF28 empowered with Nvidia Tesla V100 graphics cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks.
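A minimal sketch of the Faiss-based nearest-neighbour search behind this step, assuming the fastText vectors are stored as an L2-normalised float32 matrix (a GPU index can be substituted for additional speed); the exact index type used in the released pipeline may differ.

```python
import faiss

def build_index(vectors):
    """Exact inner-product index; on unit vectors this equals cosine similarity."""
    index = faiss.IndexFlatIP(vectors.shape[1])
    index.add(vectors)
    return index

def nearest_neighbours(index, vectors, word_ids, n=200):
    scores, ids = index.search(vectors[word_ids], n + 1)
    return ids[:, 1:], scores[:, 1:]   # drop each word itself from its own neighbour list
```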
System Design ::: Word Sense Disambiguation System
The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.
Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities.
Evaluation
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on the WSD task.
Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup
We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.
We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33, shifting the generated sense vector in the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} \cos (w, u)\cdot u,$ where $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $\cos (w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$, we enforce the similarity of sense vectors to the original word, which is important for this task. In addition, we weight each $u$ by its similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.
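A small sketch of this shifted sense vector computation, assuming `emb` maps words to L2-normalised vectors (so the dot product equals cosine similarity) and `cluster` is one induced sense cluster $V_i$:

```python
import numpy as np

def sense_vector(word, cluster, emb, lam=0.5):
    """s_i = lam * w + (1 - lam) * (1/n) * sum_u cos(w, u) * u."""
    w = emb[word]
    weighted = np.mean([float(np.dot(w, emb[u])) * emb[u] for u in cluster], axis=0)
    return lam * w + (1.0 - lam) * weighted
```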
A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words; for such words, it returns their original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.
We use the original fastText vectors as a baseline. In this case, we compute the relatedness score of two words as the cosine similarity of their vectors.
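A minimal sketch of the scoring used in this evaluation: each pair is scored by the maximum cosine similarity over the two words' sense vectors and compared to the gold scores with Pearson correlation. The `sense_inventory` mapping is an assumed layout for the illustration.

```python
import numpy as np
from scipy.stats import pearsonr

def pair_score(senses_a, senses_b):
    """Maximum cosine similarity over all sense-vector combinations of two words."""
    return max(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
               for a in senses_a for b in senses_b)

def evaluate(pairs, gold_scores, sense_inventory):
    predicted = [pair_score(sense_inventory[a], sense_inventory[b]) for a, b in pairs]
    return pearsonr(predicted, gold_scores)[0]
```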
Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results
We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText.
Evaluation ::: Word Sense Disambiguation
The purpose of our sense vectors is disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on Task 13 of SemEval-2013 (Word Sense Induction) BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner.
Evaluation ::: Word Sense Disambiguation ::: Experimental Setup
The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives, and specifies 20 to 100 contexts per word, with a total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, and the gold standard clustering is generated from this labelling.
The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.
The performance of WSI models is measured with three metrics that require mapping of sense inventories (Jaccard Index, Kendall's $\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed).
Evaluation ::: Word Sense Disambiguation ::: Discussion of Results
We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open source context-eval tool.
Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants, please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of AdaGram and SenseGram models is reported according to Pelevina:16.
The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work.
Evaluation ::: Analysis
In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbour clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster); female names, e.g. Josie (magenta cluster); gems, e.g. Sapphire (yellow cluster); and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings featuring sub-string similarity, one can observe a cluster of different spellings of the word Ruby in green.
Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conference, e.g. pycon (red cluster).
In addition, we show a qualitative analysis of senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate clear separation of different senses, although the granularity can be too fine. For example, the first and the second cluster for mouse both refer to the computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants to the word they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.
We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that most languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.
Notably, there are no languages with an average number of senses less than 2, while the corresponding numbers for the English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method for merging spurious senses.
Conclusions and Future Work
We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.
The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.
A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project “5-100”. | Yes |
d94ac550dfdb9e4bbe04392156065c072b9d75e1 | d94ac550dfdb9e4bbe04392156065c072b9d75e1_1 | Q: Is the method described in this work a clustering-based method?
Text:
$^1$Skolkovo Institute of Science and Technology, Moscow, Russia
v.logacheva@skoltech.ru
$^2$Ural Federal University, Yekaterinburg, Russia
$^3$Universität Hamburg, Hamburg, Germany
$^4$Universität Mannheim, Mannheim, Germany
$^5$University of Oslo, Oslo, Norway
$^6$Higher School of Economics, Moscow, Russia
Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and system are available online.
word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering
Introduction
There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.
The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.
We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.
We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.
The contributions of our work are the following:
[noitemsep]
We release word sense inventories associated with fastText embeddings for 158 languages.
We release a system that allows on-the-fly word sense disambiguation for 158 languages.
We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors.
Related Work
There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.
SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.
On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.
The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.
Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-alike context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.
Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.
Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors.
Algorithm for Word Sense Induction
The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours: they reflect different senses of the word. Table TABREF7 shows the most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8, manually labelled with senses. As suggested early on by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such a graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle.
Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm
SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.
The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context.
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories
One of the downsides of the algorithm described above is noise in the generated graph, namely unrelated words and wrong connections, which hamper the separation of the graph. Another weak point is the imbalance of the nearest neighbour list, where a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to the construction of incorrect sense vectors.
We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:
Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.
Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$} for each $w_{i}$ in $\mathcal {N}$, where $\delta _{i}~=~w-w_{i}$. The vectors in $\delta $ contain the components of sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal {N}$.
Compute a list $\overline{\mathcal {N}}$ = {$\overline{w_{1}}$, $\overline{w_{2}}$, ..., $\overline{w_{N}}$}, such that $\overline{w_{i}}$ is in the top nearest neighbours of $\delta _{i}$ in the embedding space. In other words, $\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\overline{E} = \lbrace (w_{1},\overline{w_{1}}), (w_{2},\overline{w_{2}}), ..., (w_{N},\overline{w_{N}})\rbrace $.
To clarify this, consider the target (ego) word $w = \textit {python}$, its top similar term $w_1 = \textit {Java}$ and the resulting anti-pair $\overline{w_i} = \textit {snake}$ which is the top related term of $\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\overline{w_i})=(\textit {Java}, \textit {snake})$ composed of a pair of semantically dissimilar terms.
Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\overline{E}$, with the following recurrent procedure: $V = V \cup \lbrace w_{i}, \overline{w_{i}}: w_{i} \in \mathcal {N}, \overline{w_{i}} \in \mathcal {N}\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.
Construct the set of edges $E$ as follows. For each $w_{i}~\in ~\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\mathcal {N}^{\prime }_{i} = \lbrace u_{1}, u_{2}, ..., u_{K}\rbrace $ and define $E = \lbrace (w_{i}, u_{j}): w_{i}~\in ~V, u_j~\in ~V, u_{j}~\in ~\mathcal {N}^{\prime }_{i}, u_{j}~\ne ~\overline{w_{i}}\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.
Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{i}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of the ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph: if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.
According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.
After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require to pre-define the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.
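To make the procedure above concrete, the following Python sketch implements the anti-edge graph construction (steps 1–5) and a minimal Chinese Whispers clustering for a single target word. It assumes the embeddings are given as a list of words and a matching numpy matrix; the brute-force nearest-neighbour search and the simplified clustering loop are illustrative choices, not the exact implementation used to build the released inventories.

```python
import random
from collections import Counter

import numpy as np


def nearest(vec, vocab, vectors, topn, exclude=()):
    """Return the topn vocabulary words most cosine-similar to vec."""
    sims = vectors @ vec / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(vec) + 1e-9)
    result = []
    for idx in np.argsort(-sims):
        word = vocab[idx]
        if word not in exclude:
            result.append(word)
        if len(result) == topn:
            break
    return result


def egvi_graph(w, vocab, vectors, N=50, K=50):
    """Steps 1-5: build the pruned ego-graph of w from anti-edges."""
    idx = {word: i for i, word in enumerate(vocab)}
    wv = vectors[idx[w]]
    neighbours = nearest(wv, vocab, vectors, N, exclude={w})          # step 1
    anti = {wi: nearest(wv - vectors[idx[wi]], vocab, vectors, 1,     # steps 2-3
                        exclude={w, wi})[0] for wi in neighbours}
    V = {wi for wi in neighbours if anti[wi] in neighbours}           # step 4
    V |= {anti[wi] for wi in set(V)}
    E = set()
    for wi in V:                                                      # step 5
        for uj in nearest(vectors[idx[wi]], vocab, vectors, K, exclude={wi}):
            if uj in V and uj != anti.get(wi):
                E.add((wi, uj))
    return V, E, anti


def chinese_whispers(V, E, iterations=20, seed=0):
    """Bottom-up label propagation; the number of clusters is not fixed in advance."""
    rng = random.Random(seed)
    adj = {v: [] for v in V}
    for a, b in E:
        adj[a].append(b)
        adj[b].append(a)
    labels = {v: v for v in V}
    nodes = list(V)
    for _ in range(iterations):
        rng.shuffle(nodes)
        for v in nodes:
            if adj[v]:
                labels[v] = Counter(labels[u] for u in adj[v]).most_common(1)[0][0]
    clusters = {}
    for v, lab in labels.items():
        clusters.setdefault(lab, set()).add(v)
    return list(clusters.values())
```

Calling chinese_whispers on the vertex and edge sets produced by egvi_graph for a single word yields its induced sense clusters; in practice the procedure is repeated for every word in the frequency-limited vocabulary.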
Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19 where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges”, i.e. pairs of most dissimilar terms out of these 50 neighbours. Examples of anti-edges, i.e. pairs of most dissimilar terms, for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses
We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among words in this cluster.
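A possible implementation of this keyword selection, reusing the anti mapping and the clusters from the sketch above (the function name is ours), could look as follows:

```python
from collections import Counter


def label_clusters(clusters, anti):
    """For each induced cluster, pick the word involved in the largest number of anti-edges."""
    counts = Counter()
    for wi, wa in anti.items():   # each anti-edge (wi, anti-pair of wi) counts for both endpoints
        counts[wi] += 1
        counts[wa] += 1
    return {max(cluster, key=lambda v: counts[v]): set(cluster) for cluster in clusters}
```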
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation
We use the keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as a sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings like in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _{i} \cos (\mathbf {s}_i, \textbf {c}_w)$.
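A minimal sketch of this disambiguation rule is given below; get_vector stands for a lookup into the pre-trained word embeddings and is an assumption of the sketch rather than part of the released system.

```python
import numpy as np


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def disambiguate(context_words, sense_keywords, get_vector):
    """Return the sense keyword whose embedding is closest to the averaged context vector."""
    c_w = np.mean([get_vector(t) for t in context_words], axis=0)
    return max(sense_keywords, key=lambda key: cosine(get_vector(key), c_w))
```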
System Design
We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context.
System Design ::: Construction of Sense Inventories
To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search of nearest neighbours takes substantial time; therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb.
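As an illustration of this step, a nearest-neighbour index over normalised fastText vectors can be set up with Faiss roughly as follows; the GPU transfer and the batching over the vocabulary are simplified here, and the exact settings used on the cluster are not reproduced.

```python
import faiss
import numpy as np


def build_knn_index(vectors, use_gpu=False):
    """Index L2-normalised vectors so that inner-product search equals cosine similarity."""
    x = np.ascontiguousarray(vectors, dtype=np.float32)
    faiss.normalize_L2(x)
    index = faiss.IndexFlatIP(x.shape[1])
    if use_gpu:
        index = faiss.index_cpu_to_gpu(faiss.StandardGpuResources(), 0, index)
    index.add(x)
    return index, x


# e.g. the 200 nearest neighbours of every indexed word (each word retrieves itself first):
# sims, ids = index.search(x, 200 + 1)
```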
The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements on the hardware. For calculations, we use in parallel 10–20 nodes of the Zhores cluster BIBREF28 equipped with Nvidia Tesla V100 graphics cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks.
System Design ::: Word Sense Disambiguation System
The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.
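For instance, the language identification step can be reproduced with the off-the-shelf fastText model; the file name lid.176.bin below refers to the publicly released identification model, and the small wrapper function is ours.

```python
import fasttext

# Assumes the released fastText language-identification model has been downloaded as lid.176.bin.
lid_model = fasttext.load_model("lid.176.bin")


def detect_language(text):
    """Return the most probable language code and its confidence for the input text."""
    labels, probs = lid_model.predict(text.replace("\n", " "), k=1)
    return labels[0].replace("__label__", ""), float(probs[0])
```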
Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities.
Evaluation
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on the WSD task.
Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup
We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.
We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33, shifting the generated sense vector in the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} cos(w, u)\cdot u, $ where $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $cos(w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$ we enforce the similarity of sense vectors to the original word, which is important for this task. In addition to that, we weight each $u$ by its similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.
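The sense-vector construction and the maximum-similarity relatedness score described above can be sketched as follows; get_vector again denotes an assumed lookup into the original embeddings.

```python
import numpy as np


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def sense_vectors(w_vec, clusters, get_vector, lam=0.5):
    """s_i = lam * w + (1 - lam) * mean over the cluster of cos(w, u) * u."""
    return [lam * w_vec + (1.0 - lam) * np.mean(
                [cosine(w_vec, get_vector(u)) * get_vector(u) for u in cluster], axis=0)
            for cluster in clusters]


def relatedness(senses_a, senses_b):
    """Score a word pair as the maximum cosine similarity over all their sense vectors."""
    return max(cosine(sa, sb) for sa in senses_a for sb in senses_b)
```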
A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words; for such words it returns the original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.
We use the original fastText vectors as a baseline. In this case, we compute the relatedness scores of the two words as a cosine similarity of their vectors.
Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results
We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText.
Evaluation ::: Word Sense Disambiguation
The purpose of our sense vectors is disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on the Task 13 of SemEval-2013 — Word Sense Induction BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner.
Evaluation ::: Word Sense Disambiguation ::: Experimental Setup
The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.
The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.
The performance of WSI models is measured with three metrics that require mapping of sense inventories (Jaccard Index, Kendall's $\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed).
Evaluation ::: Word Sense Disambiguation ::: Discussion of Results
We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open source context-eval tool.
Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants; please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of the AdaGram and SenseGram models is reported according to Pelevina:16.
The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work.
Evaluation ::: Analysis
In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster); female names, e.g. Josie (magenta cluster); gems, e.g. Sapphire (yellow cluster); and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings, which feature sub-string similarity, one can observe a fifth cluster containing different spellings of the word Ruby (green cluster).
Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conferences, e.g. pycon (red cluster).
In addition, we show a qualitative analysis of senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate clear separation of different senses, although the separation can be too fine-grained. For example, the first and the second cluster for mouse both refer to computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants with the word they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.
We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that most languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.
Notably, there are no languages with an average number of senses less than 2, while the corresponding numbers for the English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method for merging spurious senses.
Conclusions and Future Work
We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.
The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.
A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project “5-100”. | Yes |
eeb6e0caa4cf5fdd887e1930e22c816b99306473 | eeb6e0caa4cf5fdd887e1930e22c816b99306473_0 | Q: How are the different senses annotated/labeled?
Text:
$^1$Skolkovo Institute of Science and Technology, Moscow, Russia
v.logacheva@skoltech.ru
$^2$Ural Federal University, Yekaterinburg, Russia
$^3$Universität Hamburg, Hamburg, Germany
$^4$Universität Mannheim, Mannheim, Germany
$^5$University of Oslo, Oslo, Norway
$^6$Higher School of Economics, Moscow, Russia
Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and system are available online.
word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering
Introduction
There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.
The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.
We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.
We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.
The contributions of our work are the following:
[noitemsep]
We release word sense inventories associated with fastText embeddings for 158 languages.
We release a system that allows on-the-fly word sense disambiguation for 158 languages.
We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors.
Related Work
There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.
SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.
On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.
The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.
Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-alike context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.
Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.
Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors.
Algorithm for Word Sense Induction
The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours: they reflect different senses of the word. Table TABREF7 shows the most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8, manually labelled with senses. As suggested early on by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such a graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle.
Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm
SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.
The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context.
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories
One of the downsides of the algorithm described above is noise in the generated graph, namely unrelated words and wrong connections, which hamper the separation of the graph. Another weak point is the imbalance of the nearest neighbour list, where a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to the construction of incorrect sense vectors.
We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:
Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.
Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$} for each $w_{i}$ in $\mathcal {N}$, where $\delta _{i}~=~w-w_{i}$. The vectors in $\delta $ contain the components of sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal {N}$.
Compute a list $\overline{\mathcal {N}}$ = {$\overline{w_{1}}$, $\overline{w_{2}}$, ..., $\overline{w_{N}}$}, such that $\overline{w_{i}}$ is in the top nearest neighbours of $\delta _{i}$ in the embedding space. In other words, $\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\overline{E} = \lbrace (w_{1},\overline{w_{1}}), (w_{2},\overline{w_{2}}), ..., (w_{N},\overline{w_{N}})\rbrace $.
To clarify this, consider the target (ego) word $w = \textit {python}$, its top similar term $w_1 = \textit {Java}$ and the resulting anti-pair $\overline{w_i} = \textit {snake}$ which is the top related term of $\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\overline{w_i})=(\textit {Java}, \textit {snake})$ composed of a pair of semantically dissimilar terms.
Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\overline{E}$, with the following recurrent procedure: $V = V \cup \lbrace w_{i}, \overline{w_{i}}: w_{i} \in \mathcal {N}, \overline{w_{i}} \in \mathcal {N}\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.
Construct the set of edges $E$ as follows. For each $w_{i}~\in ~\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\mathcal {N}^{\prime }_{i} = \lbrace u_{1}, u_{2}, ..., u_{K}\rbrace $ and define $E = \lbrace (w_{i}, u_{j}): w_{i}~\in ~V, u_j~\in ~V, u_{j}~\in ~\mathcal {N}^{\prime }_{i}, u_{j}~\ne ~\overline{w_{i}}\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.
Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{i}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of the ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph: if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.
According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.
After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require to pre-define the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.
Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19 where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges”, i.e. pairs of most dissimilar terms out of these 50 neighbours. Examples of anti-edges, i.e. pairs of most dissimilar terms, for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses
We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among words in this cluster.
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation
We use the keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as a sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings like in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _{i} \cos (\mathbf {s}_i, \textbf {c}_w)$.
System Design
We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context.
System Design ::: Construction of Sense Inventories
To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for the search of similar vectors implemented in the Faiss library BIBREF27. The search of nearest neighbours takes substantial time; therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb.
The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements on the hardware. For calculations, we use in parallel 10–20 nodes of the Zhores cluster BIBREF28 equipped with Nvidia Tesla V100 graphics cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks.
System Design ::: Word Sense Disambiguation System
The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.
Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities.
Evaluation
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task.
Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup
We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.
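A small sketch of this scoring scheme is shown below; it assumes unit-normalised sense vectors and uses SciPy's pearsonr for the correlation, which is our choice for illustration rather than something stated in the paper.

```python
import numpy as np
from scipy.stats import pearsonr

def max_sense_similarity(senses_a, senses_b):
    """senses_a / senses_b: lists of unit-normalised sense vectors for the two words."""
    return max(float(np.dot(a, b)) for a in senses_a for b in senses_b)

def evaluate(pairs, gold_scores, sense_inventory):
    """pairs: list of (word1, word2); sense_inventory: word -> list of unit sense vectors."""
    predicted = [max_sense_similarity(sense_inventory[w1], sense_inventory[w2])
                 for w1, w2 in pairs]
    return pearsonr(predicted, gold_scores)[0]
```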
We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33 shifting the generated sense vector to the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} cos(w, u)\cdot u, $ where, $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $cos(w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$ we enforce the similarity of sense vectors to the original word important for this task. In addition to that, we weight $u$ by their similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.
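The shifted sense vector above can be computed with a few lines of NumPy; the sketch below follows the formula directly and is only illustrative.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def sense_vector(w_vec, cluster_vecs, lam=0.5):
    """s_i = lam * w + (1 - lam) * (1/n) * sum_u cos(w, u) * u  (the formula above)."""
    weighted = np.mean([cos(w_vec, u) * u for u in cluster_vecs], axis=0)
    return lam * w_vec + (1.0 - lam) * weighted
```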
A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words, for such words it returns their original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.
We use the original fastText vectors as a baseline. In this case, we compute the relatedness scores of the two words as a cosine similarity of their vectors.
Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results
We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText.
Evaluation ::: Word Sense Disambiguation
The purpose of our sense vectors is disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on the Task 13 of SemEval-2013 — Word Sense Induction BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner.
Evaluation ::: Word Sense Disambiguation ::: Experimental Setup
The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.
The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.
The performance of WSI models is measured with three metrics that require mapping of sense inventories (Jaccard Index, Kendall's $\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed).
Evaluation ::: Word Sense Disambiguation ::: Discussion of Results
We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open source context-eval tool.
Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants, please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of AdaGram and SenseGram models is reported according to Pelevina:16.
The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work.
Evaluation ::: Analysis
In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster), female names, e.g. Josie (magenta cluster), gems, e.g. Sapphire (yellow cluster), and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings featuring sub-string similarity, one can observe a fifth cluster of different spellings of the word Ruby in green.
Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conference, e.g. pycon (red cluster).
In addition, we show a qualitative analysis of senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate clear separation of different senses, although it can be too fine-grained. For example, the first and the second cluster for mouse both refer to computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants with the words they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.
We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that more languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.
Notably, there are no languages with an average number of senses less than 2, while numbers on English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method of merging spurious senses.
Conclusions and Future Work
We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.
The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.
A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project “5-100”. | The contexts are manually labelled with WordNet senses of the target words |
3c0eaa2e24c1442d988814318de5f25729696ef5 | 3c0eaa2e24c1442d988814318de5f25729696ef5_0 | Q: Was any extrinsic evaluation carried out?
Text:
$^1$Skolkovo Institute of Science and Technology, Moscow, Russia
v.logacheva@skoltech.ru
$^2$Ural Federal University, Yekaterinburg, Russia
$^3$Universität Hamburg, Hamburg, Germany
$^4$Universität Mannheim, Mannheim, Germany
$^5$University of Oslo, Oslo, Norway
$^6$Higher School of Economics, Moscow, Russia
Disambiguation of word senses in context is easy for humans, but is a major challenge for automatic approaches. Sophisticated supervised and knowledge-based models were developed to solve this task. However, (i) the inherent Zipfian distribution of supervised training instances for a given word and/or (ii) the quality of linguistic knowledge representations motivate the development of completely unsupervised and knowledge-free approaches to word sense disambiguation (WSD). They are particularly useful for under-resourced languages which do not have any resources for building either supervised and/or knowledge-based models. In this paper, we present a method that takes as input a standard pre-trained word embedding model and induces a fully-fledged word sense inventory, which can be used for disambiguation in context. We use this method to induce a collection of sense inventories for 158 languages on the basis of the original pre-trained fastText word embeddings by Grave:18, enabling WSD in these languages. Models and system are available online.
word sense induction, word sense disambiguation, word embeddings, sense embeddings, graph clustering
Introduction
There are many polysemous words in virtually any language. If not treated as such, they can hamper the performance of all semantic NLP tasks BIBREF0. Therefore, the task of resolving the polysemy and choosing the most appropriate meaning of a word in context has been an important NLP task for a long time. It is usually referred to as Word Sense Disambiguation (WSD) and aims at assigning meaning to a word in context.
The majority of approaches to WSD are based on the use of knowledge bases, taxonomies, and other external manually built resources BIBREF1, BIBREF2. However, different senses of a polysemous word occur in very diverse contexts and can potentially be discriminated with their help. The fact that semantically related words occur in similar contexts, and diverse words do not share common contexts, is known as distributional hypothesis and underlies the technique of constructing word embeddings from unlabelled texts. The same intuition can be used to discriminate between different senses of individual words. There exist methods of training word embeddings that can detect polysemous words and assign them different vectors depending on their contexts BIBREF3, BIBREF4. Unfortunately, many wide-spread word embedding models, such as GloVe BIBREF5, word2vec BIBREF6, fastText BIBREF7, do not handle polysemous words. Words in these models are represented with single vectors, which were constructed from diverse sets of contexts corresponding to different senses. In such cases, their disambiguation needs knowledge-rich approaches.
We tackle this problem by suggesting a method of post-hoc unsupervised WSD. It does not require any external knowledge and can separate different senses of a polysemous word using only the information encoded in pre-trained word embeddings. We construct a semantic similarity graph for words and partition it into densely connected subgraphs. This partition allows for separating different senses of polysemous words. Thus, the only language resource we need is a large unlabelled text corpus used to train embeddings. This makes our method applicable to under-resourced languages. Moreover, while other methods of unsupervised WSD need to train embeddings from scratch, we perform retrofitting of sense vectors based on existing word embeddings.
We create a massively multilingual application for on-the-fly word sense disambiguation. When receiving a text, the system identifies its language and performs disambiguation of all the polysemous words in it based on pre-extracted word sense inventories. The system works for 158 languages, for which pre-trained fastText embeddings are available BIBREF8. The created inventories are based on these embeddings. To the best of our knowledge, our system is the only WSD system for the majority of the presented languages. Although it does not match the state of the art for resource-rich languages, it is fully unsupervised and can be used for virtually any language.
The contributions of our work are the following:
[noitemsep]
We release word sense inventories associated with fastText embeddings for 158 languages.
We release a system that allows on-the-fly word sense disambiguation for 158 languages.
We present egvi (Ego-Graph Vector Induction), a new algorithm of unsupervised word sense induction, which creates sense inventories based on pre-trained word vectors.
Related Work
There are two main scenarios for WSD: the supervised approach that leverages training corpora explicitly labelled for word sense, and the knowledge-based approach that derives sense representation from lexical resources, such as WordNet BIBREF9. In the supervised case WSD can be treated as a classification problem. Knowledge-based approaches construct sense embeddings, i.e. embeddings that separate various word senses.
SupWSD BIBREF10 is a state-of-the-art system for supervised WSD. It makes use of linear classifiers and a number of features such as POS tags, surrounding words, local collocations, word embeddings, and syntactic relations. GlossBERT model BIBREF11, which is another implementation of supervised WSD, achieves a significant improvement by leveraging gloss information. This model benefits from sentence-pair classification approach, introduced by Devlin:19 in their BERT contextualized embedding model. The input to the model consists of a context (a sentence which contains an ambiguous word) and a gloss (sense definition) from WordNet. The context-gloss pair is concatenated through a special token ([SEP]) and classified as positive or negative.
On the other hand, sense embeddings are an alternative to traditional word vector models such as word2vec, fastText or GloVe, which represent monosemous words well but fail for ambiguous words. Sense embeddings represent individual senses of polysemous words as separate vectors. They can be linked to an explicit inventory BIBREF12 or induce a sense inventory from unlabelled data BIBREF13. LSTMEmbed BIBREF13 aims at learning sense embeddings linked to BabelNet BIBREF14, at the same time handling word ordering, and using pre-trained embeddings as an objective. Although it was tested only on English, the approach can be easily adapted to other languages present in BabelNet. However, manually labelled datasets as well as knowledge bases exist only for a small number of well-resourced languages. Thus, to disambiguate polysemous words in other languages one has to resort to fully unsupervised techniques.
The task of Word Sense Induction (WSI) can be seen as an unsupervised version of WSD. WSI aims at clustering word senses and does not require to map each cluster to a predefined sense. Instead of that, word sense inventories are induced automatically from the clusters, treating each cluster as a single sense of a word. WSI approaches fall into three main groups: context clustering, word ego-network clustering and synonyms (or substitute) clustering.
Context clustering approaches consist in creating vectors which characterise words' contexts and clustering these vectors. Here, the definition of context may vary from window-based context to latent topic-alike context. Afterwards, the resulting clusters are either used as senses directly BIBREF15, or employed further to learn sense embeddings via Chinese Restaurant Process algorithm BIBREF16, AdaGram, a Bayesian extension of the Skip-Gram model BIBREF17, AutoSense, an extension of the LDA topic model BIBREF18, and other techniques.
Word ego-network clustering is applied to semantic graphs. The nodes of a semantic graph are words, and edges between them denote semantic relatedness which is usually evaluated with cosine similarity of the corresponding embeddings BIBREF19 or by PMI-like measures BIBREF20. Word senses are induced via graph clustering algorithms, such as Chinese Whispers BIBREF21 or MaxMax BIBREF22. The technique suggested in our work belongs to this class of methods and is an extension of the method presented by Pelevina:16.
Synonyms and substitute clustering approaches create vectors which represent synonyms or substitutes of polysemous words. Such vectors are created using synonymy dictionaries BIBREF23 or context-dependent substitutes obtained from a language model BIBREF24. Analogously to previously described techniques, word senses are induced by clustering these vectors.
Algorithm for Word Sense Induction
The majority of word vector models do not discriminate between multiple senses of individual words. However, a polysemous word can be identified via manual analysis of its nearest neighbours—they reflect different senses of the word. Table TABREF7 shows the manually sense-labelled most similar terms to the word Ruby according to the pre-trained fastText model BIBREF8. As was suggested earlier by Widdows:02, the distributional properties of a word can be used to construct a graph of words that are semantically related to it, and if a word is polysemous, such a graph can easily be partitioned into a number of densely connected subgraphs corresponding to different senses of this word. Our algorithm is based on the same principle.
Algorithm for Word Sense Induction ::: SenseGram: A Baseline Graph-based Word Sense Induction Algorithm
SenseGram is the method proposed by Pelevina:16 that separates nearest neighbours to induce word senses and constructs sense embeddings for each sense. It starts by constructing an ego-graph (semantic graph centred at a particular word) of the word and its nearest neighbours. The edges between the words denote their semantic relatedness, e.g. the two nodes are joined with an edge if cosine similarity of the corresponding embeddings is higher than a pre-defined threshold. The resulting graph can be clustered into subgraphs which correspond to senses of the word.
The sense vectors are then constructed by averaging embeddings of words in each resulting cluster. In order to use these sense vectors for word sense disambiguation in text, the authors compute the probabilities of sense vectors of a word given its context or the similarity of the sense vectors to the context.
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Induction of Sense Inventories
One of the downsides of the algorithm described above is noise in the generated graph, namely, unrelated words and wrong connections. They hamper the separation of the graph. Another weak point is the imbalance in the nearest neighbour list, when a large part of it is attributed to the most frequent sense, not sufficiently representing the other senses. This can lead to the construction of incorrect sense vectors.
We suggest a more advanced procedure of graph construction that uses the interpretability of vector addition and subtraction operations in word embedding space BIBREF6 while the previous algorithm only relies on the list of nearest neighbours in word embedding space. The key innovation of our algorithm is the use of vector subtraction to find pairs of most dissimilar graph nodes and construct the graph only from the nodes included in such “anti-edges”. Thus, our algorithm is based on graph-based word sense induction, but it also relies on vector-based operations between word embeddings to perform filtering of graph nodes. Analogously to the work of Pelevina:16, we construct a semantic relatedness graph from a list of nearest neighbours, but we filter this list using the following procedure:
Extract a list $\mathcal {N}$ = {$w_{1}$, $w_{2}$, ..., $w_{N}$} of $N$ nearest neighbours for the target (ego) word vector $w$.
Compute a list $\Delta $ = {$\delta _{1}$, $\delta _{2}$, ..., $\delta _{N}$} for each $w_{i}$ in $\mathcal {N}$, where $\delta _{i}~=~w-w_{i}$. The vectors in $\delta $ contain the components of sense of $w$ which are not related to the corresponding nearest neighbours from $\mathcal {N}$.
Compute a list $\overline{\mathcal {N}}$ = {$\overline{w_{1}}$, $\overline{w_{2}}$, ..., $\overline{w_{N}}$}, such that $\overline{w_{i}}$ is in the top nearest neighbours of $\delta _{i}$ in the embedding space. In other words, $\overline{w_{i}}$ is a word which is the most similar to the target (ego) word $w$ and least similar to its neighbour $w_{i}$. We refer to $\overline{w_{i}}$ as an anti-pair of $w_{i}$. The set of $N$ nearest neighbours and their anti-pairs form a set of anti-edges i.e. pairs of most dissimilar nodes – those which should not be connected: $\overline{E} = \lbrace (w_{1},\overline{w_{1}}), (w_{2},\overline{w_{2}}), ..., (w_{N},\overline{w_{N}})\rbrace $.
To clarify this, consider the target (ego) word $w = \textit {python}$, its top similar term $w_1 = \textit {Java}$ and the resulting anti-pair $\overline{w_i} = \textit {snake}$ which is the top related term of $\delta _1 = w - w_1$. Together they form an anti-edge $(w_i,\overline{w_i})=(\textit {Java}, \textit {snake})$ composed of a pair of semantically dissimilar terms.
Construct $V$, the set of vertices of our semantic graph $G=(V,E)$ from the list of anti-edges $\overline{E}$, with the following recurrent procedure: $V = V \cup \lbrace w_{i}, \overline{w_{i}}: w_{i} \in \mathcal {N}, \overline{w_{i}} \in \mathcal {N}\rbrace $, i.e. we add a word from the list of nearest neighbours and its anti-pair only if both of them are nearest neighbours of the original word $w$. We do not add $w$'s nearest neighbours if their anti-pairs do not belong to $\mathcal {N}$. Thus, we add only words which can help discriminating between different senses of $w$.
Construct the set of edges $E$ as follows. For each $w_{i}~\in ~\mathcal {N}$ we extract a set of its $K$ nearest neighbours $\mathcal {N}^{\prime }_{i} = \lbrace u_{1}, u_{2}, ..., u_{K}\rbrace $ and define $E = \lbrace (w_{i}, u_{j}): w_{i}~\in ~V, u_j~\in ~V, u_{j}~\in ~\mathcal {N}^{\prime }_{i}, u_{j}~\ne ~\overline{w_{i}}\rbrace $. In other words, we remove edges between a word $w_{i}$ and its nearest neighbour $u_j$ if $u_j$ is also its anti-pair. According to our hypothesis, $w_{i}$ and $\overline{w_{i}}$ belong to different senses of $w$, so they should not be connected (i.e. we never add anti-edges into $E$). Therefore, we consider any connection between them as noise and remove it.
Note that $N$ (the number of nearest neighbours for the target word $w$) and $K$ (the number of nearest neighbours of $w_{i}$) do not have to match. The difference between these parameters is the following. $N$ defines how many words will be considered for the construction of the ego-graph. On the other hand, $K$ defines the degree of relatedness between words in the ego-graph — if $K = 50$, then we will connect vertices $w$ and $u$ with an edge only if $u$ is in the list of 50 nearest neighbours of $w$. Increasing $K$ increases the graph connectivity and leads to lower granularity of senses.
According to our hypothesis, nearest neighbours of $w$ are grouped into clusters in the vector space, and each of the clusters corresponds to a sense of $w$. The described vertices selection procedure allows picking the most representative members of these clusters which are better at discriminating between the clusters. In addition to that, it helps dealing with the cases when one of the clusters is over-represented in the nearest neighbour list. In this case, many elements of such a cluster are not added to $V$ because their anti-pairs fall outside the nearest neighbour list. This also improves the quality of clustering.
After the graph construction, the clustering is performed using the Chinese Whispers algorithm BIBREF21. This is a bottom-up clustering procedure that does not require pre-defining the number of clusters, so it can correctly process polysemous words with varying numbers of senses as well as unambiguous words.
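The sketch below illustrates steps 1–5 together with the clustering step. It assumes a gensim-style `most_similar` interface for the embeddings (the authors use Faiss-accelerated search instead) and includes a minimal inline version of Chinese Whispers rather than the original implementation, so it should be read only as an approximation of the procedure.

```python
import random
import networkx as nx

def egvi_senses(w, vectors, N=50, K=50, iters=20):
    """Approximate egvi ego-graph construction (steps 1-5) plus Chinese Whispers clustering."""
    neighbours = [u for u, _ in vectors.most_similar(w, topn=N)]          # step 1
    anti = {}
    for u in neighbours:                                                  # steps 2-3
        delta = vectors[w] - vectors[u]
        anti[u] = vectors.most_similar(positive=[delta], topn=1)[0][0]    # anti-pair of u
    V = {u for u in neighbours if anti[u] in neighbours} | \
        {anti[u] for u in neighbours if anti[u] in neighbours}            # step 4
    G = nx.Graph()
    G.add_nodes_from(V)
    for u in V:                                                           # step 5
        for v, _ in vectors.most_similar(u, topn=K):
            if v in V and v != anti.get(u) and u != anti.get(v):          # never add anti-edges
                G.add_edge(u, v)
    # Minimal Chinese Whispers: each node repeatedly adopts the most frequent
    # label among its neighbours until (approximate) convergence.
    labels = {v: i for i, v in enumerate(G.nodes())}
    for _ in range(iters):
        for v in random.sample(list(G.nodes()), len(G)):
            if G[v]:
                counts = {}
                for nb in G[v]:
                    counts[labels[nb]] = counts.get(labels[nb], 0) + 1
                labels[v] = max(counts, key=counts.get)
    clusters = {}
    for v, lab in labels.items():
        clusters.setdefault(lab, set()).add(v)
    return list(clusters.values())
```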
Figure FIGREF17 shows an example of the resulting pruned graph for the word Ruby for $N = 50$ nearest neighbours in terms of the fastText cosine similarity. In contrast to the baseline method by BIBREF19 where all 50 terms are clustered, in the method presented in this section we sparsify the graph by removing 13 nodes which were not in the set of the “anti-edges” i.e. pairs of most dissimilar terms out of these 50 neighbours. Examples of anti-edges i.e. pairs of most dissimilar terms for this graph include: (Haskell, Sapphire), (Garnet, Rails), (Opal, Rubyist), (Hazel, RubyOnRails), and (Coffeescript, Opal).
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Labelling of Induced Senses
We label each word cluster representing a sense to make them and the WSD results interpretable by humans. Prior systems used hypernyms to label the clusters BIBREF25, BIBREF26, e.g. “animal” in the “python (animal)”. However, neither hypernyms nor rules for their automatic extraction are available for all 158 languages. Therefore, we use a simpler method to select a keyword which would help to interpret each cluster. For each graph node $v \in V$ we count the number of anti-edges it belongs to: $count(v) = | \lbrace (w_i,\overline{w_i}) : (w_i,\overline{w_i}) \in \overline{E} \wedge (v = w_i \vee v = \overline{w_i}) \rbrace |$. A graph clustering yields a partition of $V$ into $n$ clusters: $V~=~\lbrace V_1, V_2, ..., V_n\rbrace $. For each cluster $V_i$ we define a keyword $w^{key}_i$ as the word with the largest number of anti-edges $count(\cdot )$ among words in this cluster.
Algorithm for Word Sense Induction ::: egvi (Ego-Graph Vector Induction): A Novel Word Sense Induction Algorithm ::: Word Sense Disambiguation
We use keywords defined above to obtain vector representations of senses. In particular, we simply use the word embedding of the keyword $w^{key}_i$ as a sense representation $\mathbf {s}_i$ of the target word $w$ to avoid explicit computation of sense embeddings like in BIBREF19. Given a sentence $\lbrace w_1, w_2, ..., w_{j}, w, w_{j+1}, ..., w_n\rbrace $ represented as a matrix of word vectors, we define the context of the target word $w$ as $\textbf {c}_w = \dfrac{\sum _{j=1}^{n} w_j}{n}$. Then, we define the most appropriate sense $\hat{s}$ as the sense with the highest cosine similarity to the embedding of the word's context: $\hat{s} = \arg \max _{i} \cos (\mathbf {s}_i, \textbf {c}_w)$.
System Design
We release a system for on-the-fly WSD for 158 languages. Given textual input, it identifies polysemous words and retrieves senses that are the most appropriate in the context.
System Design ::: Construction of Sense Inventories
To build word sense inventories (sense vectors) for 158 languages, we utilised GPU-accelerated routines for search of similar vectors implemented in Faiss library BIBREF27. The search of nearest neighbours takes substantial time, therefore, acceleration with GPUs helps to significantly reduce the word sense construction time. To further speed up the process, we keep all intermediate results in memory, which results in substantial RAM consumption of up to 200 Gb.
The construction of word senses for all of the 158 languages takes a lot of computational resources and imposes high requirements on the hardware. For calculations, we use in parallel 10–20 nodes of the Zhores cluster BIBREF28 empowered with Nvidia Tesla V100 graphic cards. For each of the languages, we construct inventories based on 50, 100, and 200 neighbours for the 100,000 most frequent words. The vocabulary was limited in order to make the computation time feasible. The construction of inventories for one language takes up to 10 hours, with $6.5$ hours on average. Building the inventories for all languages took more than 1,000 hours of GPU-accelerated computations. We release the constructed sense inventories for all the available languages. They contain all the necessary information for using them in the proposed WSD system or in other downstream tasks.
System Design ::: Word Sense Disambiguation System
The first text pre-processing step is language identification, for which we use the fastText language identification models by Bojanowski:17. Then the input is tokenised. For languages which use Latin, Cyrillic, Hebrew, or Greek scripts, we employ the Europarl tokeniser. For Chinese, we use the Stanford Word Segmenter BIBREF29. For Japanese, we use Mecab BIBREF30. We tokenise Vietnamese with UETsegmenter BIBREF31. All other languages are processed with the ICU tokeniser, as implemented in the PyICU project. After the tokenisation, the system analyses all the input words with pre-extracted sense inventories and defines the most appropriate sense for polysemous words.
Figure FIGREF19 shows the interface of the system. It has a textual input form. The automatically identified language of text is shown above. A click on any of the words displays a prompt (shown in black) with the most appropriate sense of a word in the specified context and the confidence score. In the given example, the word Jaguar is correctly identified as a car brand. This system is based on the system by Ustalov:18, extending it with a back-end for multiple languages, language detection, and sense browsing capabilities.
Evaluation
We first evaluate our converted embedding models on multi-language lexical similarity and relatedness tasks, as a sanity check, to make sure the word sense induction process did not hurt the general performance of the embeddings. Then, we test the sense embeddings on WSD task.
Evaluation ::: Lexical Similarity and Relatedness ::: Experimental Setup
We use the SemR-11 datasets BIBREF32, which contain word pairs with manually assigned similarity scores from 0 (words are not related) to 10 (words are fully interchangeable) for 12 languages: English (en), Arabic (ar), German (de), Spanish (es), Farsi (fa), French (fr), Italian (it), Dutch (nl), Portuguese (pt), Russian (ru), Swedish (sv), Chinese (zh). The task is to assign relatedness scores to these pairs so that the ranking of the pairs by this score is close to the ranking defined by the oracle score. The performance is measured with Pearson correlation of the rankings. Since one word can have several different senses in our setup, we follow Remus:18 and define the relatedness score for a pair of words as the maximum cosine similarity between any of their sense vectors.
We extract the sense inventories from fastText embedding vectors. We set $N=K$ for all our experiments, i.e. the number of vertices in the graph and the maximum number of vertices' nearest neighbours match. We conduct experiments with $N=K$ set to 50, 100, and 200. For each cluster $V_i$ we create a sense vector $s_i$ by averaging vectors that belong to this cluster. We rely on the methodology of BIBREF33 shifting the generated sense vector to the direction of the original word vector: $s_i~=~\lambda ~w + (1-\lambda )~\dfrac{1}{n}~\sum _{u~\in ~V_i} cos(w, u)\cdot u, $ where, $\lambda \in [0, 1]$, $w$ is the embedding of the original word, $cos(w, u)$ is the cosine similarity between $w$ and $u$, and $n=|V_i|$. By introducing the linear combination of $w$ and $u~\in ~V_i$ we enforce the similarity of sense vectors to the original word important for this task. In addition to that, we weight $u$ by their similarity to the original word, so that more similar neighbours contribute more to the sense vector. The shifting parameter $\lambda $ is set to $0.5$, following Remus:18.
A fastText model is able to generate a vector for each word even if it is not represented in the vocabulary, due to the use of subword information. However, our system cannot assemble sense vectors for out-of-vocabulary words, for such words it returns their original fastText vector. Still, the coverage of the benchmark datasets by our vocabulary is at least 85% and approaches 100% for some languages, so we do not have to resort to this back-off strategy very often.
We use the original fastText vectors as a baseline. In this case, we compute the relatedness scores of the two words as a cosine similarity of their vectors.
Evaluation ::: Lexical Similarity and Relatedness ::: Discussion of Results
We compute the relatedness scores for all benchmark datasets using our sense vectors and compare them to cosine similarity scores of original fastText vectors. The results vary for different languages. Figure FIGREF28 shows the change in Pearson correlation score when switching from the baseline fastText embeddings to our sense vectors. The new vectors significantly improve the relatedness detection for German, Farsi, Russian, and Chinese, whereas for Italian, Dutch, and Swedish the score slightly falls behind the baseline. For other languages, the performance of sense vectors is on par with regular fastText.
Evaluation ::: Word Sense Disambiguation
The purpose of our sense vectors is disambiguation of polysemous words. Therefore, we test the inventories constructed with egvi on the Task 13 of SemEval-2013 — Word Sense Induction BIBREF34. The task is to identify the different senses of a target word in context in a fully unsupervised manner.
Evaluation ::: Word Sense Disambiguation ::: Experimental Setup
The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.
The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.
The performance of WSI models is measured with three metrics that require mapping of sense inventories (Jaccard Index, Kendall's $\tau $, and WNDCG) and two cluster comparison metrics (Fuzzy NMI and Fuzzy B-Cubed).
Evaluation ::: Word Sense Disambiguation ::: Discussion of Results
We compare our model with the models that participated in the task, the baseline ego-graph clustering model by Pelevina:16, and AdaGram BIBREF17, a method that learns sense embeddings based on a Bayesian extension of the Skip-gram model. Besides that, we provide the scores of the simple baselines originally used in the task: assigning one sense to all words, assigning the most frequent sense to all words, and considering each context as expressing a different sense. The evaluation of our model was performed using the open source context-eval tool.
Table TABREF31 shows the performance of these models on the SemEval dataset. Due to space constraints, we only report the scores of the best-performing SemEval participants, please refer to jurgens-klapaftis-2013-semeval for the full results. The performance of AdaGram and SenseGram models is reported according to Pelevina:16.
The table shows that the performance of egvi is similar to state-of-the-art word sense disambiguation and word sense induction models. In particular, we can see that it outperforms SenseGram on the majority of metrics. We should note that this comparison is not fully rigorous, because SenseGram induces sense inventories from word2vec as opposed to fastText vectors used in our work.
Evaluation ::: Analysis
In order to see how the separation of word contexts that we perform corresponds to actual senses of polysemous words, we visualise ego-graphs produced by our method. Figure FIGREF17 shows the nearest neighbours clustering for the word Ruby, which divides the graph into five senses: Ruby-related programming tools, e.g. RubyOnRails (orange cluster), female names, e.g. Josie (magenta cluster), gems, e.g. Sapphire (yellow cluster), and programming languages in general, e.g. Haskell (red cluster). Besides, as is typical for fastText embeddings featuring sub-string similarity, one can observe a fifth cluster of different spellings of the word Ruby in green.
Analogously, the word python (see Figure FIGREF35) is divided into the senses of animals, e.g. crocodile (yellow cluster), programming languages, e.g. perl5 (magenta cluster), and conference, e.g. pycon (red cluster).
In addition, we show a qualitative analysis of senses of mouse and apple. Table TABREF38 shows nearest neighbours of the original words separated into clusters (labels for clusters were assigned manually). These inventories demonstrate clear separation of different senses, although it can be too fine-grained. For example, the first and the second cluster for mouse both refer to computer mouse, but the first one addresses the different types of computer mice, and the second one is used in the context of mouse actions. Similarly, we see that iphone and macbook are separated into two clusters. Interestingly, fastText handles typos, code-switching, and emojis by correctly associating all non-standard variants with the words they refer to, and our method is able to cluster them appropriately. Both inventories were produced with $K=200$, which ensures stronger connectivity of the graph. However, we see that this setting still produces too many clusters. We computed the average numbers of clusters produced by our model with $K=200$ for words from the word relatedness datasets and compared these numbers with the number of senses in WordNet for English and RuWordNet BIBREF35 for Russian (see Table TABREF37). We can see that the number of senses extracted by our method is consistently higher than the real number of senses.
We also compute the average number of senses per word for all the languages and different values of $K$ (see Figure FIGREF36). The average across languages does not change much as we increase $K$. However, for larger $K$ the average exceeds the median value, indicating that more languages have a lower number of senses per word. At the same time, while at smaller $K$ the maximum average number of senses per word does not exceed 6, larger values of $K$ produce outliers, e.g. English with $12.5$ senses.
Notably, there are no languages with an average number of senses less than 2, while numbers on English and Russian WordNets are considerably lower. This confirms that our method systematically over-generates senses. The presence of outliers shows that this effect cannot be eliminated by further increasing $K$, because the $i$-th nearest neighbour of a word for $i>200$ can be only remotely related to this word, even if the word is rare. Thus, our sense clustering algorithm needs a method of merging spurious senses.
Conclusions and Future Work
We present egvi, a new algorithm for word sense induction based on graph clustering that is fully unsupervised and relies on graph operations between word vectors. We apply this algorithm to a large collection of pre-trained fastText word embeddings, releasing sense inventories for 158 languages. These inventories contain all the necessary information for constructing sense vectors and using them in downstream tasks. The sense vectors for polysemous words can be directly retrofitted with the pre-trained word embeddings and do not need any external resources. As one application of these multilingual sense inventories, we present a multilingual word sense disambiguation system that performs unsupervised and knowledge-free WSD for 158 languages without the use of any dictionary or sense-labelled corpus.
The evaluation of quality of the produced sense inventories is performed on multilingual word similarity benchmarks, showing that our sense vectors improve the scores compared to non-disambiguated word embeddings. Therefore, our system in its present state can improve WSD and downstream tasks for languages where knowledge bases, taxonomies, and annotated corpora are not available and supervised WSD models cannot be trained.
A promising direction for future work is combining distributional information from the induced sense inventories with lexical knowledge bases to improve WSD performance. Besides, we encourage the use of the produced word sense inventories in other downstream tasks.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) foundation under the “JOIN-T 2” and “ACQuA” projects. Ekaterina Artemova was supported by the framework of the HSE University Basic Research Program and Russian Academic Excellence Project “5-100”. | Yes |
dc1fe3359faa2d7daa891c1df33df85558bc461b | dc1fe3359faa2d7daa891c1df33df85558bc461b_0 | Q: Does the model use both spectrogram images and raw waveforms as features?
Text: Introduction
Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.
Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct units of sound in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on which the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the words cat in English, kat in Dutch, and katze in German have different consonants, but when used in speech they would all sound quite similar.
Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.
In previous studies BIBREF1, BIBREF7, BIBREF5, authors use the log-Mel spectrum of raw audio as input to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of the Attention mechanism in enhancing the performance of the neural network. As the log-Mel spectrum needs to be computed for each raw audio input, and the processing time for generating it increases linearly with the length of the audio, this step acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to a deep neural network, which boosts performance by avoiding the additional overhead of computing the log-Mel spectrum for each audio. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.
The structure of the work is as follows. In Section 2 we discuss the previous related studies in this field. The model architectures for both the raw waveforms and log-Mel spectrogram images are discussed in Section 3, along with a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work.
Related Work
Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.
Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where the learning rate increases and then decreases linearly. The maximum learning rate for a cycle is set by finding the optimal learning rate using the fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – achieving an accuracy of 89.0%.
Gazeau et al. BIBREF16 in their research showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov Models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other proposed models on the same VoxForge dataset. They reported an accuracy of 70.0%.
Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on the Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features, which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages: English, German, French and Spanish. They also trained their CNN model (obtained after removing the RNN from the CRNN model) and the Inception-v3 on their dataset. However, they were not able to achieve better results with these models, reporting accuracies of 90% and 95%, respectively.
Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising ten languages: Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.
Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on a dataset comprising 3 languages – English, French and German.
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel).
Proposed Method ::: Motivations
Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.
Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.
Audio files are a sequence of spoken words, hence they have temporal features too. A CNN is better at capturing spatial features only, and RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.
We propose three types of models to tackle the problem with different approaches, discussed as follows.
Proposed Method ::: Description of Features
As an average human's voice is around 300 Hz and, according to the Nyquist-Shannon sampling theorem, all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled raw audio files from all six languages at 8 kHz.
The average length of audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If an audio file was shorter than 10 seconds, then the data was repeated and concatenated. If an audio file was longer, then the data was truncated.
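A minimal sketch of this preprocessing (8 kHz resampling plus repeat-or-truncate to 10 seconds) could look as follows, using librosa for loading; the function name is ours and not taken from the paper.

```python
import numpy as np
import librosa

SR = 8000            # sampling rate (Hz)
CLIP_SECONDS = 10    # fixed clip length used for all models
TARGET_LEN = SR * CLIP_SECONDS

def load_clip(path):
    """Load an audio file at 8 kHz and force it to exactly 10 seconds."""
    y, _ = librosa.load(path, sr=SR, mono=True)
    if len(y) < TARGET_LEN:                        # repeat and concatenate short clips
        reps = int(np.ceil(TARGET_LEN / len(y)))
        y = np.tile(y, reps)
    return y[:TARGET_LEN]                           # truncate long clips
```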
Proposed Method ::: Model Description
We applied the following design principles to all our models:
Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.
Convolutional blocks are defined as an individual block with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.
Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.
Model ends with a dense layer which acts the final output layer.
Proposed Method ::: Model Details: 1D ConvNet
As the sampling rate is 8 kHz and the audio length is 10 s, the input to the model is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
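Since Table TABREF10 is not reproduced here, the following PyTorch sketch only illustrates the stated design principles (conv–BN–ReLU–max-pool blocks over raw waveforms of shape (batch, 1, 80000), 128 first-layer filters, kernel size 9, dropout 0.1, and a dense output layer); the exact number of blocks, filter counts, and pooling factors are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Raw1DConvNet(nn.Module):
    """Illustrative 1D ConvNet over raw waveforms of shape (batch, 1, 80000)."""
    def __init__(self, n_classes=6):
        super().__init__()
        def block(c_in, c_out, k=9, pool=4):
            # conv -> batch norm -> ReLU -> max pooling, as per the design principles
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool1d(pool),
            )
        self.features = nn.Sequential(
            block(1, 128), block(128, 128), block(128, 64), block(64, 64), block(64, 32),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Dropout(0.1), nn.Linear(32, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, 80000)
        return self.head(self.features(x))

model = Raw1DConvNet()
logits = model(torch.randn(4, 1, 80000))   # -> shape (4, 6)
```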
Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:
Tuning hyperparameters is a cumbersome process, as the hyperparameter space expands exponentially with the number of parameters; therefore, efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space (a minimal sketch of such a search setup is given after the list below). In Fig. FIGREF12, the various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:
Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.
Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.
Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.
Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results.
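As referenced above, the following is a minimal sketch of the Hyperopt random search setup. The parameter space mirrors the choices discussed in this section, while train_and_validate is a hypothetical training routine (not part of the paper) that returns validation accuracy.

```python
# A minimal sketch of random hyperparameter search with Hyperopt; the
# search space is modelled on the choices discussed above, and
# train_and_validate is a hypothetical function returning validation accuracy.
from hyperopt import fmin, rand, hp, STATUS_OK

space = {
    "n_filters":   hp.choice("n_filters", [32, 64, 128]),
    "kernel_size": hp.choice("kernel_size", [3, 5, 7, 9]),
    "dropout":     hp.choice("dropout", [0.1, 0.25, 0.5]),
    "batch_size":  hp.choice("batch_size", [32, 64, 128]),
}

def objective(params):
    val_acc = train_and_validate(**params)          # hypothetical routine
    return {"loss": -val_acc, "status": STATUS_OK}  # Hyperopt minimizes loss

best = fmin(objective, space, algo=rand.suggest, max_evals=50)
print(best)
```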
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU
The log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz, and the input to this model was the log-Mel spectra. We generated the log-Mel spectrograms using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
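A minimal sketch of the log-Mel extraction with LibROSA is shown below; the number of Mel bands, FFT size and hop length are illustrative assumptions, since their exact values are not stated here.

```python
# A minimal sketch of log-Mel spectrogram extraction with LibROSA;
# n_fft, hop_length and n_mels are assumed values, not the paper's settings.
import numpy as np
import librosa

def log_mel_spectrogram(audio, sr=8000, n_mels=128):
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=1024,
                                         hop_length=512, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)   # log-scaled Mel spectrogram
```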
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Design Choices
We took some specific design choices for this model, which are as follows (a minimal sketch of the attention modules is given after the list):
We added residual connections with each convolutional layer. Residual connections make the model selective about the contributing layers, help determine the optimal number of layers required for training, and mitigate the problem of vanishing gradients. Residual connections, or skip connections, skip the training of those layers that do not contribute much to the overall outcome of the model.
We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.
We added Channel Attention networks so as to help the model to find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language.
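As noted above, the following is a minimal PyTorch-style sketch of channel and spatial attention in the spirit of CBAM-like modules; it illustrates the mechanism only and is not the paper's exact implementation.

```python
# A minimal sketch of channel and spatial attention applied to a feature map
# of shape (batch, C, H, W); reduction ratio and kernel size are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze spatial dims
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.mlp(self.pool(x).flatten(1))         # per-channel weights
        return x * w[:, :, None, None]                # re-weight each channel

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # (batch, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                               # re-weight each location
```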
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:
We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19, various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:
Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.
Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features, whereas using a large kernel size will require fewer layers. Large kernels capture simple non-linear features, whereas a smaller kernel helps capture more complex non-linear features. However, with more layers, backpropagation requires more memory. We experimented with a large kernel size and gradually increased the layers in order to capture more complex features. The results were not conclusive, and thus we chose a kernel size of 7 over 3.
Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.
Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in the GRU helps the model to capture temporal features, which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using fewer hidden units may capture fewer features, whereas using a large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.
Image Size: We experimented with log-Mel spectra images of sizes $64 \times 64$ and $128 \times 128$ pixels and found that our model worked best with images of size of $128 \times 128$ pixels.
We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularizer and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepare the mixed-up dataset:

$I_{mix} = \alpha I_1 + (1 - \alpha ) I_2$

and

$L_{mix} = \alpha L_1 + (1 - \alpha ) L_2,$

where $\alpha \in [0, 1]$ is a random variable drawn from a $\beta $-distribution, $I_1$ and $I_2$ are two randomly chosen input images, and $L_1$ and $L_2$ are their corresponding labels.
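A minimal sketch of this mixup step is shown below, assuming NumPy arrays and one-hot labels; the batch-pairing strategy and the beta-distribution parameter are illustrative assumptions.

```python
# A minimal sketch of mixup: each training example becomes a convex
# combination of two randomly paired images and of their one-hot labels.
import numpy as np

def mixup_batch(images, labels, alpha=0.2):
    """images: (N, H, W, C) array, labels: (N, n_classes) one-hot array."""
    lam = np.random.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    perm = np.random.permutation(len(images))    # pair each image with another
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * labels + (1 - lam) * labels[perm]
    return mixed_x, mixed_y
```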
Proposed Method ::: Model details: 2D-ConvNet
This model is similar to the 2D-ConvNet with Attention and bi-directional GRU described in Section SECREF13, except that it lacks the skip connections, attention layers, bi-directional GRU and embedding layer incorporated in the previous model.
Proposed Method ::: Dataset
We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.
Our dataset consists of 1,500 samples for each of the six languages. Out of the 1,500 samples for each language, 1,200 were randomly selected as the training dataset for that language and the remaining 300 as the validation dataset, using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1,800 samples comprising six languages. The results are discussed in the next section.
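A minimal sketch of the per-language split described above is given below; how the sample files are discovered and named is an illustrative assumption.

```python
# A minimal sketch of the 1,200/300 per-language split described above;
# files is assumed to be a list of sample paths for one language.
import random

def split_language(files, n_train=1200, n_val=300, seed=0):
    """Randomly split one language's samples into train and validation lists."""
    files = list(files)
    random.Random(seed).shuffle(files)
    return files[:n_train], files[n_train:n_train + n_val]
```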
Results and Discussion
This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image as well as audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages.
In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.
In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features.
Results and Discussion ::: Misclassification
Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla, which are Romance, Germanic and Slavic. Of the six languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian to the Slavic phylum. Our model mostly confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words; for example, cat in English becomes Katze in German, and Ciao in Italian becomes Chao in Spanish.
Our model also confuses French (Fr) with Russian (Ru). While these languages belong to different phyla, many words were adopted from French into Russian; for example, automate (oot-oo-mate) in French becomes автомат (aff-taa-maat) in Russian, which has a similar pronunciation.
Results and Discussion ::: Future Scope
The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.
There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks.
Conclusion
There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to a 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.
Next, we discussed the enhancement in performance of the 2D-ConvNet using mixup augmentation, a recently developed technique to prevent overfitting. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of the networks. This approach achieved an accuracy of 95.0%. | No
922f1b740f8b13fdc8371e2a275269a44c86195e | 922f1b740f8b13fdc8371e2a275269a44c86195e_0 | Q: Is the performance compared against a baseline model?
Text: Introduction
Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.
Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct units of sound in that language, such as the b of black and the g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on which the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the words cat in English, kat in Dutch and katze in German have different consonants, but when used in speech they all sound quite similar.
Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.
In previous studies BIBREF1, BIBREF7, BIBREF5, authors use log-Mel spectrum of a raw audio as inputs to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network. As log-Mel spectrum needs to be computed for each raw audio input and processing time for generating log-Mel spectrum increases linearly with length of audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing log-Mel spectrum for each audio. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.
The structure of the work is as follows. In Section 2 we discuss about the previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3 along with the a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work.
Related Work
Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.
Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate, where the learning rate increases and then decreases linearly. The maximum learning rate for a cycle is set by finding the optimal learning rate using the fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – achieving an accuracy of 89.0%.
Gazeau et al. BIBREF16 showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other models they proposed on the same VoxForge dataset. They reported an accuracy of 70.0%.
Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on the Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages, which were English, German, French and Spanish. They also trained their CNN model (obtained after removing the RNN from the CRNN model) and the Inception-v3 on their dataset. However, these models did not achieve better results, reaching 90% and 95% accuracy, respectively.
Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising of ten languages being Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.
Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on a dataset comprising 3 languages – English, French and German.
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel).
Proposed Method ::: Motivations
Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.
Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.
Audio files are sequences of spoken words, hence they also carry temporal features. A CNN is better at capturing spatial features only, and RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.
We propose three types of models to tackle the problem with different approaches, discussed as follows.
Proposed Method ::: Description of Features
As an average human's voice is around 300 Hz, and according to the Nyquist-Shannon sampling theorem all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled the raw audio files from all six languages at 8 kHz.
The average length of the audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If an audio file was shorter than 10 seconds, the data was repeated and concatenated; if it was longer, the data was truncated.
Proposed Method ::: Model Description
We applied the following design principles to all our models:
Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.
Convolutional blocks are defined as individual blocks with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.
Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.
The model ends with a dense layer which acts as the final output layer.
Proposed Method ::: Model Details: 1D ConvNet
As the sampling rate is 8 kHz and the audio length is 10 s, the input to the model is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:
Tuning hyperparameters is a cumbersome process as the hyperparameter space expands exponentially with the number of parameters; therefore, efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF12, various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:
Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.
Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.
Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.
Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU
Log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated log-Mel spectrogram using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameter.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Design Choices
We took some specific design choices for this model, which are as follows:
We added residual connections with each convolutional layer. Residual connections in a way makes the model selective of the contributing layers, determines the optimal number of layers required for training and solves the problem of vanishing gradients. Residual connections or skip connections skip training of those layers that do not contribute much in the overall outcome of model.
We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.
We added Channel Attention networks so as to help the model to find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:
We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19 ,various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:
Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.
Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features whereas using a large kernel size will require less layers. Large kernels capture simple non-linear features whereas using a smaller kernel will help us capture more complex non-linear features. However, with more layers, backpropagation necessitates the need for a large memory. We experimented with large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive and thus we chose kernel size of 7 against 3.
Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.
Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.
Image Size: We experimented with log-Mel spectra images of sizes $64 \times 64$ and $128 \times 128$ pixels and found that our model worked best with images of size of $128 \times 128$ pixels.
We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularizer and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepare the mixed-up dataset:

$I_{mix} = \alpha I_1 + (1 - \alpha ) I_2$

and

$L_{mix} = \alpha L_1 + (1 - \alpha ) L_2,$

where $\alpha \in [0, 1]$ is a random variable drawn from a $\beta $-distribution, $I_1$ and $I_2$ are two randomly chosen input images, and $L_1$ and $L_2$ are their corresponding labels.
Proposed Method ::: Model details: 2D-ConvNet
This model is a similar model to 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13 except that it lacks skip connections, attention layers, bi-directional GRU and the embedding layer incorporated in the previous model.
Proposed Method ::: Dataset
We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.
Our dataset consists of 1,500 samples for each of six languages. Out of 1,500 samples for each language, 1,200 were randomly selected as training dataset for that language and rest 300 as validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1800 samples comprising six languages. The results are discussed in next section.
Results and Discussion
This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image as well as audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages.
In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.
In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features.
Results and Discussion ::: Misclassification
Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla, which are Romance, Germanic and Slavic. Of the six languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian to the Slavic phylum. Our model mostly confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words; for example, cat in English becomes Katze in German, and Ciao in Italian becomes Chao in Spanish.
Our model also confuses French (Fr) with Russian (Ru). While these languages belong to different phyla, many words were adopted from French into Russian; for example, automate (oot-oo-mate) in French becomes автомат (aff-taa-maat) in Russian, which has a similar pronunciation.
Results and Discussion ::: Future Scope
The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.
There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks.
Conclusion
There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to a 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.
Next, we discussed the enhancement in performance of the 2D-ConvNet using mixup augmentation, a recently developed technique to prevent overfitting. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of the networks. This approach achieved an accuracy of 95.0%. | Yes
922f1b740f8b13fdc8371e2a275269a44c86195e | 922f1b740f8b13fdc8371e2a275269a44c86195e_1 | Q: Is the performance compared against a baseline model?
Text: Introduction
Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.
Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct unit of sounds in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on whom the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the word cat in English, kat in Dutch, katze in German have different consonants but when used in a speech they all would sound quite similar.
Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.
In previous studies BIBREF1, BIBREF7, BIBREF5, authors use log-Mel spectrum of a raw audio as inputs to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network. As log-Mel spectrum needs to be computed for each raw audio input and processing time for generating log-Mel spectrum increases linearly with length of audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing log-Mel spectrum for each audio. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.
The structure of the work is as follows. In Section 2 we discuss about the previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3 along with the a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work.
Related Work
Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.
Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where learning rate increases and then decreases linearly. Maximum learning rate for a cycle is set by finding the optimal learning rate using fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – and achieving an accuracy of 89.0%.
Gazeau et al. BIBREF16 showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other models they proposed on the same VoxForge dataset. They reported an accuracy of 70.0%.
Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages which were English, German, French and Spanish. They also trained their CNN model (obtained after removing RNN from CRNN model) and the Inception-v3 on their dataset. However they were not able to achieve better results achieving and reported 90% and 95% accuracies, respectively.
Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising of ten languages being Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.
Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on a dataset comprising 3 languages – English, French and German.
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel).
Proposed Method ::: Motivations
Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.
Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.
Audio files are sequences of spoken words, hence they also carry temporal features. A CNN is better at capturing spatial features only, and RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 using audio files. Therefore, we combined both of these to make a CRNN model.
We propose three types of models to tackle the problem with different approaches, discussed as follows.
Proposed Method ::: Description of Features
As an average human's voice is around 300 Hz, and according to the Nyquist-Shannon sampling theorem all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled the raw audio files from all six languages at 8 kHz.
The average length of the audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If an audio file was shorter than 10 seconds, the data was repeated and concatenated; if it was longer, the data was truncated.
Proposed Method ::: Model Description
We applied the following design principles to all our models:
Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.
Convolutional blocks are defined as individual blocks with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.
Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.
The model ends with a dense layer which acts as the final output layer.
Proposed Method ::: Model Details: 1D ConvNet
As the sampling rate is 8 kHz and the audio length is 10 s, the input to the model is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:
Tuning hyperparameters is a cumbersome process as the hyperparameter space expands exponentially with the number of parameters; therefore, efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF12, various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:
Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.
Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.
Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.
Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU
Log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated log-Mel spectrogram using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameter.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Design Choices
We took some specific design choices for this model, which are as follows:
We added residual connections with each convolutional layer. Residual connections in a way makes the model selective of the contributing layers, determines the optimal number of layers required for training and solves the problem of vanishing gradients. Residual connections or skip connections skip training of those layers that do not contribute much in the overall outcome of model.
We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.
We added Channel Attention networks so as to help the model to find interdependencies among color channels of log-Mel spectra. It adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into bi-directional GRU. This helps the model to focus on selected regions and at the same time find patterns among channels to better determine the language.
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:
We used the random search algorithm supported by Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19 ,various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:
Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.
Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features whereas using a large kernel size will require less layers. Large kernels capture simple non-linear features whereas using a smaller kernel will help us capture more complex non-linear features. However, with more layers, backpropagation necessitates the need for a large memory. We experimented with large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive and thus we chose kernel size of 7 against 3.
Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.
Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.
Image Size: We experimented with log-Mel spectra images of sizes $64 \times 64$ and $128 \times 128$ pixels and found that our model worked best with images of size of $128 \times 128$ pixels.
We also evaluated our model on data with mixup augmentation BIBREF28. It is a data augmentation technique that also acts as a regularizer and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input. The following equations were used to prepare the mixed-up dataset:

$I_{mix} = \alpha I_1 + (1 - \alpha ) I_2$

and

$L_{mix} = \alpha L_1 + (1 - \alpha ) L_2,$

where $\alpha \in [0, 1]$ is a random variable drawn from a $\beta $-distribution, $I_1$ and $I_2$ are two randomly chosen input images, and $L_1$ and $L_2$ are their corresponding labels.
Proposed Method ::: Model details: 2D-ConvNet
This model is a similar model to 2D-ConvNet with Attention and bi-directional GRU described in section SECREF13 except that it lacks skip connections, attention layers, bi-directional GRU and the embedding layer incorporated in the previous model.
Proposed Method ::: Dataset
We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.
Our dataset consists of 1,500 samples for each of six languages. Out of 1,500 samples for each language, 1,200 were randomly selected as training dataset for that language and rest 300 as validation dataset using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1800 samples comprising six languages. The results are discussed in next section.
Results and Discussion
This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the image as well as audio domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present all the classification accuracies of the two models of the cases with and without mixup for six and four languages.
In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.
In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features.
Results and Discussion ::: Misclassification
Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla, which are Romance, Germanic and Slavic. Of the six languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian to the Slavic phylum. Our model mostly confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words; for example, cat in English becomes Katze in German, and Ciao in Italian becomes Chao in Spanish.
Our model also confuses French (Fr) with Russian (Ru). While these languages belong to different phyla, many words were adopted from French into Russian; for example, automate (oot-oo-mate) in French becomes автомат (aff-taa-maat) in Russian, which has a similar pronunciation.
Results and Discussion ::: Future Scope
The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.
There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks.
Conclusion
There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to a 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.
Next, we discussed the enhancement in performance of the 2D-ConvNet using mixup augmentation, a recently developed technique to prevent overfitting. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of the networks. This approach achieved an accuracy of 95.0%. | No
b39f2249a1489a2cef74155496511cc5d1b2a73d | b39f2249a1489a2cef74155496511cc5d1b2a73d_0 | Q: What is the accuracy reported by state-of-the-art methods?
Text: Introduction
Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa.
Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct units of sound in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on which the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the words cat in English, kat in Dutch and katze in German have different consonants, but when used in speech they all sound quite similar.
Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy.
In previous studies BIBREF1, BIBREF7, BIBREF5, the authors use the log-Mel spectrum of raw audio as input to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like mixup augmentation of inputs and by exploring the effectiveness of the attention mechanism in enhancing the performance of the neural network. As the log-Mel spectrum needs to be computed for each raw audio input and its processing time increases linearly with the length of the audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to a deep neural network, which boosts performance by avoiding the additional overhead of computing the log-Mel spectrum for each audio clip. Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input.
The structure of the work is as follows. In Section 2 we discuss the previous related studies in this field. The model architectures for both the raw waveforms and the log-Mel spectrogram images are discussed in Section 3, along with a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiments and future work.
Related Work
Extraction of language dependent features like prosody and phonemes was a popular approach to classify spoken languages BIBREF8, BIBREF9, BIBREF10. Following their success in speaker verification systems, i-vectors have also been used as features in various classification networks. These approaches required significant domain knowledge BIBREF11, BIBREF9. Nowadays most of the attempts on spoken language identification rely on neural networks for meaningful feature extraction and classification BIBREF12, BIBREF13.
Revay et al. BIBREF5 used the ResNet50 BIBREF14 architecture for classifying languages by generating the log-Mel spectra of each raw audio. The model uses a cyclic learning rate where the learning rate increases and then decreases linearly. The maximum learning rate for a cycle is set by finding the optimal learning rate using the fastai BIBREF15 library. The model classified six languages – English, French, Spanish, Russian, Italian and German – achieving an accuracy of 89.0%.
Gazeau et al. BIBREF16 showed how Neural Networks, Support Vector Machines and Hidden Markov Models (HMM) can be used to identify French, English, Spanish and German. The dataset was prepared using voice samples from the Youtube News BIBREF17 and VoxForge BIBREF6 datasets. Hidden Markov Models, which convert speech into a sequence of vectors, were used to capture temporal features in speech. HMMs trained on the VoxForge BIBREF6 dataset performed best in comparison to the other models they proposed on the same VoxForge dataset. They reported an accuracy of 70.0%.
Bartz et al. BIBREF1 proposed two different hybrid Convolutional Recurrent Neural Networks for language identification. They proposed a new architecture for extracting spatial features from log-Mel spectra of raw audio using CNNs and then using RNNs for capturing temporal features to identify the language. This model achieved an accuracy of 91.0% on the Youtube News Dataset BIBREF17. In their second architecture they used the Inception-v3 BIBREF18 architecture to extract spatial features which were then used as input for bi-directional LSTMs to predict the language accurately. This model achieved an accuracy of 96.0% on four languages, which were English, German, French and Spanish. They also trained their CNN model (obtained after removing the RNN from the CRNN model) and the Inception-v3 on their dataset. However, these models were not able to achieve better results, reporting accuracies of 90% and 95%, respectively.
Kumar et al. BIBREF0 used Mel-frequency cepstral coefficients (MFCC), Perceptual linear prediction coefficients (PLP), Bark Frequency Cepstral Coefficients (BFCC) and Revised Perceptual Linear Prediction Coefficients (RPLP) as features for language identification. BFCC and RPLP are hybrid features derived using MFCC and PLP. They used two different models based on Vector Quantization (VQ) with Dynamic Time Warping (DTW) and Gaussian Mixture Model (GMM) for classification. These classification models were trained with different features. The authors were able to show that these models worked better with hybrid features (BFCC and RPLP) as compared to conventional features (MFCC and PLP). GMM combined with RPLP features gave the most promising results and achieved an accuracy of 88.8% on ten languages. They designed their own dataset comprising of ten languages being Dutch, English, French, German, Italian, Russian, Spanish, Hindi, Telegu, and Bengali.
Montavon BIBREF7 generated Mel spectrograms as features for a time-delay neural network (TDNN). This network had two-dimensional convolutional layers for feature extraction. An elaborate analysis of how deep architectures outperform their shallow counterparts is presented in this research. The difficulties in classifying perceptually similar languages like German and English were also put forward in this work. It is mentioned that the proposed approach is less robust to new speakers present in the test dataset. This method was able to achieve an accuracy of 91.2% on a dataset comprising 3 languages – English, French and German.
In Table TABREF1, we summarize the quantitative results of the above previous studies. It includes the model basis, feature description, languages classified and the used dataset along with accuracy obtained. The table also lists the overall results of our proposed models (at the top). The languages used by various authors along with their acronyms are English (En), Spanish (Es), French (Fr), German (De), Russian (Ru), Italian (It), Bengali (Ben), Hindi (Hi) and Telegu (Tel).
Proposed Method ::: Motivations
Several state-of-the-art results on various audio classification tasks have been obtained by using log-Mel spectrograms of raw audio, as features BIBREF19. Convolutional Neural Networks have demonstrated an excellent performance gain in classification of these features BIBREF20, BIBREF21 against other machine learning techniques. It has been shown that using attention layers with ConvNets further enhanced their performance BIBREF22. This motivated us to develop a CNN-based architecture with attention since this approach hasn’t been applied to the task of language identification before.
Recently, using raw audio waveform as features to neural networks has become a popular approach in audio classification BIBREF23, BIBREF22. Raw waveforms have several artifacts which are not effectively captured by various conventional feature extraction techniques like Mel Frequency Cepstral Coefficients (MFCC), Constant Q Transform (CQT), Fast Fourier Transform (FFT), etc.
Audio files are sequences of spoken words and hence also have temporal features. A CNN is better at capturing spatial features, while RNNs are better at capturing temporal features, as demonstrated by Bartz et al. BIBREF1 on audio files. Therefore, we combined both of these to make a CRNN model.
We propose three types of models to tackle the problem with different approaches, discussed as follows.
Proposed Method ::: Description of Features
As an average human's voice is around 300 Hz and, according to the Nyquist-Shannon sampling theorem, all the useful frequencies (0-300 Hz) are preserved when sampling at 8 kHz, we sampled the raw audio files from all six languages at 8 kHz.
The average length of the audio files in this dataset was about 10.4 seconds and the standard deviation was 2.3 seconds. For our experiments, the audio length was set to 10 seconds. If an audio file was shorter than 10 seconds, then the data was repeated and concatenated. If it was longer, then the data was truncated.
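A minimal sketch of this preprocessing step is shown below, assuming librosa and NumPy are available; the 8 kHz rate and 10-second window follow the text, while the file-path handling is illustrative.

```python
import numpy as np
import librosa

SR = 8000            # sampling rate used throughout the paper
CLIP_LEN = 10 * SR   # fixed 10-second window

def load_fixed_length(path):
    """Load raw audio at 8 kHz and force it to exactly 10 seconds.

    Shorter clips are repeated and concatenated; longer clips are truncated,
    as described above.
    """
    audio, _ = librosa.load(path, sr=SR, mono=True)
    if len(audio) < CLIP_LEN:
        repeats = int(np.ceil(CLIP_LEN / len(audio)))
        audio = np.tile(audio, repeats)
    return audio[:CLIP_LEN]
```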
Proposed Method ::: Model Description
We applied the following design principles to all our models:
Every convolutional layer is always followed by an appropriate max pooling layer. This helps in containing the explosion of parameters and keeps the model small and nimble.
Convolutional blocks are defined as individual blocks with multiple pairs of one convolutional layer and one max pooling layer. Each convolutional block is preceded or succeeded by a convolutional layer.
Batch Normalization and Rectified linear unit activations were applied after each convolutional layer. Batch Normalization helps speed up convergence during training of a neural network.
The model ends with a dense layer which acts as the final output layer.
Proposed Method ::: Model Details: 1D ConvNet
As the sampling rate is 8 kHz and the audio length is 10 s, the input to the models is raw audio with an input size of (batch size, 1, 80000). In Table TABREF10, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
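A minimal PyTorch sketch of such a 1D-ConvNet front-end is given below. It follows the design principles stated above (Conv1d → BatchNorm → ReLU → MaxPool blocks ending in a dense layer) and the stated input shape; the number of blocks and the pooling sizes are illustrative rather than the paper's tuned configuration.

```python
import torch
import torch.nn as nn

def conv_block_1d(in_ch, out_ch, kernel, pool):
    # Conv1d -> BatchNorm -> ReLU -> MaxPool, as per the design principles above.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=kernel, padding=kernel // 2),
        nn.BatchNorm1d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool1d(pool),
    )

class RawAudio1DConvNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            conv_block_1d(1, 128, kernel=9, pool=4),    # 128 filters, kernel 9 (best values reported below)
            conv_block_1d(128, 128, kernel=9, pool=4),
            conv_block_1d(128, 256, kernel=9, pool=4),  # illustrative deeper block
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(256, n_classes)     # dense output layer

    def forward(self, x):                # x: (batch, 1, 80000)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

logits = RawAudio1DConvNet()(torch.randn(4, 1, 80000))  # -> shape (4, 6)
```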
Proposed Method ::: Model Details: 1D ConvNet ::: Hyperparameter Optimization:
Tuning hyperparameters is a cumbersome process as the hyperparameter space expands exponentially with the number of parameters, therefore efficient exploration is needed for any feasible study. We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space (a minimal sketch of this search setup is given after the list below). In Fig. FIGREF12, the various hyperparameters we considered are plotted against the validation accuracy as violin plots. Our observations for each hyperparameter are summarized below:
Number of filters in first layer: We observe that having 128 filters gives better results as compared to other filter values of 32 and 64 in the first layer. A higher number of filters in the first layer of network is able to preserve most of the characteristics of input.
Kernel Size: We varied the receptive fields of convolutional layers by choosing the kernel size from among the set of {3, 5, 7, 9}. We observe that a kernel size of 9 gives better accuracy at the cost of increased computation time and larger number of parameters. A large kernel size is able to capture longer patterns in its input due to bigger receptive power which results in an improved accuracy.
Dropout: Dropout randomly turns-off (sets to 0) various individual nodes during training of the network. In a deep CNN it is important that nodes do not develop a co-dependency amongst each other during training in order to prevent overfitting on training data BIBREF25. Dropout rate of $0.1$ works well for our model. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: We chose batch sizes from amongst the set {32, 64, 128}. There is more noise while calculating error in a smaller batch size as compared to a larger one. This tends to have a regularizing effect during training of the network and hence gives better results. Thus, batch size of 32 works best for the model.
Layers in Convolutional block 1 and 2: We varied the number of layers in both the convolutional blocks. If the number of layers is low, then the network does not have enough depth to capture patterns in the data whereas having large number of layers leads to overfitting on the data. In our network, two layers in the first block and one layer in the second block give optimal results.
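A minimal sketch of the random search mentioned above, using the Hyperopt library, is shown below; the search space mirrors the hyperparameters discussed in this list, and train_and_validate is a hypothetical stand-in that trains the 1D-ConvNet with the sampled settings and returns validation accuracy.

```python
from hyperopt import Trials, fmin, hp, rand

space = {
    "n_filters": hp.choice("n_filters", [32, 64, 128]),
    "kernel_size": hp.choice("kernel_size", [3, 5, 7, 9]),
    "dropout": hp.choice("dropout", [0.1, 0.25, 0.5]),
    "batch_size": hp.choice("batch_size", [32, 64, 128]),
}

def objective(params):
    # train_and_validate is assumed to build and train the model with `params`
    # and return validation accuracy; Hyperopt minimizes, so the score is negated.
    return -train_and_validate(**params)

best = fmin(fn=objective, space=space, algo=rand.suggest,
            max_evals=50, trials=Trials())
print(best)
```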
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU
The log-Mel spectrogram is the most commonly used method for converting audio into the image domain. The audio data was again sampled at 8 kHz. The input to this model was the log-Mel spectra. We generated the log-Mel spectrograms using the LibROSA BIBREF26 library. In Table TABREF16, we present a detailed layer-by-layer illustration of the model along with its hyperparameters.
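A minimal sketch of this feature extraction with LibROSA follows; the number of Mel bands is an illustrative choice, and the later resize to the 128×128 input discussed below is omitted.

```python
import numpy as np
import librosa

def log_mel_spectrogram(audio, sr=8000, n_mels=128):
    """Convert a raw waveform into a log-Mel spectrogram on the dB scale."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)   # shape: (n_mels, time_frames)
```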
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU :::
We took some specific design choices for this model, which are as follows:
We added residual connections with each convolutional layer. Residual connections in a way make the model selective of the contributing layers, determine the optimal number of layers required for training and help solve the problem of vanishing gradients. Residual connections, or skip connections, skip the training of those layers that do not contribute much to the overall outcome of the model.
We added spatial attention BIBREF27 networks to help the model in focusing on specific regions or areas in an image. Spatial attention aids learning irrespective of transformations, scaling and rotation done on the input images making the model more robust and helping it to achieve better results.
We added channel attention networks to help the model find interdependencies among the color channels of the log-Mel spectra. Channel attention adaptively assigns importance to each color channel in a deep convolutional multi-channel network. In our model we apply channel and spatial attention just before feeding the input into the bi-directional GRU. This helps the model focus on selected regions and, at the same time, find patterns among channels to better determine the language.
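A minimal PyTorch sketch of the two attention modules is shown below. It follows a common channel/spatial attention formulation (a CBAM-style design); the reduction ratio and convolution kernel size are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Pool over space, then re-weight each channel of the feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        weights = self.mlp(x.mean(dim=(2, 3)))     # (B, C)
        return x * weights.unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """Produce a single-channel mask that re-weights each spatial location."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))
```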
Proposed Method ::: Model Details: 2D ConvNet with Attention and bi-directional GRU ::: Hyperparameter Optimization:
We used the random search algorithm supported by the Hyperopt BIBREF24 library to randomly search for an optimal set of hyperparameters from a given parameter space. In Fig. FIGREF19, the various hyperparameters we tuned are plotted against the validation accuracy. Our observations for each hyperparameter are summarized below:
Filter Size: 64 filters in the first layer of network can preserve most of the characteristics of input, but increasing it to 128 is inefficient as overfitting occurs.
Kernel Size: There is a trade-off between kernel size and capturing complex non-linear features. Using a small kernel size will require more layers to capture features, whereas using a large kernel size will require fewer layers. Large kernels capture simple non-linear features, whereas a smaller kernel helps capture more complex non-linear features. However, with more layers, backpropagation necessitates a large amount of memory. We experimented with a large kernel size and gradually increased the layers in order to capture more complex features. The results are not conclusive, and thus we chose a kernel size of 7 over 3.
Dropout: Dropout rate of 0.1 works well for our data. When using a higher dropout rate the network is not able to capture the patterns in training dataset.
Batch Size: There is always a trade-off between batch size and getting accurate gradients. Using a large batch size helps the model to get more accurate gradients since the model tries to optimize gradients over a large set of images. We found that using a batch size of 128 helped the model to train faster and get better results than using a batch size less than 128.
Number of hidden units in bi-directional GRU: Varying the number of hidden units and layers in GRU helps the model to capture temporal features which can play a significant role in identifying the language correctly. The optimal number of hidden units and layers depends on the complexity of the dataset. Using less number of hidden units may capture less features whereas using large number of hidden units may be computationally expensive. In our case we found that using 1536 hidden units in a single bi-directional GRU layer leads to the best result.
Image Size: We experimented with log-Mel spectra images of sizes $64 \times 64$ and $128 \times 128$ pixels and found that our model worked best with images of size of $128 \times 128$ pixels.
We also evaluated our model on data with mixup augmentation BIBREF28. Mixup is a data augmentation technique that also acts as a regularizer and prevents overfitting. Instead of directly taking images from the training dataset as input, mixup takes a linear combination of any two random images and feeds it as input, i.e. a mixed sample $I_{mix} = \alpha I_1 + (1 - \alpha ) I_2$ is prepared,
where $\alpha \in [0, 1]$ is a random variable drawn from a $\beta $-distribution and $I_1$, $I_2$ are two randomly selected training images.
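A minimal NumPy sketch of mixup on a batch of spectrogram images is given below; mixing the one-hot labels with the same coefficient follows the standard mixup formulation and is an assumption here, as is the β-distribution parameter.

```python
import numpy as np

def mixup_batch(images, labels, beta=0.2):
    """Return a mixed batch where each sample is a convex combination of two random samples.

    images: (N, H, W, C) log-Mel spectra; labels: (N, n_classes) one-hot vectors.
    """
    alpha = np.random.beta(beta, beta)            # alpha in [0, 1]
    perm = np.random.permutation(len(images))
    mixed_images = alpha * images + (1.0 - alpha) * images[perm]
    mixed_labels = alpha * labels + (1.0 - alpha) * labels[perm]
    return mixed_images, mixed_labels
```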
Proposed Method ::: Model details: 2D-ConvNet
This model is similar to the 2D-ConvNet with attention and bi-directional GRU described in Section SECREF13, except that it lacks the skip connections, attention layers, bi-directional GRU and embedding layer incorporated in the previous model.
Proposed Method ::: Dataset
We classified six languages (English, French, German, Spanish, Russian and Italian) from the VoxForge BIBREF6 dataset. VoxForge is an open-source speech corpus which primarily consists of samples recorded and submitted by users using their own microphone. This results in significant variation of speech quality between samples making it more representative of real world scenarios.
Our dataset consists of 1,500 samples for each of the six languages. Out of the 1,500 samples for each language, 1,200 were randomly selected as the training set for that language and the remaining 300 as the validation set, using k-fold cross-validation. To sum up, we trained our model on 7,200 samples and validated it on 1,800 samples spanning the six languages. The results are discussed in the next section.
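A minimal sketch of this split with scikit-learn follows; file_paths and language_labels are hypothetical arrays holding the sample paths and their language ids, and five folds reproduce the 1,200/300 split described above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(file_paths, language_labels):
    train_files = np.asarray(file_paths)[train_idx]   # 1,200 samples per language
    val_files = np.asarray(file_paths)[val_idx]       # 300 samples per language
    # train the model on train_files and validate on val_files here
```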
Results and Discussion
This paper discusses two end-to-end approaches which achieve state-of-the-art results in both the audio domain and the image domain on the VoxForge dataset BIBREF6. In Table TABREF25, we present the classification accuracies of the two models, with and without mixup, for six and four languages.
In the audio domain (using raw audio waveform as input), 1D-ConvNet achieved a mean accuracy of 93.7% with a standard deviation of 0.3% on running k-fold cross validation. In Fig FIGREF27 (a) we present the confusion matrix for the 1D-ConvNet model.
In the image domain (obtained by taking log-Mel spectra of raw audio), 2D-ConvNet with 2D attention (channel and spatial attention) and bi-directional GRU achieved a mean accuracy of 95.0% with a standard deviation of 1.2% for six languages. This model performed better when mixup regularization was applied. 2D-ConvNet achieved a mean accuracy of 95.4% with standard deviation of 0.6% on running k-fold cross validation for six languages when mixup was applied. In Fig FIGREF27 (b) we present the confusion matrix for the 2D-ConvNet model. 2D attention models focused on the important features extracted by convolutional layers and bi-directional GRU captured the temporal features.
Results and Discussion ::: Misclassification
Several of the spoken languages in Europe belong to the Indo-European family. Within this family, the languages are divided into three phyla: Romance, Germanic and Slavic. Of the six languages that we selected, Spanish (Es), French (Fr) and Italian (It) belong to the Romance phylum, English and German belong to the Germanic phylum, and Russian belongs to the Slavic phylum. Our model also confuses languages belonging to the same phylum, which acts as a sanity check, since languages in the same phylum have many similarly pronounced words: for example, cat in English becomes Katze in German, and Ciao in Italian becomes Chao in Spanish.
Our model also confuses French (Fr) and Russian (Ru). Although these languages belong to different phyla, many words were adopted from French into Russian; for example, automate (oot-oo-mate) in French becomes ABTOMaT (aff-taa-maat) in Russian, and the two have a similar pronunciation.
Results and Discussion ::: Future Scope
The performance of raw audio waveforms as input features to ConvNet can be further improved by applying silence removal in the audio. Also, there is scope for improvement by augmenting available data through various conventional techniques like pitch shifting, adding random noise and changing speed of audio. These help in making neural networks more robust to variations which might be present in real world scenarios. There can be further exploration of various feature extraction techniques like Constant-Q transform and Fast Fourier Transform and assessment of their impact on Language Identification.
There can be further improvements in neural network architectures like concatenating the high level features obtained from 1D-ConvNet and 2D-ConvNet, before performing classification. There can be experiments using deeper networks with skip connections and Inception modules. These are known to have positively impacted the performance of Convolutional Neural Networks.
Conclusion
There are two main contributions of this paper in the domain of spoken language identification. Firstly, we presented an extensive analysis of raw audio waveforms as input features to 1D-ConvNet. We experimented with various hyperparameters in our 1D-ConvNet and evaluated their effect on validation accuracy. This method is able to bypass the computational overhead of conventional approaches which depend on generation of spectrograms as a necessary pre-processing step. We were able to achieve an accuracy of 93.7% using this technique.
Next, we discussed the enhancement in performance of 2D-ConvNet using mixup augmentation, which is a recently developed regularization technique to prevent overfitting. This approach achieved an accuracy of 95.4%. We also analysed how the attention mechanism and recurrent layers impact the performance of networks. This approach achieved an accuracy of 95.0%. | Answer with content missing: (Table 1)
Previous state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages) |
591231d75ff492160958f8aa1e6bfcbbcd85a776 | 591231d75ff492160958f8aa1e6bfcbbcd85a776_0 | Q: Which vision-based approaches does this approach outperform?
Text: Introduction
The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0 , multi-lingual sentiment analysis BIBREF1 , machine translation BIBREF2 and so on. Although building bilingual lexicon has achieved success with parallel sentences in resource-rich languages BIBREF2 , the parallel data is insufficient or even unavailable especially for resource-scarce languages and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicon from these non-parallel data without supervision.
There are mainly two types of mono-lingual approaches to build bilingual dictionaries in recent works. The first is purely text-based, which explores the structural similarity between different linguistic spaces. The most popular approach among them is to linearly map source word embeddings into the target word embedding space BIBREF3, BIBREF4. The second type utilizes vision as a bridge to connect different languages BIBREF5, BIBREF6, BIBREF7. It assumes that words correlating to similar images should share similar semantic meanings. So previous vision-based methods search images with multi-lingual words and translate words according to similarities of visual features extracted from the corresponding images. It has been proved that visually grounded word representations improve the semantic quality of the words BIBREF8.
However, previous vision-based methods suffer from two limitations for bilingual lexicon induction. Firstly, the accurate translation performance is confined to concrete visual-relevant words such as nouns and adjectives as shown in Figure SECREF2 . For words without high-quality visual groundings, previous methods would generate poor translations BIBREF7 . Secondly, previous works extract visual features from the whole image to represent words and thus require object-centered images in order to obtain reliable visual groundings. However, common images usually contain multiple objects or scenes, and the word might only be grounded to part of the image, therefore the global visual features will be quite noisy to represent the word.
In this paper, we address the two limitations via learning from mono-lingual multimodal data with both sentence and visual context (e.g., image and caption data) to induce bilingual lexicon. Such multimodal data is also easily obtained for different languages on the Internet BIBREF9 . We propose a multi-lingual image caption model trained on multiple mono-lingual image caption data, which is able to induce two types of word representations for different languages in the joint space. The first is the linguistic feature learned from the sentence context with visual semantic constraints, so that it is able to generate more accurate translations for words that are less visual-relevant. The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word, so that the visual representation of words will be more salient than previous global visual features. The two representations are complementary and can be combined to induce better bilingual word translation.
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:
Related Work
The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.
There are mainly two types of mono-lingual approaches for bilingual lexicon induction: text-based and vision-based methods. The text-based methods purely exploit linguistic information to translate words. The early works BIBREF10, BIBREF11 utilize word co-occurrences in different languages as a clue for word alignment. With the improvement in word representation based on deep learning, BIBREF3 finds structural similarity between the deep-learned word embeddings of different languages, and employs a parallel vocabulary to learn a linear mapping from the source to the target word embeddings. BIBREF12 improves the translation performance by adding an orthogonality constraint to the mapping. BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds. However, these models require a seed lexicon as the starting point to train the bilingual mapping. Recently, BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon.
The vision-based methods exploit images to connect different languages, which assume that words corresponding to similar images are semantically alike. BIBREF5 collects images with labeled words in different languages to learn word translation with image as pivot. BIBREF6 improves the visual-based word translation performance via using more powerful visual representations: the CNN-based BIBREF14 features. The above works mainly focus on the translation of nouns and are limited in the number of collected languages. The recent work BIBREF7 constructs the current largest (with respect to the number of language pairs and types of part-of-speech) multimodal word translation dataset, MMID. They show that concrete words are easiest for vision-based translation methods while others are much less accurate. In our work, we alleviate the limitations of previous vision-based methods via exploring images and their captions rather than images with unstructured tags to connect different languages.
Image captioning has received more and more research attention. Most image caption works focus on English caption generation BIBREF15, BIBREF16, while there are limited works considering multi-lingual caption generation. The recent WMT workshop BIBREF17 has proposed a subtask of multi-lingual caption generation, where different strategies such as multi-task captioning and source-to-target translation followed by captioning have been proposed to generate captions in target languages. Our work proposes a multi-lingual image caption model that shares part of the parameters across different languages so that the languages benefit from each other.
Unsupervised Bilingual Lexicon Induction
Our goal is to induce bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model whose objectives for bilingual lexicon induction are two folds: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visual-grounded information, which are complementary for bilingual lexicon induction.
Multi-lingual Image Caption Model
Suppose we have mono-lingual image caption datasets INLINEFORM0 in the source language and INLINEFORM1 in the target language. The images INLINEFORM2 in INLINEFORM3 and INLINEFORM4 do not necessarily overlap, but cover overlapped object or scene classes which is the basic assumption of vision-based methods. For notation simplicity, we omit the superscript INLINEFORM5 for the data sample. Each image caption INLINEFORM6 and INLINEFORM7 is composed of word sequences INLINEFORM8 and INLINEFORM9 respectively, where INLINEFORM10 is the sentence length.
The proposed multi-lingual image caption model aims to generate sentences in different languages to describe the image content, which connects the vision and multi-lingual sentences. Figure FIGREF15 illustrates the framework of the caption model, which consists of three parts: the image encoder, word embedding module and language decoder.
The image encoder encodes the image into the shared visual space. We apply the Resnet152 BIBREF18 as our encoder INLINEFORM0 , which produces INLINEFORM1 vectors corresponding to different spatial locations in the image: DISPLAYFORM0
where INLINEFORM0 . The parameter INLINEFORM1 of the encoder is shared for different languages in order to encode all the images in the same visual space.
The word embedding module maps the one-hot word representation in each language into low-dimensional distributional embeddings: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 is the word embedding matrix for the source and target languages respectively. INLINEFORM2 and INLINEFORM3 are the vocabulary size of the two languages.
The decoder then generates words step by step, conditioning on the encoded image feature and the previously generated words. The probability of generating INLINEFORM0 in the source language is as follows: DISPLAYFORM0
where INLINEFORM0 is the hidden state of the decoder at step INLINEFORM1 , which is functioned by LSTM BIBREF19 : DISPLAYFORM0
The INLINEFORM0 is the dynamically located contextual image feature to generate word INLINEFORM1 via attention mechanism, which is the weighted sum of INLINEFORM2 computed by DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is a fully connected neural network. The parameter INLINEFORM1 in the decoder includes all the weights in the LSTM and the attention network INLINEFORM2 .
Similarly, INLINEFORM0 is the probability of generating INLINEFORM1 in the target language, which shares INLINEFORM2 with the source language. By sharing the same parameters across different languages in the encoder and decoder, both the visual features and the learned word embeddings for different languages are enforced to project into a joint semantic space. Notably, the proposed multi-lingual parameter sharing strategy is not constrained to the presented image captioning model, but can be applied in various image captioning models such as the show-tell model BIBREF15.
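A minimal PyTorch sketch of this parameter-sharing scheme follows: a single decoder shared across languages, with a separate word embedding matrix per language (the language-specific embedding matrices in the text). The per-language output projections and the 2048-dimensional attended image context (ResNet152 convolutional outputs) are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class MultilingualCaptioner(nn.Module):
    def __init__(self, vocab_sizes, emb_dim=512, hid_dim=512, img_dim=2048):
        super().__init__()
        # Per-language word embeddings ...
        self.embed = nn.ModuleDict({lang: nn.Embedding(v, emb_dim)
                                    for lang, v in vocab_sizes.items()})
        # ... but a single decoder shared by all languages.
        self.decoder = nn.LSTMCell(emb_dim + img_dim, hid_dim)
        # Output projections per language (an implementation choice in this sketch).
        self.out = nn.ModuleDict({lang: nn.Linear(hid_dim, v)
                                  for lang, v in vocab_sizes.items()})

    def step(self, lang, prev_word, context, state=None):
        """One decoding step for language `lang`, given the attended image context."""
        emb = self.embed[lang](prev_word)                         # (B, emb_dim)
        h, c = self.decoder(torch.cat([emb, context], dim=-1), state)
        return self.out[lang](h), (h, c)

model = MultilingualCaptioner({"en": 10000, "de": 12000})
```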
We use maximum likelihood as objective function to train the multi-lingual caption model, which maximizes the log-probability of the ground-truth captions: DISPLAYFORM0
Visual-guided Word Representation
The proposed multi-lingual caption model can induce similarities of words in different languages from two aspects: the linguistic similarity and the visual similarity. In the following, we discuss the two types of similarity and then construct the source and target word representations.
The linguistic similarity is reflected from the learned word embeddings INLINEFORM0 and INLINEFORM1 in the multi-lingual caption model. As shown in previous works BIBREF20 , word embeddings learned from the language contexts can capture syntactic and semantic regularities in the language. However, if the word embeddings of different languages are trained independently, they are not in the same linguistic space and we cannot compute similarities directly. In our multi-lingual caption model, since images in INLINEFORM2 and INLINEFORM3 share the same visual space, the features of sentence INLINEFORM4 and INLINEFORM5 belonging to similar images are bound to be close in the same space with the visual constraints. Meanwhile, the language decoder is also shared, which enforces the word embeddings across languages into the same semantic space in order to generate similar sentence features. Therefore, INLINEFORM6 and INLINEFORM7 not only encode the linguistic information of different languages but also share the embedding space which enables direct cross-lingual similarity comparison. We refer the linguistic features of source and target words INLINEFORM8 and INLINEFORM9 as INLINEFORM10 and INLINEFORM11 respectively.
For the visual similarity, the multi-lingual caption model locates the image region to generate each word base on the spatial attention in Eq ( EQREF13 ), which can be used to calculate the localized visual representation of the word. However, since the attention is computed before word generation, the localization performance can be less accurate. It also cannot be generalized to image captioning models without spatial attention. Therefore, inspired by BIBREF21 , where they occlude over regions of the image to observe the change of classification probabilities, we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence. Algorithm SECREF16 presents the procedure of word localization and the grounded visual feature generation. Please note that such visual-grounding is learned unsupervisedly from the image caption data. Therefore, every word can be represented as a set of grounded visual features (the set size equals to the word occurrence number in the dataset). We refer the localized visual feature set for source word INLINEFORM0 as INLINEFORM1 , for target word INLINEFORM2 as INLINEFORM3 .
Algorithm SECREF16 (Generating localized visual features). Input: encoded image features INLINEFORM0 and sentence INLINEFORM1. Output: localized visual features for each word INLINEFORM2. For each INLINEFORM3: compute INLINEFORM4 according to Eq ( EQREF10 ), then INLINEFORM5, INLINEFORM6, INLINEFORM7.
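A minimal sketch of this occlusion-style grounding procedure follows; word_log_prob is a hypothetical helper that re-runs the trained captioner on a restricted set of spatial features and returns the log-probability of the word at position t.

```python
import numpy as np

def localized_feature(region_feats, sentence, t, word_log_prob):
    """Pick, for word sentence[t], the spatial region whose features best explain it.

    region_feats: (R, D) array of encoder outputs for R spatial locations.
    word_log_prob: callable(features, sentence, t) -> log p(sentence[t] | features, sentence[:t]).
    """
    scores = [word_log_prob(region_feats[r:r + 1], sentence, t)
              for r in range(len(region_feats))]
    best = int(np.argmax(scores))       # region giving the highest word probability
    return region_feats[best]           # grounded visual feature for this occurrence
```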
Word Translation Prediction
Since the word representations of the source and target language are in the same space, we could directly compute the similarities across languages. We apply l2-normalization on the word representations and measure with the cosine similarity. For linguistic features, the similarity is measured as: DISPLAYFORM0
However, there are a set of visual features associated with one word, so the visual similarity measurement between two words is required to take two sets of visual features as input. We aggregate the visual features in a single representation and then compute cosine similarity instead of point-wise similarities among two sets: DISPLAYFORM0
The reasons for performing aggregation are two-fold. Firstly, in our approach the number of visual features is proportional to the word occurrence count instead of being a fixed number as in BIBREF6, BIBREF7, so the computation cost for frequent words would be much higher without aggregation. Secondly, the aggregation helps to reduce noise, which is especially important for abstract words. Abstract words such as `event' are more visually diverse, but the overall styles of multiple images can reflect their visual semantics.
Due to the complementary characteristics of the two features, we combine them to predict the word translation. The translated word for INLINEFORM0 is DISPLAYFORM0
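A minimal NumPy sketch of this prediction step follows. Mean pooling is used for the visual aggregation and an equal-weight sum for the fusion, both of which are assumptions here since the exact fusion equation is elided above; feats is a hypothetical dictionary holding the induced features.

```python
import numpy as np

def l2norm(v):
    return v / (np.linalg.norm(v) + 1e-12)

def translation_score(x_ling, y_ling, x_vis_set, y_vis_set):
    """Fused similarity: linguistic cosine plus cosine of aggregated visual features."""
    s_ling = float(np.dot(l2norm(x_ling), l2norm(y_ling)))
    s_vis = float(np.dot(l2norm(np.mean(x_vis_set, axis=0)),
                         l2norm(np.mean(y_vis_set, axis=0))))
    return s_ling + s_vis

def translate(x_word, target_words, feats):
    """Return the target word with the highest fused similarity to the source word."""
    return max(target_words,
               key=lambda y: translation_score(feats["ling"][x_word], feats["ling"][y],
                                               feats["vis"][x_word], feats["vis"][y]))
```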
Datasets
For image captioning, we utilize the multi30k BIBREF22 , COCO BIBREF23 and STAIR BIBREF24 datasets. The multi30k dataset contains 30k images and annotations under two tasks. In task 1, each image is annotated with one English description which is then translated into German and French. In task 2, the image is independently annotated with 5 descriptions in English and German respectively. For German and English languages, we utilize annotations in task 2. For the French language, we can only employ French descriptions in task 1, so the training size for French is less than the other two languages. The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese. Since the images in the wild for different languages might not overlap, we randomly split the image set into two disjoint parts of equal size. The images in each part only contain the mono-lingual captions. We use Moses SMT Toolkit to tokenize sentences and select words occurring more than five times in our vocabulary for each language. Table TABREF21 summarizes the statistics of caption datasets.
For bilingual lexicon induction, we use two visual datasets: BERGSMA and MMID. The BERGSMA dataset BIBREF5 consists of 500 German-English word translation pairs. Each word is associated with no more than 20 images. The words in BERGSMA dataset are all nouns. The MMID dataset BIBREF7 covers a larger variety of words and languages, including 9,808 German-English pairs and 9,887 French-English pairs. The source word can be mapped to multiple target words in their dictionary. Each word is associated with no more than 100 retrieved images. Since both these image datasets do not contain Japanese language, we download the Japanese-to-English dictionary online. We select words in each dataset that overlap with our caption vocabulary, which results in 230 German-English pairs in BERGSMA dataset, 1,311 German-English pairs and 1,217 French-English pairs in MMID dataset, and 2,408 Japanese-English pairs.
Experimental Setup
For the multi-lingual caption model, we set the word embedding size and the hidden size of LSTM as 512. Adam algorithm is applied to optimize the model with learning rate of 0.0001 and batch size of 128. The caption model is trained up to 100 epochs and the best model is selected according to caption performance on the validation set.
We compare our approach with two baseline vision-based methods proposed in BIBREF6, BIBREF7, which measure the similarity of two sets of global visual features for bilingual lexicon induction (both measures are sketched in code after the list):
CNN-mean: taking the similarity score of the averaged feature of the two image sets.
CNN-avgmax: taking the average of the maximum similarity scores of two image sets.
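A minimal NumPy sketch of the two baseline similarity measures, as described above; each argument is a set of global CNN features for the images retrieved for one word.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cnn_mean(src_imgs, tgt_imgs):
    """Similarity of the averaged CNN features of the two image sets."""
    return cos(np.mean(src_imgs, axis=0), np.mean(tgt_imgs, axis=0))

def cnn_avgmax(src_imgs, tgt_imgs):
    """Average, over source images, of the maximum similarity to any target image."""
    return float(np.mean([max(cos(s, t) for t in tgt_imgs) for s in src_imgs]))
```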
We evaluate the word translation performance using MRR (mean-reciprocal rank) as follows: DISPLAYFORM0
where INLINEFORM0 is the groundtruth translated words for source word INLINEFORM1 , and INLINEFORM2 denotes the rank of groundtruth word INLINEFORM3 in the rank list of translation candidates. We also measure the precision at K (P@K) score, which is the proportion of source words whose groundtruth translations rank in the top K words. We set K as 1, 5, 10 and 20.
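A minimal sketch of these metrics follows; since the exact MRR formula is elided above, the common convention of scoring each source word by its best-ranked groundtruth translation is assumed here.

```python
import numpy as np

def mrr_and_p_at_k(ranked, gold, ks=(1, 5, 10, 20)):
    """ranked: {src: [target candidates, best first]}; gold: {src: set of correct translations}."""
    reciprocal_ranks, hits = [], {k: 0 for k in ks}
    for src, candidates in ranked.items():
        positions = [candidates.index(g) + 1 for g in gold[src] if g in candidates]
        best = min(positions) if positions else float("inf")
        reciprocal_ranks.append(0.0 if best == float("inf") else 1.0 / best)
        for k in ks:
            hits[k] += int(best <= k)
    n = len(ranked)
    return float(np.mean(reciprocal_ranks)), {k: hits[k] / n for k in ks}
```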
Evaluation of Multi-lingual Image Caption
We first evaluate the captioning performance of the proposed multi-lingual caption model, which serves as the foundation stone for our bilingual lexicon induction method.
We compare the proposed multi-lingual caption model with the mono-lingual model, which consists of the same model structure but is trained separately for each language. Table TABREF22 presents the captioning results on the multi30k dataset, where all the languages are from the Latin family. The multi-lingual caption model achieves comparable performance with the mono-lingual model for data-sufficient languages such as English and German, and significantly outperforms the mono-lingual model for the data-scarce language French, with an absolute gain of 3.22 on the CIDEr metric. For languages with distinctive grammar structures such as English and Japanese, the multi-lingual model is also on par with the mono-lingual model, as shown in Table TABREF29. To be noted, the multi-lingual model contains about half as many parameters as the independent mono-lingual models, which makes it more computationally efficient.
We visualize the learned visual groundings from the multi-lingual caption model in Figure FIGREF32. Though there are certain mistakes, such as `musicians' in the bottom image, most of the words are grounded well to correct objects or scenes, and thus can obtain more salient visual features.
Evaluation of Bilingual Lexicon Induction
We induce the linguistic features and localized visual features from the multi-lingual caption model for word translation from the source to target languages. Table TABREF30 presents the German-to-English word translation performance of the proposed features. In the BERGSMA dataset, the visual features achieve better translation results than the linguistic features, while they are inferior to the linguistic features in the MMID dataset. This is because the vocabulary in the BERGSMA dataset mainly consists of nouns, while the parts-of-speech are more diverse in the MMID dataset. The visual features contribute most to translating concrete noun words, while the linguistic features are beneficial for other, more abstract words. The fusion of the two features performs best for word translation, which demonstrates that the two features are complementary with each other.
We also compare our approach with previous state-of-the-art vision-based methods in Table TABREF30. Since our visual feature is an averaged representation, it is fair to compare with the CNN-mean baseline method, where the only difference lies in the feature rather than the similarity measurement. The localized features perform substantially better than the global image features, which demonstrates the effectiveness of the attention learned from the caption model. The combination of visual and linguistic features also significantly improves over the state-of-the-art vision-based CNN-avgmax method, with 11.6% and 6.7% absolute gains on P@1 on the BERGSMA and MMID datasets respectively.
In Figure FIGREF36 , we present the word translation performance for different POS (part-of-speech) labels. We assign the POS label for words in different languages according to their translations in English. We can see that the previous state-of-the-art vision-based approach contributes mostly to noun words which are most visual-relevant, while generates poor translations for other part-of-speech words. Our approach, however, substantially improves the translation performance for all part-of-speech classes. For concrete words such as nouns and adjectives, the localized visual features produce better representation than previous global visual features; and for other part-of-speech words, the linguistic features, which are learned with sentence context, are effective to complement the visual features. The fusion of the linguistic and localized visual features in our approach leads to significant performance improvement over the state-of-the-art baseline method for all types of POS classes.
Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34 . The visual-relevant concrete words are easier to translate such as `phone' and `red'. But our approach still generates reasonable results for abstract words such as `area' and functional words such as `for' due to the fusion of visual and sentence contexts.
We also evaluate the influence of different image captioning structures on the bilingual lexicon induction. We compare our attention model (`attn') with the vanilla show-tell model (`mp') BIBREF15 , which applies mean pooling over spatial image features to generate captions and achieves inferior caption performance to the attention model. Table TABREF35 shows the word translation performance of the two caption models. The attention model with better caption performance also induces better linguistic and localized visual features for bilingual lexicon induction. Nevertheless, the show-tell model still outperforms the previous vision-based methods in Table TABREF30 .
Generalization to Diverse Language Pairs
Besides German-to-English word translation, we extend our approach to other languages, including French and Japanese, the latter being more distant from English.
The French-to-English word translation performance is presented in Table TABREF39. To be noted, the training data for the French captions is five times smaller than for the German captions, which makes the French-to-English word translation performance less competitive than German-to-English. But similarly, the fusion of linguistic and visual features achieves the best performance, boosting the baseline methods with 4.2% relative gains on the MRR metric and 17.4% relative improvement on the P@20 metric.
Table TABREF40 shows the Japanese-to-English word translation performance. Since the language structures of Japanese and English are quite different, the linguistic features learned from the multi-lingual caption model are less effective but still can benefit the visual features to improve the translation quality. The results on multiple diverse language pairs further demonstrate the generalization of our approach for different languages.
Conclusion
In this paper, we address the problem of bilingual lexicon induction without reliance on parallel corpora. Based on the experience that we humans can understand words better when they are within the context and can learn word translations with external world (e.g. images) as pivot, we propose a new vision-based approach to induce bilingual lexicon with images and their associated sentences. We build a multi-lingual caption model from multiple mono-lingual multimodal data to map words in different languages into joint spaces. Two types of word representation, linguistic features and localized visual features, are induced from the caption model. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which leads to significant performance improvement over the state-of-the-art vision-based approaches for all types of part-of-speech. In the future, we will further expand the vision-pivot approaches for zero-resource machine translation without parallel sentences.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant No. 61772535, National Key Research and Development Plan under Grant No. 2016YFB1001202 and Research Foundation of Beijing Municipal Science & Technology Commission under Grant No. Z181100008918002. | CNN-mean, CNN-avgmax |
9e805020132d950b54531b1a2620f61552f06114 | 9e805020132d950b54531b1a2620f61552f06114_0 | Q: What baseline is used for the experimental setup?
Text: Introduction
The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0 , multi-lingual sentiment analysis BIBREF1 , machine translation BIBREF2 and so on. Although building bilingual lexicon has achieved success with parallel sentences in resource-rich languages BIBREF2 , the parallel data is insufficient or even unavailable especially for resource-scarce languages and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicon from these non-parallel data without supervision.
There are mainly two types of mono-lingual approaches to build bilingual dictionaries in recent works. The first is purely text-based, which explores the structural similarity between different linguistic spaces. The most popular approach among them is to linearly map source word embeddings into the target word embedding space BIBREF3, BIBREF4. The second type utilizes vision as a bridge to connect different languages BIBREF5, BIBREF6, BIBREF7. It assumes that words correlating to similar images should share similar semantic meanings. So previous vision-based methods search images with multi-lingual words and translate words according to similarities of visual features extracted from the corresponding images. It has been proved that visually grounded word representations improve the semantic quality of the words BIBREF8.
However, previous vision-based methods suffer from two limitations for bilingual lexicon induction. Firstly, the accurate translation performance is confined to concrete visual-relevant words such as nouns and adjectives as shown in Figure SECREF2 . For words without high-quality visual groundings, previous methods would generate poor translations BIBREF7 . Secondly, previous works extract visual features from the whole image to represent words and thus require object-centered images in order to obtain reliable visual groundings. However, common images usually contain multiple objects or scenes, and the word might only be grounded to part of the image, therefore the global visual features will be quite noisy to represent the word.
In this paper, we address the two limitations via learning from mono-lingual multimodal data with both sentence and visual context (e.g., image and caption data) to induce bilingual lexicon. Such multimodal data is also easily obtained for different languages on the Internet BIBREF9 . We propose a multi-lingual image caption model trained on multiple mono-lingual image caption data, which is able to induce two types of word representations for different languages in the joint space. The first is the linguistic feature learned from the sentence context with visual semantic constraints, so that it is able to generate more accurate translations for words that are less visual-relevant. The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word, so that the visual representation of words will be more salient than previous global visual features. The two representations are complementary and can be combined to induce better bilingual word translation.
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:
Related Work
The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.
There are mainly two types of mono-lingual approaches for bilingual lexicon induction: text-based and vision-based methods. The text-based methods purely exploit linguistic information to translate words. The early works BIBREF10, BIBREF11 utilize word co-occurrences in different languages as a clue for word alignment. With the improvement in word representation based on deep learning, BIBREF3 finds structural similarity between the deep-learned word embeddings of different languages, and employs a parallel vocabulary to learn a linear mapping from the source to the target word embeddings. BIBREF12 improves the translation performance by adding an orthogonality constraint to the mapping. BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds. However, these models require a seed lexicon as the starting point to train the bilingual mapping. Recently, BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon.
The vision-based methods exploit images to connect different languages, which assume that words corresponding to similar images are semantically alike. BIBREF5 collects images with labeled words in different languages to learn word translation with image as pivot. BIBREF6 improves the visual-based word translation performance via using more powerful visual representations: the CNN-based BIBREF14 features. The above works mainly focus on the translation of nouns and are limited in the number of collected languages. The recent work BIBREF7 constructs the current largest (with respect to the number of language pairs and types of part-of-speech) multimodal word translation dataset, MMID. They show that concrete words are easiest for vision-based translation methods while others are much less accurate. In our work, we alleviate the limitations of previous vision-based methods via exploring images and their captions rather than images with unstructured tags to connect different languages.
Image captioning has received more and more research attention. Most image caption works focus on English caption generation BIBREF15, BIBREF16, while there are limited works considering multi-lingual caption generation. The recent WMT workshop BIBREF17 has proposed a subtask of multi-lingual caption generation, where different strategies such as multi-task captioning and source-to-target translation followed by captioning have been proposed to generate captions in target languages. Our work proposes a multi-lingual image caption model that shares part of the parameters across different languages so that the languages benefit from each other.
Unsupervised Bilingual Lexicon Induction
Our goal is to induce bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model whose objectives for bilingual lexicon induction are two folds: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visual-grounded information, which are complementary for bilingual lexicon induction.
Multi-lingual Image Caption Model
Suppose we have a mono-lingual image caption dataset $D^s = \lbrace (I, x) \rbrace$ in the source language and $D^t = \lbrace (I, y) \rbrace$ in the target language. The images in $D^s$ and $D^t$ do not necessarily overlap, but they cover overlapping object or scene classes, which is the basic assumption of vision-based methods. For notational simplicity, we omit the sample index. Each source caption $x$ and target caption $y$ is a word sequence $x = (x_1, \ldots, x_N)$ or $y = (y_1, \ldots, y_M)$ respectively, where $N$ and $M$ denote the sentence lengths.
The proposed multi-lingual image caption model aims to generate sentences in different languages that describe the image content, thereby connecting vision with multi-lingual sentences. Figure FIGREF15 illustrates the framework of the caption model, which consists of three parts: the image encoder, the word embedding module and the language decoder.
The image encoder encodes the image into the shared visual space. We apply ResNet-152 BIBREF18 as our encoder $f_{enc}$, which produces $k$ vectors corresponding to different spatial locations in the image: $\lbrace v_1, \ldots, v_k \rbrace = f_{enc}(I)$, where $v_i \in \mathbb{R}^{d_v}$. The parameters $\theta_{enc}$ of the encoder are shared across languages so that all images are encoded in the same visual space.
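As a rough illustration (not the authors' code), such spatial features can be obtained by cutting a torchvision ResNet-152 before its pooling layer; the 7x7 grid (k = 49 locations, 2048 dimensions) and the ImageNet preprocessing values are standard assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Keep ResNet-152 up to the last convolutional block; the output is a 7x7 grid
# of 2048-d vectors, i.e. k = 49 spatial locations per image.
resnet = models.resnet152(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True
encoder = torch.nn.Sequential(*list(resnet.children())[:-2]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def spatial_features(image_path):
    """Return a (49, 2048) tensor of local visual features for one image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = encoder(img)                # (1, 2048, 7, 7)
    return fmap.flatten(2).squeeze(0).t()  # (49, 2048)
```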
The word embedding module maps the one-hot word representation of each language into low-dimensional distributional embeddings: $e(x_t) = W^s x_t$ and $e(y_t) = W^t y_t$, where $W^s \in \mathbb{R}^{d_w \times |\mathcal{V}^s|}$ and $W^t \in \mathbb{R}^{d_w \times |\mathcal{V}^t|}$ are the word embedding matrices for the source and target languages, and $|\mathcal{V}^s|$ and $|\mathcal{V}^t|$ are the vocabulary sizes of the two languages.
The decoder then generates words step by step, conditioned on the encoded image features and the previously generated words. The probability of generating $x_t$ in the source language is $p(x_t \mid x_{<t}, I) = \mathrm{softmax}(W_o h_t)$, where $h_t$ is the hidden state of the decoder at step $t$, computed by an LSTM BIBREF19: $h_t = \mathrm{LSTM}(h_{t-1}, [e(x_{t-1}); c_t])$. Here $c_t$ is the contextual image feature dynamically located for generating word $x_t$ via the attention mechanism; it is the weighted sum of the spatial features, $c_t = \sum_{i=1}^{k} \alpha_{t,i} v_i$, with attention weights $\alpha_{t,i} = \exp\big(f_{att}(h_{t-1}, v_i)\big) \big/ \sum_{j=1}^{k} \exp\big(f_{att}(h_{t-1}, v_j)\big)$, where $f_{att}$ is a fully connected neural network. The parameters $\theta_{dec}$ of the decoder include all the weights of the LSTM, the output projection $W_o$ and the attention network $f_{att}$.
Similarly, $p(y_t \mid y_{<t}, I)$ is the probability of generating $y_t$ in the target language, which shares the encoder and decoder parameters with the source language. By sharing the same parameters across languages in the encoder and decoder, both the visual features and the learned word embeddings of the different languages are forced to lie in a joint semantic space. Note that the proposed multi-lingual parameter-sharing strategy is not restricted to the presented image captioning model; it can be applied to various captioning models, such as the show-and-tell model BIBREF15.
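The parameter-sharing idea and one decoder step with soft attention can be sketched in PyTorch as follows: per-language embedding and output layers around a shared LSTM and attention network. The additive attention form, the per-language output projection and all layer sizes are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedAttnDecoder(nn.Module):
    """Per-language embeddings/outputs around a shared LSTM + attention."""

    def __init__(self, vocab_sizes, feat_dim=2048, emb_dim=512, hid_dim=512, att_dim=512):
        super().__init__()
        self.embed = nn.ModuleDict({lang: nn.Embedding(v, emb_dim)
                                    for lang, v in vocab_sizes.items()})
        self.output = nn.ModuleDict({lang: nn.Linear(hid_dim, v)
                                     for lang, v in vocab_sizes.items()})
        # components shared across languages
        self.lstm = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
        self.att_feat = nn.Linear(feat_dim, att_dim)
        self.att_hid = nn.Linear(hid_dim, att_dim)
        self.att_score = nn.Linear(att_dim, 1)

    def attend(self, feats, h):
        # feats: (B, k, feat_dim), h: (B, hid_dim)
        e = self.att_score(torch.tanh(self.att_feat(feats) +
                                      self.att_hid(h).unsqueeze(1)))
        alpha = F.softmax(e.squeeze(-1), dim=1)             # weights over k regions
        return (alpha.unsqueeze(-1) * feats).sum(1), alpha  # context c_t, attention

    def step(self, lang, prev_word, feats, state):
        h, c = state
        context, alpha = self.attend(feats, h)
        emb = self.embed[lang](prev_word)                   # (B, emb_dim)
        h, c = self.lstm(torch.cat([emb, context], dim=-1), (h, c))
        logits = self.output[lang](h)                       # scores for p(x_t | x_<t, I)
        return logits, (h, c), alpha
```

A full forward pass simply loops `step` over time, initializing the LSTM state from, e.g., the mean image feature.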
We use maximum likelihood as the objective function to train the multi-lingual caption model, maximizing the log-probability of the ground-truth captions in both languages: $\max_{\theta} \sum_{(I, x) \in D^s} \log p(x \mid I; \theta) + \sum_{(I, y) \in D^t} \log p(y \mid I; \theta)$.
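A minimal sketch of this objective as a training loss: token-level cross-entropy over the ground-truth captions, summed over one source-language and one target-language batch. The `model(feats, captions, lang)` interface is an assumption carried over from the decoder sketch above.

```python
import torch
import torch.nn.functional as F

def caption_nll(logits, captions, pad_id=0):
    """Negative log-likelihood of ground-truth captions.

    logits: (B, T-1, V) scores for positions 1..T-1; captions: (B, T) ids
    including the start token, so the targets are captions[:, 1:].
    """
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           captions[:, 1:].reshape(-1), ignore_index=pad_id)

def joint_caption_loss(model, src_batch, tgt_batch):
    """Sum of the two language-specific losses; the shared encoder/decoder
    parameters receive gradients from both terms."""
    src_logits = model(src_batch["feats"], src_batch["captions"], "src")
    tgt_logits = model(tgt_batch["feats"], tgt_batch["captions"], "tgt")
    return (caption_nll(src_logits, src_batch["captions"]) +
            caption_nll(tgt_logits, tgt_batch["captions"]))
```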
Visual-guided Word Representation
The proposed multi-lingual caption model can induce cross-lingual word similarities from two aspects: linguistic similarity and visual similarity. In the following, we discuss the two types of similarity and then construct the source and target word representations.
The linguistic similarity is reflected in the learned word embeddings $W^s$ and $W^t$ of the multi-lingual caption model. As shown in previous work BIBREF20, word embeddings learned from language context capture syntactic and semantic regularities of the language. However, if the word embeddings of different languages are trained independently, they do not lie in the same linguistic space and similarities cannot be computed directly. In our multi-lingual caption model, since the images in $D^s$ and $D^t$ share the same visual space, the features of sentences $x$ and $y$ that belong to similar images are bound to be close in that space under the visual constraints. Meanwhile, the language decoder is also shared, which forces the word embeddings of both languages into the same semantic space in order to generate similar sentence features. Therefore, $W^s$ and $W^t$ not only encode the linguistic information of each language but also share an embedding space, which enables direct cross-lingual similarity comparison. We refer to the linguistic features of a source word $w^s$ and a target word $w^t$ as $l(w^s)$ and $l(w^t)$ respectively.
For the visual similarity, the multi-lingual caption model locates the image region used to generate each word based on the spatial attention in Eq ( EQREF13 ), which could be used to compute a localized visual representation of the word. However, since the attention is computed before word generation, the localization can be inaccurate, and this strategy cannot be generalized to captioning models without spatial attention. Therefore, inspired by BIBREF21, who occlude regions of the image and observe the change in classification probabilities, we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence. Algorithm SECREF16 presents the procedure of word localization and grounded visual feature generation. Note that this visual grounding is learned from the image caption data without supervision. Every word is thus represented by a set of grounded visual features, whose size equals the number of occurrences of the word in the dataset. We refer to the localized visual feature set of a source word $w^s$ as $G(w^s)$ and of a target word $w^t$ as $G(w^t)$.
Algorithm (Generating localized visual features). Input: encoded image features $\lbrace v_1, \ldots, v_k \rbrace$ and a sentence $x = (x_1, \ldots, x_N)$. Output: a localized visual feature for each word $x_t$. For each word $x_t$ in the sentence, feed each spatial feature $v_i$ to the caption model in turn and compute the probability of generating $x_t$ according to Eq ( EQREF10 ); select the region with the highest generation probability and add its feature to the grounded feature set of $x_t$.
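A minimal sketch of this region-probing procedure is given below. The `word_logprob` callable is a hypothetical wrapper around whichever caption model is used, and the keep-the-best-region rule follows the description above; both are assumptions for illustration.

```python
import torch

def localize_words(feats, caption, word_logprob):
    """Ground each word of a caption to one spatial image feature.

    feats: (k, d) spatial features of the image.
    caption: list of token ids.
    word_logprob: callable(feats_subset, caption, t) -> float log-probability
        of caption[t] when the decoder sees only `feats_subset`.
    Returns one (region_index, feature) pair per word.
    """
    grounded = []
    for t in range(len(caption)):
        scores = torch.tensor([word_logprob(feats[i:i + 1], caption, t)
                               for i in range(feats.size(0))])
        best = int(scores.argmax())
        grounded.append((best, feats[best]))
    return grounded
```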
Word Translation Prediction
Since the word representations of the source and target languages lie in the same space, we can directly compute similarities across languages. We apply L2-normalization to the word representations and measure similarity with the cosine. For the linguistic features, the similarity is measured as $sim_l(w^s, w^t) = \cos\big(l(w^s), l(w^t)\big) = \dfrac{l(w^s) \cdot l(w^t)}{\lVert l(w^s) \rVert \, \lVert l(w^t) \rVert}$.
However, a set of visual features is associated with each word, so the visual similarity between two words has to be computed from two sets of visual features. We aggregate each set into a single representation by averaging its features and then compute the cosine similarity of the aggregated vectors, instead of point-wise similarities between the two sets: $sim_v(w^s, w^t) = \cos\big(\bar{g}(w^s), \bar{g}(w^t)\big)$, where $\bar{g}(\cdot)$ denotes the average of the localized visual features in $G(\cdot)$.
The reasons for performing aggregation are twofold. First, in our approach the number of visual features is proportional to word frequency rather than being fixed as in BIBREF6, BIBREF7, so point-wise computation would be much more expensive for frequent words. Second, aggregation helps to reduce noise, which is especially important for abstract words: a word such as `event' is visually diverse, but the overall style of multiple images can still reflect its visual semantics.
Due to the complementary characteristics of the two features, we combine them to predict the word translation. The translated word for a source word $w^s$ is $\hat{w}^t = \arg\max_{w^t} \big( sim_l(w^s, w^t) + sim_v(w^s, w^t) \big)$.
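The whole prediction step can be sketched as follows, assuming pre-computed linguistic feature matrices and per-word sets of localized visual features; the equal-weight fusion mirrors the formula above and is itself an assumption.

```python
import numpy as np

def l2norm(m):
    return m / (np.linalg.norm(m, axis=-1, keepdims=True) + 1e-8)

def rank_translations(src_ling, tgt_ling, src_vis_sets, tgt_vis_sets, topk=5):
    """Rank target candidates for every source word.

    src_ling: (n_src, d_l), tgt_ling: (n_tgt, d_l) linguistic features.
    src_vis_sets / tgt_vis_sets: lists of (m_i, d_v) arrays, one set per word.
    Returns an (n_src, topk) array of target word indices.
    """
    sim_l = l2norm(src_ling) @ l2norm(tgt_ling).T
    src_vis = l2norm(np.stack([s.mean(axis=0) for s in src_vis_sets]))
    tgt_vis = l2norm(np.stack([t.mean(axis=0) for t in tgt_vis_sets]))
    sim_v = src_vis @ tgt_vis.T
    fused = sim_l + sim_v  # equal-weight fusion of the two similarities
    return np.argsort(-fused, axis=1)[:, :topk]
```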
Datasets
For image captioning, we utilize the multi30k BIBREF22, COCO BIBREF23 and STAIR BIBREF24 datasets. The multi30k dataset contains 30k images with annotations for two tasks. In task 1, each image is annotated with one English description, which is then translated into German and French. In task 2, each image is independently annotated with 5 descriptions in English and in German. For German and English we use the task 2 annotations; for French we can only employ the French descriptions from task 1, so the training size for French is smaller than for the other two languages. The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese. Since images in the wild for different languages might not overlap, we randomly split the image set into two disjoint parts of equal size, and each part keeps only its mono-lingual captions. We use the Moses SMT Toolkit to tokenize sentences and keep words occurring more than five times in the vocabulary of each language. Table TABREF21 summarizes the statistics of the caption datasets.
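The vocabulary selection step can be sketched as below (keep words occurring more than five times after tokenization); the special-token names are illustrative.

```python
from collections import Counter

def build_vocab(tokenized_captions, min_count=6,
                specials=("<pad>", "<bos>", "<eos>", "<unk>")):
    """Map words occurring at least `min_count` times (i.e. more than five
    times) to integer ids, after reserving ids for special tokens."""
    counts = Counter(w for sent in tokenized_captions for w in sent)
    kept = sorted(w for w, c in counts.items() if c >= min_count)
    return {w: i for i, w in enumerate(list(specials) + kept)}
```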
For bilingual lexicon induction, we use two visual datasets: BERGSMA and MMID. The BERGSMA dataset BIBREF5 consists of 500 German-English word translation pairs. Each word is associated with no more than 20 images, and all words in BERGSMA are nouns. The MMID dataset BIBREF7 covers a larger variety of words and languages, including 9,808 German-English pairs and 9,887 French-English pairs. A source word can map to multiple target words in its dictionary, and each word is associated with no more than 100 retrieved images. Since neither image dataset covers Japanese, we download a Japanese-to-English dictionary online. We select the words in each dataset that overlap with our caption vocabulary, which results in 230 German-English pairs from BERGSMA, 1,311 German-English and 1,217 French-English pairs from MMID, and 2,408 Japanese-English pairs.
Experimental Setup
For the multi-lingual caption model, we set both the word embedding size and the LSTM hidden size to 512. The model is optimized with Adam using a learning rate of 0.0001 and a batch size of 128. The caption model is trained for up to 100 epochs, and the best model is selected according to caption performance on the validation set.
We compare our approach with two baseline vision-based methods proposed in BIBREF6, BIBREF7, which measure the similarity of two sets of global visual features for bilingual lexicon induction (both are sketched after the list):
CNN-mean: the similarity score between the averaged visual features of the two image sets.
CNN-avgmax: for each image in one set, take the maximum similarity score against the other set, and average these maxima.
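Both baseline measures can be sketched as follows, assuming each word is represented by a matrix of global CNN features, one row per retrieved image.

```python
import numpy as np

def cosine_matrix(a, b):
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    return a @ b.T

def cnn_mean(src_imgs, tgt_imgs):
    """Similarity of the averaged features of the two image sets."""
    return float(cosine_matrix(src_imgs.mean(0, keepdims=True),
                               tgt_imgs.mean(0, keepdims=True))[0, 0])

def cnn_avgmax(src_imgs, tgt_imgs):
    """Average, over source images, of the maximum similarity to the target set."""
    return float(cosine_matrix(src_imgs, tgt_imgs).max(axis=1).mean())
```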
We evaluate the word translation performance using the mean reciprocal rank (MRR): $\mathrm{MRR} = \dfrac{1}{|S|} \sum_{w^s \in S} \dfrac{1}{|T_{w^s}|} \sum_{w^t \in T_{w^s}} \dfrac{1}{r(w^s, w^t)}$, where $S$ is the set of evaluated source words, $T_{w^s}$ is the set of ground-truth translations of the source word $w^s$, and $r(w^s, w^t)$ denotes the rank of the ground-truth word $w^t$ in the ranked list of translation candidates for $w^s$. We also measure the precision at K (P@K), the proportion of source words whose ground-truth translations rank within the top K candidates. We set K to 1, 5, 10 and 20.
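A sketch of the evaluation, assuming a ranked candidate list per source word and a set of ground-truth translations; the per-word averaging over multiple ground truths follows the MRR formula reconstructed above.

```python
import numpy as np

def evaluate_ranking(ranked, gold, ks=(1, 5, 10, 20)):
    """Compute MRR and P@K for word translation.

    ranked: dict source word -> list of candidate target words, best first.
    gold: dict source word -> set of ground-truth target words.
    """
    rr, hits = [], {k: 0 for k in ks}
    for src, candidates in ranked.items():
        ranks = [candidates.index(y) + 1 for y in gold[src] if y in candidates]
        rr.append(np.mean([1.0 / r for r in ranks]) if ranks else 0.0)
        for k in ks:
            hits[k] += int(any(r <= k for r in ranks))
    n = len(ranked)
    return {"MRR": float(np.mean(rr)),
            **{f"P@{k}": hits[k] / n for k in ks}}
```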
Evaluation of Multi-lingual Image Caption
We first evaluate the captioning performance of the proposed multi-lingual caption model, which serves as the foundation of our bilingual lexicon induction method.
We compare the proposed multi-lingual caption model with mono-lingual models that have the same structure but are trained separately for each language. Table TABREF22 presents the captioning results on the multi30k dataset, where all languages belong to the Latin family. The multi-lingual caption model achieves performance comparable to the mono-lingual models for data-sufficient languages such as English and German, and significantly outperforms the mono-lingual model for the data-scarce language French, with an absolute gain of 3.22 on the CIDEr metric. For language pairs with distinctive grammatical structures, such as English and Japanese, the multi-lingual model is also on par with the mono-lingual models, as shown in Table TABREF29. Notably, the multi-lingual model contains roughly half as many parameters as the independent mono-lingual models combined, and is therefore more computationally efficient.
We visualize the visual groundings learned by the multi-lingual caption model in Figure FIGREF32. Although there are some mistakes, such as `musicians' in the bottom image, most words are grounded to the correct objects or scenes and thus obtain more salient visual features.
Evaluation of Bilingual Lexicon Induction
We induce the linguistic features and localized visual features from the multi-lingual caption model for word translation from the source to the target language. Table TABREF30 presents the German-to-English word translation performance of the proposed features. On the BERGSMA dataset, the visual features achieve better translation results than the linguistic features, whereas they are inferior to the linguistic features on the MMID dataset. This is because the BERGSMA vocabulary mainly consists of nouns, while the parts of speech in MMID are more diverse. The visual features contribute most to translating concrete nouns, while the linguistic features benefit other, more abstract words. The fusion of the two features performs best for word translation, which demonstrates that the two features are complementary to each other.
We also compare our approach with previous state-of-the-art vision-based methods in Table TABREF30. Since our visual feature is an averaged representation, it is fair to compare with the CNN-mean baseline, where the only difference lies in the features rather than the similarity measurement. The localized features perform substantially better than the global image features, which demonstrates the effectiveness of the attention learned by the caption model. The combination of visual and linguistic features also significantly improves over the state-of-the-art vision-based CNN-avgmax method, with 11.6% and 6.7% absolute gains in P@1 on the BERGSMA and MMID datasets respectively.
In Figure FIGREF36, we present the word translation performance for different POS (part-of-speech) labels. We assign POS labels to words in different languages according to their English translations. We can see that the previous state-of-the-art vision-based approach contributes mostly to nouns, which are the most visually relevant words, while generating poor translations for other parts of speech. Our approach, however, substantially improves the translation performance for all part-of-speech classes. For concrete words such as nouns and adjectives, the localized visual features produce better representations than the previous global visual features; for other parts of speech, the linguistic features, which are learned with sentence context, effectively complement the visual features. The fusion of the linguistic and localized visual features in our approach leads to significant performance improvements over the state-of-the-art baseline for all POS classes.
Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34. Visually relevant concrete words such as `phone' and `red' are easier to translate, but our approach still generates reasonable results for abstract words such as `area' and function words such as `for', thanks to the fusion of visual and sentence contexts.
We also evaluate the influence of the image captioning architecture on bilingual lexicon induction. We compare our attention model (`attn') with the vanilla show-tell model (`mp') BIBREF15, which applies mean pooling over the spatial image features to generate captions and achieves inferior caption performance to the attention model. Table TABREF35 shows the word translation performance of the two caption models. The attention model, with its better caption performance, also induces better linguistic and localized visual features for bilingual lexicon induction. Nevertheless, the show-tell model still outperforms the previous vision-based methods in Table TABREF30.
Generalization to Diverse Language Pairs
Besides German-to-English word translation, we extend our approach to other languages, including French and Japanese, the latter of which is more distant from English.
The French-to-English word translation performance is presented in Table TABREF39. Note that the French caption training data is one fifth the size of the German data, which makes the French-to-English word translation performance less competitive than German-to-English. Nevertheless, the fusion of linguistic and visual features again achieves the best performance, boosting the baseline methods by 4.2% relative gains on the MRR metric and 17.4% relative improvements on the P@20 metric.
Table TABREF40 shows the Japanese-to-English word translation performance. Since the language structures of Japanese and English are quite different, the linguistic features learned from the multi-lingual caption model are less effective, but they still complement the visual features and improve translation quality. The results on multiple diverse language pairs further demonstrate the generalization of our approach across languages.
Conclusion
In this paper, we address the problem of bilingual lexicon induction without reliance on parallel corpora. Motivated by the observation that humans understand words better in context and can learn word translations using the external world (e.g., images) as a pivot, we propose a new vision-based approach that induces bilingual lexicons from images and their associated sentences. We build a multi-lingual caption model from multiple mono-lingual multimodal datasets to map words of different languages into joint spaces. Two types of word representations, linguistic features and localized visual features, are induced from the caption model, and the two are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of the proposed method, which leads to significant performance improvements over state-of-the-art vision-based approaches for all part-of-speech classes. In the future, we will further extend vision-pivot approaches to zero-resource machine translation without parallel sentences.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant No. 61772535, National Key Research and Development Plan under Grant No. 2016YFB1001202 and Research Foundation of Beijing Municipal Science & Technology Commission under Grant No. Z181100008918002. | CNN-mean, CNN-avgmax |
95abda842c4df95b4c5e84ac7d04942f1250b571 | 95abda842c4df95b4c5e84ac7d04942f1250b571_0 | Q: Which languages are used in the multi-lingual caption model? | German-English, French-English, and Japanese-English
95abda842c4df95b4c5e84ac7d04942f1250b571 | 95abda842c4df95b4c5e84ac7d04942f1250b571_1 | Q: Which languages are used in the multi-lingual caption model?
Text: Introduction
The bilingual lexicon induction task aims to automatically build word translation dictionaries across different languages, which is beneficial for various natural language processing tasks such as cross-lingual information retrieval BIBREF0 , multi-lingual sentiment analysis BIBREF1 , machine translation BIBREF2 and so on. Although building bilingual lexicon has achieved success with parallel sentences in resource-rich languages BIBREF2 , the parallel data is insufficient or even unavailable especially for resource-scarce languages and it is expensive to collect. On the contrary, there are abundant multimodal mono-lingual data on the Internet, such as images and their associated tags and descriptions, which motivates researchers to induce bilingual lexicon from these non-parallel data without supervision.
There are mainly two types of mono-lingual approaches to build bilingual dictionaries in recent works. The first is purely text-based, which explores the structure similarity between different linguistic space. The most popular approach among them is to linearly map source word embedding into the target word embedding space BIBREF3 , BIBREF4 . The second type utilizes vision as bridge to connect different languages BIBREF5 , BIBREF6 , BIBREF7 . It assumes that words correlating to similar images should share similar semantic meanings. So previous vision-based methods search images with multi-lingual words and translate words according to similarities of visual features extracted from the corresponding images. It has been proved that the visual-grounded word representation improves the semantic quality of the words BIBREF8 .
However, previous vision-based methods suffer from two limitations for bilingual lexicon induction. Firstly, the accurate translation performance is confined to concrete visual-relevant words such as nouns and adjectives as shown in Figure SECREF2 . For words without high-quality visual groundings, previous methods would generate poor translations BIBREF7 . Secondly, previous works extract visual features from the whole image to represent words and thus require object-centered images in order to obtain reliable visual groundings. However, common images usually contain multiple objects or scenes, and the word might only be grounded to part of the image, therefore the global visual features will be quite noisy to represent the word.
In this paper, we address the two limitations via learning from mono-lingual multimodal data with both sentence and visual context (e.g., image and caption data) to induce bilingual lexicon. Such multimodal data is also easily obtained for different languages on the Internet BIBREF9 . We propose a multi-lingual image caption model trained on multiple mono-lingual image caption data, which is able to induce two types of word representations for different languages in the joint space. The first is the linguistic feature learned from the sentence context with visual semantic constraints, so that it is able to generate more accurate translations for words that are less visual-relevant. The second is the localized visual feature which attends to the local region of the object or scene in the image for the corresponding word, so that the visual representation of words will be more salient than previous global visual features. The two representations are complementary and can be combined to induce better bilingual word translation.
We carry out experiments on multiple language pairs including German-English, French-English, and Japanese-English. The experimental results show that the proposed multi-lingual caption model not only achieves better caption performance than independent mono-lingual models for data-scarce languages, but also can induce the two types of features, linguistic and visual features, for different languages in joint spaces. Our proposed method consistently outperforms previous state-of-the-art vision-based bilingual word induction approaches on different languages. The contributions of this paper are as follows:
Related Work
The early works for bilingual lexicon induction require parallel data in different languages. BIBREF2 systematically investigates various word alignment methods with parallel texts to induce bilingual lexicon. However, the parallel data is scarce or even unavailable for low-resource languages. Therefore, methods with less dependency on the availability of parallel corpora are highly desired.
There are mainly two types of mono-lingual approaches for bilingual lexicon induction: text-based and vision-based methods. The text-based methods purely exploit the linguistic information to translate words. The initiative works BIBREF10 , BIBREF11 utilize word co-occurrences in different languages as clue for word alignment. With the improvement in word representation based on deep learning, BIBREF3 finds the structure similarity of the deep-learned word embeddings in different languages, and employs a parallel vocabulary to learn a linear mapping from the source to target word embeddings. BIBREF12 improves the translation performance via adding an orthogonality constraint to the mapping. BIBREF13 further introduces a matching mechanism to induce bilingual lexicon with fewer seeds. However, these models require seed lexicon as the start-point to train the bilingual mapping. Recently, BIBREF4 proposes an adversarial learning approach to learn the joint bilingual embedding space without any seed lexicon.
The vision-based methods exploit images to connect different languages, which assume that words corresponding to similar images are semantically alike. BIBREF5 collects images with labeled words in different languages to learn word translation with image as pivot. BIBREF6 improves the visual-based word translation performance via using more powerful visual representations: the CNN-based BIBREF14 features. The above works mainly focus on the translation of nouns and are limited in the number of collected languages. The recent work BIBREF7 constructs the current largest (with respect to the number of language pairs and types of part-of-speech) multimodal word translation dataset, MMID. They show that concrete words are easiest for vision-based translation methods while others are much less accurate. In our work, we alleviate the limitations of previous vision-based methods via exploring images and their captions rather than images with unstructured tags to connect different languages.
Image captioning has received more and more research attentions. Most image caption works focus on the English caption generation BIBREF15 , BIBREF16 , while there are limited works considering generating multi-lingual captions. The recent WMT workshop BIBREF17 has proposed a subtask of multi-lingual caption generation, where different strategies such as multi-task captioning and source-to-target translation followed by captioning have been proposed to generate captions in target languages. Our work proposes a multi-lingual image caption model that shares part of the parameters across different languages in order to benefit each other.
Unsupervised Bilingual Lexicon Induction
Our goal is to induce bilingual lexicon without supervision of parallel sentences or seed word pairs, purely based on the mono-lingual image caption data. In the following, we introduce the multi-lingual image caption model whose objectives for bilingual lexicon induction are two folds: 1) explicitly build multi-lingual word embeddings in the joint linguistic space; 2) implicitly extract the localized visual features for each word in the shared visual space. The former encodes linguistic information of words while the latter encodes the visual-grounded information, which are complementary for bilingual lexicon induction.
Multi-lingual Image Caption Model
Suppose we have mono-lingual image caption datasets INLINEFORM0 in the source language and INLINEFORM1 in the target language. The images INLINEFORM2 in INLINEFORM3 and INLINEFORM4 do not necessarily overlap, but cover overlapped object or scene classes which is the basic assumption of vision-based methods. For notation simplicity, we omit the superscript INLINEFORM5 for the data sample. Each image caption INLINEFORM6 and INLINEFORM7 is composed of word sequences INLINEFORM8 and INLINEFORM9 respectively, where INLINEFORM10 is the sentence length.
The proposed multi-lingual image caption model aims to generate sentences in different languages to describe the image content, which connects the vision and multi-lingual sentences. Figure FIGREF15 illustrates the framework of the caption model, which consists of three parts: the image encoder, word embedding module and language decoder.
The image encoder encodes the image into the shared visual space. We apply the Resnet152 BIBREF18 as our encoder INLINEFORM0 , which produces INLINEFORM1 vectors corresponding to different spatial locations in the image: DISPLAYFORM0
where INLINEFORM0 . The parameter INLINEFORM1 of the encoder is shared for different languages in order to encode all the images in the same visual space.
The word embedding module maps the one-hot word representation in each language into low-dimensional distributional embeddings: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 is the word embedding matrix for the source and target languages respectively. INLINEFORM2 and INLINEFORM3 are the vocabulary size of the two languages.
The decoder then generates word step by step conditioning on the encoded image feature and previous generated words. The probability of generating INLINEFORM0 in the source language is as follows: DISPLAYFORM0
where INLINEFORM0 is the hidden state of the decoder at step INLINEFORM1 , which is functioned by LSTM BIBREF19 : DISPLAYFORM0
The INLINEFORM0 is the dynamically located contextual image feature to generate word INLINEFORM1 via attention mechanism, which is the weighted sum of INLINEFORM2 computed by DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is a fully connected neural network. The parameter INLINEFORM1 in the decoder includes all the weights in the LSTM and the attention network INLINEFORM2 .
Similarly, INLINEFORM0 is the probability of generating INLINEFORM1 in the target language, which shares INLINEFORM2 with the source language. By sharing the same parameters across different languages in the encoder and decoder, both the visual features and the learned word embeddings for different languages are enforced to project in a joint semantic space. To be noted, the proposed multi-lingual parameter sharing strategy is not constrained to the presented image captioning model, but can be applied in various image captioning models such as show-tell model BIBREF15 and so on.
We use maximum likelihood as objective function to train the multi-lingual caption model, which maximizes the log-probability of the ground-truth captions: DISPLAYFORM0
Visual-guided Word Representation
The proposed multi-lingual caption model can induce similarities of words in different languages from two aspects: the linguistic similarity and the visual similarity. In the following, we discuss the two types of similarity and then construct the source and target word representations.
The linguistic similarity is reflected from the learned word embeddings INLINEFORM0 and INLINEFORM1 in the multi-lingual caption model. As shown in previous works BIBREF20 , word embeddings learned from the language contexts can capture syntactic and semantic regularities in the language. However, if the word embeddings of different languages are trained independently, they are not in the same linguistic space and we cannot compute similarities directly. In our multi-lingual caption model, since images in INLINEFORM2 and INLINEFORM3 share the same visual space, the features of sentence INLINEFORM4 and INLINEFORM5 belonging to similar images are bound to be close in the same space with the visual constraints. Meanwhile, the language decoder is also shared, which enforces the word embeddings across languages into the same semantic space in order to generate similar sentence features. Therefore, INLINEFORM6 and INLINEFORM7 not only encode the linguistic information of different languages but also share the embedding space which enables direct cross-lingual similarity comparison. We refer the linguistic features of source and target words INLINEFORM8 and INLINEFORM9 as INLINEFORM10 and INLINEFORM11 respectively.
For the visual similarity, the multi-lingual caption model locates the image region used to generate each word based on the spatial attention in Eq ( EQREF13 ), which can be used to calculate the localized visual representation of the word. However, since the attention is computed before word generation, the localization can be less accurate, and it cannot be generalized to image captioning models without spatial attention. Therefore, inspired by BIBREF21 , which occludes regions of the image to observe the change in classification probabilities, we feed different parts of the image to the caption model and investigate the probability changes for each word in the sentence. Algorithm SECREF16 presents the procedure of word localization and grounded visual feature generation. Note that such visual grounding is learned without supervision from the image caption data. Therefore, every word can be represented as a set of grounded visual features (the set size equals the number of occurrences of the word in the dataset). We refer to the localized visual feature set of source word INLINEFORM0 as INLINEFORM1 , and of target word INLINEFORM2 as INLINEFORM3 .
Algorithm: Generating localized visual features. Input: encoded image features INLINEFORM0 , sentence INLINEFORM1 . Output: localized visual features for each word INLINEFORM2 . For each INLINEFORM3 : compute INLINEFORM4 according to Eq ( EQREF10 ); INLINEFORM5 ; INLINEFORM6 ; INLINEFORM7 .
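The placeholders in the algorithm above are garbled in this extraction, so the following sketch reflects only one plausible reading of the procedure: each spatial feature vector is presented to the caption model in isolation, the probability of every word in the sentence is recorded, and each word is then represented by the feature of the region that maximizes its probability. The function names and the single-region scoring are assumptions, not the authors' exact algorithm.

import torch

def localized_features(decoder_step, spatial_feats, sentence, hidden_dim=512):
    """Return one grounded visual feature per word of `sentence` (a 1-D tensor of token ids)
    for a single image whose spatial features are `spatial_feats` of shape (1, k, feat_dim)."""
    k = spatial_feats.size(1)
    word_region_scores = torch.full((sentence.size(0) - 1, k), float("-inf"))
    for r in range(k):
        # Present only region r to the model and score every word of the sentence.
        region = spatial_feats[:, r:r + 1, :]
        h = spatial_feats.new_zeros(1, hidden_dim)
        state = (h, h.clone())
        for t in range(sentence.size(0) - 1):
            log_probs, state, _ = decoder_step(sentence[t:t + 1], state, region)
            word_region_scores[t, r] = log_probs[0, sentence[t + 1]]
    best_regions = word_region_scores.argmax(dim=1)       # most responsible region per word
    return [spatial_feats[0, r] for r in best_regions]    # grounded feature per word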
Word Translation Prediction
Since the word representations of the source and target languages lie in the same space, we can directly compute similarities across languages. We apply l2-normalization to the word representations and measure similarity with the cosine similarity. For linguistic features, the similarity is measured as: DISPLAYFORM0
However, a set of visual features is associated with each word, so the visual similarity measurement between two words must take two sets of visual features as input. We aggregate the visual features of each word into a single representation and then compute the cosine similarity, instead of point-wise similarities between the two sets: DISPLAYFORM0
The reasons for performing aggregation are twofold. First, in our approach the number of visual features is proportional to the word frequency rather than a fixed number as in BIBREF6 , BIBREF7 , so the computation cost for frequent words would be much higher without aggregation. Second, aggregation helps to reduce noise, which is especially important for abstract words. Abstract words such as `event' are more visually diverse, but the overall style of multiple images can still reflect their visual semantics.
Due to the complementary characteristics of the two features, we combine them to predict the word translation. The translated word for INLINEFORM0 is DISPLAYFORM0
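A small sketch of how a translation could be predicted for one source word, under two assumptions not fixed by the text above: the visual feature sets are mean-pooled, and the two similarities are fused by a simple weighted sum.

import numpy as np

def l2_normalize(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def translate(src_word, src_ling, tgt_ling, src_vis_sets, tgt_vis_sets, weight=0.5):
    """src_ling/tgt_ling: dicts word -> embedding vector;
    src_vis_sets/tgt_vis_sets: dicts word -> list of grounded region features."""
    q_ling = l2_normalize(src_ling[src_word])
    q_vis = l2_normalize(np.mean(src_vis_sets[src_word], axis=0))    # aggregate the visual set
    best_word, best_score = None, -np.inf
    for cand in tgt_ling:
        s_ling = float(q_ling @ l2_normalize(tgt_ling[cand]))
        s_vis = float(q_vis @ l2_normalize(np.mean(tgt_vis_sets[cand], axis=0)))
        score = weight * s_ling + (1.0 - weight) * s_vis              # assumed additive fusion
        if score > best_score:
            best_word, best_score = cand, score
    return best_word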
Datasets
For image captioning, we utilize the multi30k BIBREF22 , COCO BIBREF23 and STAIR BIBREF24 datasets. The multi30k dataset contains 30k images and annotations under two tasks. In task 1, each image is annotated with one English description which is then translated into German and French. In task 2, the image is independently annotated with 5 descriptions in English and German respectively. For German and English languages, we utilize annotations in task 2. For the French language, we can only employ French descriptions in task 1, so the training size for French is less than the other two languages. The COCO and STAIR datasets contain the same image set but are independently annotated in English and Japanese. Since the images in the wild for different languages might not overlap, we randomly split the image set into two disjoint parts of equal size. The images in each part only contain the mono-lingual captions. We use Moses SMT Toolkit to tokenize sentences and select words occurring more than five times in our vocabulary for each language. Table TABREF21 summarizes the statistics of caption datasets.
For bilingual lexicon induction, we use two visual datasets: BERGSMA and MMID. The BERGSMA dataset BIBREF5 consists of 500 German-English word translation pairs. Each word is associated with no more than 20 images. The words in BERGSMA dataset are all nouns. The MMID dataset BIBREF7 covers a larger variety of words and languages, including 9,808 German-English pairs and 9,887 French-English pairs. The source word can be mapped to multiple target words in their dictionary. Each word is associated with no more than 100 retrieved images. Since both these image datasets do not contain Japanese language, we download the Japanese-to-English dictionary online. We select words in each dataset that overlap with our caption vocabulary, which results in 230 German-English pairs in BERGSMA dataset, 1,311 German-English pairs and 1,217 French-English pairs in MMID dataset, and 2,408 Japanese-English pairs.
Experimental Setup
For the multi-lingual caption model, we set the word embedding size and the hidden size of LSTM as 512. Adam algorithm is applied to optimize the model with learning rate of 0.0001 and batch size of 128. The caption model is trained up to 100 epochs and the best model is selected according to caption performance on the validation set.
We compare our approach with two baseline vision-based methods proposed in BIBREF6 , BIBREF7 , which measure the similarity of two sets of global visual features for bilingual lexicon induction:
CNN-mean: taking the similarity score of the averaged feature of the two image sets.
CNN-avgmax: taking the average of the maximum similarity scores of two image sets.
We evaluate the word translation performance using MRR (mean-reciprocal rank) as follows: DISPLAYFORM0
where INLINEFORM0 is the groundtruth translated words for source word INLINEFORM1 , and INLINEFORM2 denotes the rank of groundtruth word INLINEFORM3 in the rank list of translation candidates. We also measure the precision at K (P@K) score, which is the proportion of source words whose groundtruth translations rank in the top K words. We set K as 1, 5, 10 and 20.
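For clarity, the two evaluation measures can be computed as in the sketch below; since the MRR formula itself is not recoverable from this extraction, the sketch assumes the reciprocal rank of the best-ranked gold translation per source word (rather than an average over all gold translations).

def mrr_and_precision_at_k(ranked_candidates, gold, ks=(1, 5, 10, 20)):
    """ranked_candidates: dict source word -> ranked list of predicted translations;
    gold: dict source word -> set of acceptable translations."""
    mrr, hits = 0.0, {k: 0 for k in ks}
    for src, ranking in ranked_candidates.items():
        ranks = [i + 1 for i, w in enumerate(ranking) if w in gold[src]]
        best = min(ranks) if ranks else None
        mrr += 1.0 / best if best else 0.0
        for k in ks:
            hits[k] += 1 if best is not None and best <= k else 0
    n = len(ranked_candidates)
    return mrr / n, {k: hits[k] / n for k in ks}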
Evaluation of Multi-lingual Image Caption
We first evaluate the captioning performance of the proposed multi-lingual caption model, which serves as the foundation stone for our bilingual lexicon induction method.
We compare the proposed multi-lingual caption model with the mono-lingual model, which has the same model structure but is trained separately for each language. Table TABREF22 presents the captioning results on the multi30k dataset, where all the languages are from the Latin family. The multi-lingual caption model achieves comparable performance with the mono-lingual model for data-sufficient languages such as English and German, and significantly outperforms the mono-lingual model for the data-scarce language French, with an absolute gain of 3.22 on the CIDEr metric. For languages with distinctive grammar structures such as English and Japanese, the multi-lingual model is also on par with the mono-lingual model, as shown in Table TABREF29 . Notably, the multi-lingual model contains about half as many parameters as the independent mono-lingual models combined, making it more computationally efficient.
We visualize the visual groundings learned by the multi-lingual caption model in Figure FIGREF32 . Though there are some mistakes, such as `musicians' in the bottom image, most of the words are grounded well on the correct objects or scenes and thus obtain more salient visual features.
Evaluation of Bilingual Lexicon Induction
We induce the linguistic features and localized visual features from the multi-lingual caption model for word translation from the source to target languages. Table TABREF30 presents the German-to-English word translation performance of the proposed features. In the BERGSMA dataset, the visual features achieve better translation results than the linguistic features while they are inferior to the linguistic features in the MMID dataset. This is because the vocabulary in BERGSMA dataset mainly consists of nouns, but the parts-of-speech is more diverse in the MMID dataset. The visual features contribute most to translate concrete noun words, while the linguistic features are beneficial to other abstract words. The fusion of the two features performs best for word translation, which demonstrates that the two features are complementary with each other.
We also compare our approach with previous state-of-the-art vision-based methods in Table TABREF30 . Since our visual feature is the averaged representation, it is fair to compare with the CNN-mean baseline method where the only difference lies in the feature rather than similarity measurement. The localized features perform substantially better than the global image features which demonstrate the effectiveness of the attention learned from the caption model. The combination of visual and linguistic features also significantly improves the state-of-the-art visual-based CNN-avgmax method with 11.6% and 6.7% absolute gains on P@1 on the BERGSMA and MMID dataset respectively.
In Figure FIGREF36 , we present the word translation performance for different POS (part-of-speech) labels. We assign the POS label for words in different languages according to their translations in English. We can see that the previous state-of-the-art vision-based approach contributes mostly to noun words which are most visual-relevant, while generates poor translations for other part-of-speech words. Our approach, however, substantially improves the translation performance for all part-of-speech classes. For concrete words such as nouns and adjectives, the localized visual features produce better representation than previous global visual features; and for other part-of-speech words, the linguistic features, which are learned with sentence context, are effective to complement the visual features. The fusion of the linguistic and localized visual features in our approach leads to significant performance improvement over the state-of-the-art baseline method for all types of POS classes.
Some correct and incorrect translation examples for different POS classes are shown in Table TABREF34 . The visual-relevant concrete words are easier to translate such as `phone' and `red'. But our approach still generates reasonable results for abstract words such as `area' and functional words such as `for' due to the fusion of visual and sentence contexts.
We also evaluate the influence of different image captioning structures on the bilingual lexicon induction. We compare our attention model (`attn') with the vanilla show-tell model (`mp') BIBREF15 , which applies mean pooling over spatial image features to generate captions and achieves inferior caption performance to the attention model. Table TABREF35 shows the word translation performance of the two caption models. The attention model with better caption performance also induces better linguistic and localized visual features for bilingual lexicon induction. Nevertheless, the show-tell model still outperforms the previous vision-based methods in Table TABREF30 .
Generalization to Diverse Language Pairs
Beside German-to-English word translation, we expand our approach to other languages including French and Japanese which is more distant from English.
The French-to-English word translation performance is presented in Table TABREF39 . To be noted, the training data of the French captions is five times less than German captions, which makes French-to-English word translation performance less competitive with German-to-English. But similarly, the fusion of linguistic and visual features achieves the best performance, which has boosted the baseline methods with 4.2% relative gains on the MRR metric and 17.4% relative improvements on the P@20 metric.
Table TABREF40 shows the Japanese-to-English word translation performance. Since the language structures of Japanese and English are quite different, the linguistic features learned from the multi-lingual caption model are less effective but still can benefit the visual features to improve the translation quality. The results on multiple diverse language pairs further demonstrate the generalization of our approach for different languages.
Conclusion
In this paper, we address the problem of bilingual lexicon induction without reliance on parallel corpora. Based on the experience that we humans can understand words better when they are within the context and can learn word translations with external world (e.g. images) as pivot, we propose a new vision-based approach to induce bilingual lexicon with images and their associated sentences. We build a multi-lingual caption model from multiple mono-lingual multimodal data to map words in different languages into joint spaces. Two types of word representation, linguistic features and localized visual features, are induced from the caption model. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which leads to significant performance improvement over the state-of-the-art vision-based approaches for all types of part-of-speech. In the future, we will further expand the vision-pivot approaches for zero-resource machine translation without parallel sentences.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant No. 61772535, National Key Research and Development Plan under Grant No. 2016YFB1001202 and Research Foundation of Beijing Municipal Science & Technology Commission under Grant No. Z181100008918002. | multiple language pairs including German-English, French-English, and Japanese-English. |
2419b38624201d678c530eba877c0c016cccd49f | 2419b38624201d678c530eba877c0c016cccd49f_0 | Q: Did they experiment on all the tasks?
Text: Introduction
The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, languages varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing.
Introduction :::
For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, resources for other tasks such as gender and age detection are generally rare. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.
Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet .
The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. In Section SECREF3, we describe or refer to published literature for the dataset we exploit for each task and provide results our corresponding model acquires. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5 We conclude in Section SECREF6
Methods
Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15, dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16, with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.
Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\textbf {\textit {D}}^{l}$, then is used to classify an unlabeled sample set $\textbf {\textit {D}}^{u}$. Most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages from predicted data to add to our train. We only report best settings here and the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12.
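The self-training wrapper described here can be sketched as follows; this is a generic illustration, and the confidence threshold, number of rounds, and fraction of added predictions are placeholders for values the authors tuned.

def self_train(train_model, labeled, unlabeled, rounds=3, threshold=0.9, top_fraction=0.2):
    """labeled: list of (text, label); unlabeled: list of texts; train_model(labeled) returns a
    classifier whose predict_proba(texts) yields (label, confidence) pairs (a hypothetical API)."""
    pool = list(unlabeled)
    for _ in range(rounds):
        model = train_model(labeled)
        scored = [(text, *model.predict_proba([text])[0]) for text in pool]
        # Keep only confident predictions, most confident first.
        confident = sorted((s for s in scored if s[2] >= threshold),
                           key=lambda s: s[2], reverse=True)
        selected = confident[: int(len(confident) * top_fraction)]
        if not selected:
            break
        labeled = labeled + [(text, label) for text, label, _ in selected]
        chosen = {text for text, _, _ in selected}
        pool = [t for t in pool if t not in chosen]
    return train_model(labeled)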
Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate $2e-6$.
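The fine-tuning recipe above could look roughly like the following sketch using the HuggingFace transformers library. The authors use the original TensorFlow BERT code (and a PyTorch implementation for sentiment), so the library choice and the minimal training loop here are illustrative assumptions; the hyper-parameter values mirror those stated in the text, and batching (size 32) and DEV-based model selection are omitted for brevity.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)              # e.g. 3 age classes

texts = ["مثال تغريدة أولى", "مثال تغريدة ثانية"]               # placeholder tweets
labels = torch.tensor([0, 2])

enc = tokenizer(texts, truncation=True, padding="max_length",
                max_length=50, return_tensors="pt")             # 50 WordPieces, as stated above

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)       # 2e-6 for the sentiment task
model.train()
for epoch in range(15):                                          # 15 epochs
    optimizer.zero_grad()
    out = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"], labels=labels)
    out.loss.backward()
    optimizer.step()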
Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are in some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets, other datasets (e.g., from the news domain) being much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data.
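A minimal sketch of the cleaning step described above; the exact regular expressions are assumptions, and the diacritic pattern covers the standard Arabic tashkeel and related marks.

import re

USERNAME = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+|www\.\S+")
DIACRITICS = re.compile(r"[\u0610-\u061A\u064B-\u065F\u0670\u06D6-\u06ED]")

def clean_tweet(text):
    """Remove user mentions, URLs, and Arabic diacritics, then squeeze whitespace."""
    text = USERNAME.sub(" ", text)
    text = URL.sub(" ", text)
    text = DIACRITICS.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("@user شُكْراً جزيلاً https://example.com"))   # -> "شكرا جزيلا"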
Data and Models ::: Age and Gender
Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.
Data and Models ::: Age and Gender :::
We shuffle the Arab-tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character to only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained on the TRAIN set with batch size = 8 and up to 30 words of each sequence. Each word in the input sequence is represented as a trainable 300-dimension vector. We use the top 100K words, weighted by mutual information, as our vocabulary in the embedding layer. We evaluate the model on the TEST set. Table TABREF14 shows that small-GRU obtains 36.29% accuracy on age classification and 53.37% accuracy on gender detection. We also report the accuracy of the fine-tuned BERT models on the TEST set in Table TABREF14. BERT models perform significantly better than our baseline on the two tasks, improving over the small-GRU by 15.13% accuracy (for age) and 11.93% accuracy (for gender).
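The baseline described above corresponds roughly to the following PyTorch sketch; the authors do not publish this code, so the layer wiring is an assumption consistent with the stated sizes (300-dimensional trainable embeddings over a 100K vocabulary, a single unidirectional 500-unit GRU layer, and dropout of 0.5).

import torch
import torch.nn as nn

class SmallGRU(nn.Module):
    def __init__(self, vocab_size=100_000, embed_dim=300, hidden_dim=500,
                 num_classes=3, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                     # token_ids: (batch, <=30) padded word ids
        embedded = self.embed(token_ids)
        _, last_hidden = self.gru(embedded)            # last_hidden: (1, batch, hidden_dim)
        return self.classifier(self.dropout(last_hidden.squeeze(0)))

model = SmallGRU(num_classes=3)                        # 3 age groups; use 2 classes for gender
logits = model(torch.randint(1, 100_000, (8, 30)))     # batch size 8, sequences of 30 words
print(logits.shape)                                    # torch.Size([8, 3])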
UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male", 528 “female", and 215 unknown users. We remove the “unknown" category and balance the dataset to have 528 from each of the two `male" and “female" categories. We ended with 69,509 tweets for `male" and 67,511 tweets for “female". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.
We also combine the Arab-tweet gender dataset with our UBC-Twitter dataset for gender on training, development, and test, respectively, to obtain new TRAIN, DEV, and TEST. We fine-tune the BERT-Base, Multilingual Cased model with the combined TRAIN and evaluate on combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% acc on combined DEV set, and 65.32% acc on combined TEST set. This is the model we package in AraNet .
Data and Models ::: Dialect
The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20, as described in BIBREF12. The corpus is divided into train, dev, and test, and the organizers masked test set labels. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets from the training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST) tweets. For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2 to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above, and that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data; more information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self-training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and an F1 score of 35.44. This is the winning system in the MADAR shared task, and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.
Data and Models ::: Emotion
We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on the use of seed phrases combining the Arabic first person pronoun <انا> (Eng. “I") with a seed word expressing an emotion, e.g., <فرحان> (Eng. “happy"). The manually labeled part of the data comprises $9,064$ tweets carrying the seed phrases, verified by human annotators for inclusion of the respective emotion. The rest of the dataset is labeled only using distant supervision (LAMA-DIST) ($182,605$ tweets). For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and 58.83 $F_1$ score. On the TEST set, we acquire 62.38% acc and 60.32% $F_1$ score.
Data and Models ::: Irony
We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.
IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as test data. Test labels were not released; teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baseline. We fine-tune the BERT-Base, Multilingual Cased model on our TRAIN set and evaluate on DEV. The small-GRU obtains 73.70% accuracy and 73.47% $F_1$ score. The BERT model significantly outperforms the small-GRU, achieving 81.64% accuracy and 81.62% $F_1$ score.
Data and Models ::: Sentiment
We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set $\lbrace `positive^{\prime }, `negative^{\prime }\rbrace $ by following rules:
{Positive, Pos, or High-Pos} to `positive';
{Negative, Neg, or High-Neg} to `negative';
Exclude samples which label is not `positive' or `negative' such as `obj', `mixed', `neut', or `neutral'.
After label normalization, we obtain 126,766 samples. We split this dataset into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using a PyTorch implementation with a $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% acc and 80.24% $F_1$. We evaluate this best model on the TEST set and obtain 77.31% acc and 76.67% $F_1$.
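The normalization rules listed above amount to a small mapping function, sketched here; label spellings beyond those quoted in the rules are assumptions.

POSITIVE = {"positive", "pos", "high-pos"}
NEGATIVE = {"negative", "neg", "high-neg"}

def normalize_label(raw):
    """Map a raw corpus label to 'positive'/'negative', or None to drop the sample."""
    label = raw.strip().lower()
    if label in POSITIVE:
        return "positive"
    if label in NEGATIVE:
        return "negative"
    return None        # e.g. 'obj', 'mixed', 'neut', 'neutral' are excluded

samples = [("text a", "High-Pos"), ("text b", "neutral"), ("text c", "Neg")]
binary = [(t, normalize_label(l)) for t, l in samples if normalize_label(l) is not None]
print(binary)          # [('text a', 'positive'), ('text c', 'negative')]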
AraNet Design and Use
AraNet consists of identifier tools including age, gender, dialect, emotion, irony and sentiment. Each tool comes with an embedded model. The tool comes with modules for performing normalization and tokenization. AraNet can be used as a Python library or a command-line tool:
Python Library: Importing AraNet module as a Python library provides identifiers’ functions. Prediction is based on a text or a path to a file and returns the identified class label. It also returns the probability distribution over all available class labels if needed. Figure FIGREF34 shows two examples of using the tool as Python library.
Command-line tool: AraNet provides scripts supporting both command-line and interactive mode. Command-line mode accepts a text or file path. Interaction mode is good for quick interactive line-by-line experiments and also pipeline redirections.
AraNet is available through pip or from source on GitHub with detailed documentation.
Related Works
As we pointed out earlier, there have been several works on some of the tasks but fewer on others. By far, Arabic sentiment analysis has been the most popular task. Several works have been performed for MSA BIBREF35, BIBREF0 and dialectal BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 sentiment analysis. A number of works have also been published for dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have been performed on the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.
A number of tools exist for Arabic natural language processing,including Penn Arabic treebank BIBREF39, POS tagger BIBREF40, BIBREF41, Buckwalter Morphological Analyzer BIBREF42 and Mazajak BIBREF7 for sentiment analysis .
Conclusion
We presented AraNet, a deep learning toolkit for a host of Arabic social media processing. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries on the wide region of the Arab world, and can enhance our understating of Arab online communication. AraNet will be publicly available upon acceptance. | Yes |
b99d100d17e2a121c3c8ff789971ce66d1d40a4d | b99d100d17e2a121c3c8ff789971ce66d1d40a4d_0 | Q: What models did they compare to?
Text: Introduction
The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, languages varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing.
Introduction :::
For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, resources for other tasks such as gender and age detection are generally rare. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.
Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet .
The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. In Section SECREF3, we describe or refer to published literature for the dataset we exploit for each task and provide results our corresponding model acquires. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5 We conclude in Section SECREF6
Methods
Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15, dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16, with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.
Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\textbf {\textit {D}}^{l}$, then is used to classify an unlabeled sample set $\textbf {\textit {D}}^{u}$. Most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages from predicted data to add to our train. We only report best settings here and the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12.
Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate $2e-6$.
Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are in some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets, other datasets (e.g., from the news domain) being much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data.
Data and Models ::: Age and Gender
Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19 , which we will refer to as Arab-Tweet. Arab-tweet is a tweet dataset of 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set male, female and age group labels from the set under-25, 25-to34, above-35 at the user-level, which in turn is assigned at tweet level. Tweets with less than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10. Table TABREF10 also provides class breakdown across our splits.We note that BIBREF19 do not report classification models exploiting the data.
Data and Models ::: Age and Gender :::
We shuffle the Arab-tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character to only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained on the TRAIN set with batch size = 8 and up to 30 words of each sequence. Each word in the input sequence is represented as a trainable 300-dimension vector. We use the top 100K words, weighted by mutual information, as our vocabulary in the embedding layer. We evaluate the model on the TEST set. Table TABREF14 shows that small-GRU obtains 36.29% accuracy on age classification and 53.37% accuracy on gender detection. We also report the accuracy of the fine-tuned BERT models on the TEST set in Table TABREF14. BERT models perform significantly better than our baseline on the two tasks, improving over the small-GRU by 15.13% accuracy (for age) and 11.93% accuracy (for gender).
UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled 1,989 users from each of the 21 Arab countries. The data had 1,246 “male", 528 “female", and 215 unknown users. We remove the “unknown" category and balance the dataset to have 528 from each of the two `male" and “female" categories. We ended with 69,509 tweets for `male" and 67,511 tweets for “female". We split the users into 80% TRAIN set (110,750 tweets for 845 users), 10% DEV set (14,158 tweets for 106 users), and 10% TEST set (12,112 tweets for 105 users). We, then, model this dataset with BERT-Base, Multilingual Cased model and evaluate on development and test sets. Table TABREF15 shows that fine-tuned model obtains 62.42% acc on DEV and 60.54% acc on TEST.
We also combine the Arab-tweet gender dataset with our UBC-Twitter dataset for gender on training, development, and test, respectively, to obtain new TRAIN, DEV, and TEST. We fine-tune the BERT-Base, Multilingual Cased model with the combined TRAIN and evaluate on combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% acc on combined DEV set, and 65.32% acc on combined TEST set. This is the model we package in AraNet .
Data and Models ::: Dialect
The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20, as described in BIBREF12. The corpus is divided into train, dev, and test, and the organizers masked test set labels. We used tweets from 21 Arab countries as distributed by task organizers, except that we lost some tweets from the training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV) and 466 (TEST) tweets. For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2 to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, as described above, and that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data; more information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self-training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and an F1 score of 35.44. This is the winning system in the MADAR shared task, and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform superbly, with 77.40% acc and 71.70% F1 score on unseen MADAR blind test data.
Data and Models ::: Emotion
We make use of two datasets, the LAMA-DINA dataset from BIBREF22, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels. The tweets are labeled with the Plutchik 8 primary emotions from the set: {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on the use of seed phrases combining the Arabic first person pronoun <انا> (Eng. “I") with a seed word expressing an emotion, e.g., <فرحان> (Eng. “happy"). The manually labeled part of the data comprises $9,064$ tweets carrying the seed phrases, verified by human annotators for inclusion of the respective emotion. The rest of the dataset is labeled only using distant supervision (LAMA-DIST) ($182,605$ tweets). For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and 58.83 $F_1$ score. On the TEST set, we acquire 62.38% acc and 60.32% $F_1$ score.
Data and Models ::: Irony
We use the dataset for irony identification on Arabic tweets released by IDAT@FIRE2019 shared-task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine.
IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as test data. Test labels were not released; teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baseline. We fine-tune the BERT-Base, Multilingual Cased model on our TRAIN set and evaluate on DEV. The small-GRU obtains 73.70% accuracy and 73.47% $F_1$ score. The BERT model significantly outperforms the small-GRU, achieving 81.64% accuracy and 81.62% $F_1$ score.
Data and Models ::: Sentiment
We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize different types of label to binary labels in the set $\lbrace `positive^{\prime }, `negative^{\prime }\rbrace $ by following rules:
{Positive, Pos, or High-Pos} to `positive';
{Negative, Neg, or High-Neg} to `negative';
Exclude samples which label is not `positive' or `negative' such as `obj', `mixed', `neut', or `neutral'.
After label normalization, we obtain 126,766 samples. We split this dataset into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using a PyTorch implementation with a $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% acc and 80.24% $F_1$. We evaluate this best model on the TEST set and obtain 77.31% acc and 76.67% $F_1$.
AraNet Design and Use
AraNet consists of identifier tools including age, gender, dialect, emotion, irony and sentiment. Each tool comes with an embedded model. The tool comes with modules for performing normalization and tokenization. AraNet can be used as a Python library or a command-line tool:
Python Library: Importing AraNet module as a Python library provides identifiers’ functions. Prediction is based on a text or a path to a file and returns the identified class label. It also returns the probability distribution over all available class labels if needed. Figure FIGREF34 shows two examples of using the tool as Python library.
Command-line tool: AraNet provides scripts supporting both command-line and interactive mode. Command-line mode accepts a text or file path. Interaction mode is good for quick interactive line-by-line experiments and also pipeline redirections.
AraNet is available through pip or from source on GitHub with detailed documentation.
Related Works
As we pointed out earlier, there have been several works on some of the tasks but fewer on others. By far, Arabic sentiment analysis has been the most popular task. Several works have been performed for MSA BIBREF35, BIBREF0 and dialectal BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 sentiment analysis. A number of works have also been published for dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have been performed on the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.
A number of tools exist for Arabic natural language processing,including Penn Arabic treebank BIBREF39, POS tagger BIBREF40, BIBREF41, Buckwalter Morphological Analyzer BIBREF42 and Mazajak BIBREF7 for sentiment analysis .
Conclusion
We presented AraNet, a deep learning toolkit for a host of Arabic social media processing. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently-developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries on the wide region of the Arab world, and can enhance our understating of Arab online communication. AraNet will be publicly available upon acceptance. | we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) |
578d0b23cb983b445b1a256a34f969b34d332075 | 578d0b23cb983b445b1a256a34f969b34d332075_0 | Q: What datasets are used in training?
Text: Introduction
The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, languages varieties, and dialects used on social media and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities) make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing.
Introduction :::
For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, resources for other tasks such as gender and age detection are generally rare. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.
Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research since most existing works either exploit smaller data (and so it will not be a fair comparison), use methods pre-dating BERT (and so will likely be outperformed by our models) . For many of the tasks we model, there have not been standard benchmarks for comparisons across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet .
The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. In Section SECREF3, we describe or refer to published literature for the dataset we exploit for each task and provide results our corresponding model acquires. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5 We conclude in Section SECREF6
Methods
Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15, dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16, with multi-head attention. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide host of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.
Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\textbf {\textit {D}}^{l}$, then is used to classify an unlabeled sample set $\textbf {\textit {D}}^{u}$. Most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages from predicted data to add to our train. We only report best settings here and the reader is referred to our winning system on the MADAR shared task for more details on these different settings BIBREF12.
Implementation & Models Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors . The model is trained on 104 languages (including Arabic) with 12 layer, 768 hidden units each, 12 attention heads, and has 110M parameters in entire model. The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on each respective labeled dataset for each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate $2e-6$.
Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are in some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets, other datasets (e.g., from the news domain) being much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data.
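A sketch of this cleaning step in Python; the exact diacritic set and regular expressions the authors use are not specified, so the patterns below (common Arabic harakat plus the superscript alef) are assumptions.

    import re

    USERNAME = re.compile(r"@\w+")
    URL = re.compile(r"https?://\S+|www\.\S+")
    DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")   # fathatan ... sukun, superscript alef

    def clean_tweet(text):
        # Remove usernames, URLs, and diacritics, then collapse extra whitespace.
        text = USERNAME.sub(" ", text)
        text = URL.sub(" ", text)
        text = DIACRITICS.sub("", text)
        return re.sub(r"\s+", " ", text).strip()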
Data and Models ::: Age and Gender
Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19, which we will refer to as Arab-Tweet. Arab-Tweet is a tweet dataset covering 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 tweets and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set {male, female} and age group labels from the set {under-25, 25-to-34, above-35} at the user level, which are in turn assigned at the tweet level. Tweets with fewer than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10, which also provides the class breakdown across our splits. We note that BIBREF19 do not report classification models exploiting the data.
Data and Models ::: Age and Gender :::
We shuffle the Arab-Tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character to only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained on the TRAIN set with a batch size of 8 and up to 30 words per sequence. Each word in the input sequence is represented as a trainable 300-dimension vector. We use the top 100K words, weighted by mutual information, as our vocabulary in the embedding layer. We evaluate the model on the TEST set. Table TABREF14 shows that small-GRU obtains 36.29% accuracy on age classification and 53.37% accuracy on gender detection. We also report the accuracy of the fine-tuned BERT models on the TEST set in Table TABREF14. The BERT models perform significantly better than our baseline on the two tasks, improving over small-GRU by 15.13% (age) and 11.93% (gender) accuracy.
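A sketch of such a baseline in PyTorch, assuming the 300-d trainable embeddings, single 500-unit unidirectional GRU layer, and dropout of 0.5 described above; where dropout is applied and the exact output layer are not stated, so those parts are assumptions.

    import torch
    import torch.nn as nn

    class SmallGRU(nn.Module):
        def __init__(self, vocab_size=100_000, emb_dim=300, hidden=500, num_classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.gru = nn.GRU(emb_dim, hidden, batch_first=True)   # unidirectional
            self.drop = nn.Dropout(0.5)
            self.out = nn.Linear(hidden, num_classes)

        def forward(self, token_ids):             # token_ids: (batch, <=30 tokens)
            _, h = self.gru(self.emb(token_ids))  # h: (1, batch, hidden)
            return self.out(self.drop(h.squeeze(0)))   # class logits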
UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled a total of 1,989 users from the 21 Arab countries. The data had 1,246 “male", 528 “female", and 215 unknown users. We remove the “unknown" category and balance the dataset to have 528 users from each of the “male" and “female" categories. We ended up with 69,509 tweets for “male" and 67,511 tweets for “female". We split the users into an 80% TRAIN set (110,750 tweets from 845 users), a 10% DEV set (14,158 tweets from 106 users), and a 10% TEST set (12,112 tweets from 105 users). We then model this dataset with the BERT-Base, Multilingual Cased model and evaluate on the development and test sets. Table TABREF15 shows that the fine-tuned model obtains 62.42% accuracy on DEV and 60.54% accuracy on TEST.
We also combine the Arab-Tweet gender dataset with our UBC Twitter gender dataset, merging the training, development, and test portions respectively to obtain new TRAIN, DEV, and TEST sets. We fine-tune the BERT-Base, Multilingual Cased model on the combined TRAIN and evaluate on the combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% accuracy on the combined DEV set and 65.32% accuracy on the combined TEST set. This is the model we package in AraNet.
Data and Models ::: Dialect
The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20, as described in BIBREF12. The task 2 corpus, comprising tweets from 21 Arab countries as distributed by the task organizers, is divided into train, dev, and test, and the organizers masked the test set labels. We lost some tweets from the training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV), and 466 (TEST) tweets. For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2 to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, and that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data; more information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self-training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and an F1 score of 35.44. This is the winning system model in the MADAR shared task, and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform strongly, with 77.40% accuracy and a 71.70% F1 score on the unseen MADAR blind test data.
Data and Models ::: Emotion
We make use of two datasets, both from BIBREF22: LAMA-DINA, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels, and LAMA-DIST, which is labeled only via distant supervision. The tweets are labeled with the 8 Plutchik primary emotions from the set {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on the use of seed phrases consisting of the Arabic first person pronoun <انا> (Eng. “I") + a seed word expressing an emotion, e.g., <فرحان> (Eng. “happy"). The manually labeled part of the data comprises $9,064$ tweets carrying the seed phrases, verified by human annotators for inclusion of the respective emotion. The rest of the data ($182,605$ tweets) is labeled only using distant supervision and constitutes LAMA-DIST. For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and 58.83 $F_1$ score. On the TEST set, we acquire 62.38% accuracy and 60.32% $F_1$ score.
Data and Models ::: Irony
We use the dataset for irony identification on Arabic tweets released by the IDAT@FIRE2019 shared task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events), and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants of “irony"). Duplicates, retweets, and non-intelligible tweets were removed by the organizers. Tweets involve both MSA and dialects at various degrees of granularity, such as Egyptian, Gulf, and Levantine.
IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by the organizers as training data. In addition, 1,006 tweets were used by the organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baseline. We fine-tune the BERT-Base, Multilingual Cased model on our TRAIN and evaluate on DEV. The small-GRU obtains 73.70% accuracy and 73.47% $F_1$ score. The BERT model significantly outperforms small-GRU, achieving 81.64% accuracy and 81.62% $F_1$ score.
Data and Models ::: Sentiment
We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize the different types of labels to binary labels in the set $\lbrace `positive^{\prime }, `negative^{\prime }\rbrace $ by the following rules (a sketch of this mapping in code follows the list):
{Positive, Pos, or High-Pos} to `positive';
{Negative, Neg, or High-Neg} to `negative';
Exclude samples whose label is not `positive' or `negative', such as `obj', `mixed', `neut', or `neutral'.
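A minimal sketch of this label mapping in Python; the literal label strings found in the individual corpora may vary slightly, so the sets below are illustrative.

    POSITIVE = {"Positive", "Pos", "High-Pos"}
    NEGATIVE = {"Negative", "Neg", "High-Neg"}

    def normalize(label):
        # Map heterogeneous labels to the binary scheme; None means "exclude".
        if label in POSITIVE:
            return "positive"
        if label in NEGATIVE:
            return "negative"
        return None   # e.g. 'obj', 'mixed', 'neut', 'neutral'

    def to_binary(samples):
        # samples: iterable of (text, raw_label) pairs
        pairs = ((text, normalize(lab)) for text, lab in samples)
        return [(text, lab) for text, lab in pairs if lab is not None]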
After label normalization, we obtain 126,766 samples. We split this dataset into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using the PyTorch implementation with a $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% accuracy and 80.24% $F_1$. We evaluate this best model on the TEST set and obtain 77.31% accuracy and 76.67% $F_1$.
AraNet Design and Use
AraNet consists of identifier tools for age, gender, dialect, emotion, irony, and sentiment. Each tool comes with an embedded model, and the toolkit also includes modules for performing normalization and tokenization. AraNet can be used as a Python library or a command-line tool:
Python Library: Importing the AraNet module as a Python library provides access to the identifiers’ functions. Prediction takes a text or a path to a file and returns the identified class label. It can also return the probability distribution over all available class labels if needed. Figure FIGREF34 shows two examples of using the tool as a Python library.
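The snippet below only illustrates the usage pattern just described; it is hypothetical, and the actual module, class, and argument names are defined by the released AraNet documentation rather than by this sketch.

    # Hypothetical sketch; names such as `aranet`, `load`, `predict`, and
    # `return_probs` are placeholders, not the published AraNet API.
    import aranet

    sentiment = aranet.load("sentiment")                 # load an embedded model
    label = sentiment.predict("text or /path/to/file")   # identified class label
    label, probs = sentiment.predict("some text", return_probs=True)  # optional
                                                          # probability distribution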
Command-line tool: AraNet provides scripts supporting both command-line and interactive modes. Command-line mode accepts a text or a file path. Interactive mode is suited to quick line-by-line experiments as well as pipeline redirection.
AraNet is available through pip or from source on GitHub with detailed documentation.
Related Works
As we pointed out earlier, there have been several works on some of the tasks but fewer on others. By far, Arabic sentiment analysis has been the most popular task. Several works have addressed sentiment analysis for MSA BIBREF35, BIBREF0 and for dialects BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. A number of works have also been published on dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have targeted the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.
A number of tools and resources exist for Arabic natural language processing, including the Penn Arabic Treebank BIBREF39, POS taggers BIBREF40, BIBREF41, the Buckwalter Morphological Analyzer BIBREF42, and Mazajak BIBREF7 for sentiment analysis.
Conclusion
We presented AraNet, a deep learning toolkit for a host of Arabic social media processing tasks. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries about the wide region of the Arab world, and can enhance our understanding of Arab online communication. AraNet will be publicly available upon acceptance. | Arap-Tweet BIBREF19, an in-house Twitter dataset for gender, the MADAR shared task 2 BIBREF20, the LAMA-DINA dataset from BIBREF22, LAMA-DIST, Arabic tweets released by the IDAT@FIRE2019 shared task BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34
578d0b23cb983b445b1a256a34f969b34d332075 | 578d0b23cb983b445b1a256a34f969b34d332075_1 | Q: What datasets are used in training?
Text: Introduction
The proliferation of social media has made it possible to study large online communities at scale, thus making important discoveries that can facilitate decision making, guide policies, improve health and well-being, aid disaster response, etc. The wide host of languages, language varieties, and dialects used on social media, and the nuanced differences between users of various backgrounds (e.g., different age groups, gender identities), make it especially difficult to derive sufficiently valuable insights based on single prediction tasks. For these reasons, it would be desirable to offer NLP tools that can help stitch together a complete picture of an event across different geographical regions as impacting, and being impacted by, individuals of different identities. We offer AraNet as one such tool for Arabic social media processing.
Introduction :::
For Arabic, a collection of languages and varieties spoken by a wide population of $\sim 400$ million native speakers covering a vast geographical region (shown in Figure FIGREF2), no such suite of tools currently exists. Many works have focused on sentiment analysis, e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7 and dialect identification BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. However, there is a general rarity of resources for other tasks such as gender and age detection. This motivates our toolkit, which we hope can meet the current critical need for studying Arabic communities online. This is especially valuable given the waves of protests, uprisings, and revolutions that have been sweeping the region during the last decade.
Although we create new models for tasks such as sentiment analysis and gender detection as part of AraNet, our focus is more on putting together the toolkit itself and providing strong baselines that can be compared to. Hence, although we provide some baseline models for some of the tasks, we do not explicitly compare to previous research, since most existing works either exploit smaller data (and so it would not be a fair comparison) or use methods pre-dating BERT (and so would likely be outperformed by our models). For many of the tasks we model, there have not been standard benchmarks for comparison across models. This makes it difficult to measure progress and identify areas worthy of allocating efforts and budgets. As such, by publishing our toolkit models, we believe model-based comparisons will be one way to relieve this bottleneck. For these reasons, we also package models from our recent works on dialect BIBREF12 and irony BIBREF14 as part of AraNet.
The rest of the paper is organized as follows: In Section SECREF2 we describe our methods. In Section SECREF3, we describe or refer to published literature for the dataset we exploit for each task and provide the results our corresponding model acquires. Section SECREF4 is about AraNet design and use, and we overview related works in Section SECREF5. We conclude in Section SECREF6.
Methods
Supervised BERT. Across all our tasks, we use Bidirectional Encoder Representations from Transformers (BERT). BERT BIBREF15 dispenses with recurrence and convolution. It is based on a multi-layer bidirectional Transformer encoder BIBREF16 with multi-head attention. It uses masked language modeling to enable pre-trained deep bidirectional representations, in addition to a binary next-sentence prediction task. The pre-trained BERT can be easily fine-tuned on a wide range of sentence-level and token-level tasks. All our models are trained in a fully supervised fashion, with dialect id being the only task where we leverage semi-supervised learning. We briefly outline our semi-supervised methods next.
Self-Training. Only for the dialect id task, we investigate augmenting our human-labeled training data with automatically-predicted data from self-training. Self-training is a wrapper method for semi-supervised learning BIBREF17, BIBREF18 where a classifier is initially trained on a (usually small) set of labeled samples $\textbf {\textit {D}}^{l}$, then is used to classify an unlabeled sample set $\textbf {\textit {D}}^{u}$. The most confident predictions acquired by the original supervised model are added to the labeled set, and the model is iteratively re-trained. We perform self-training using different confidence thresholds and choose different percentages of the predicted data to add to our training set. We only report the best settings here; the reader is referred to our winning system on the MADAR shared task for more details on these settings BIBREF12.
Implementation & Model Parameters. For all our tasks, we use the BERT-Base Multilingual Cased model released by the authors. The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in total. The model has a shared WordPiece vocabulary of 119,547 tokens and was pre-trained on the entire Wikipedia of each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$, train for 15 epochs, and choose the best model based on performance on a development set. We use the same hyper-parameters in all of our BERT models. We fine-tune BERT on the respective labeled dataset of each task. For BERT input, we apply WordPiece tokenization, setting the maximal sequence length to 50 words/WordPieces. For all tasks, we use a TensorFlow implementation. An exception is the sentiment analysis task, where we used a PyTorch implementation with the same hyper-parameters but with a learning rate of $2e-6$.
Pre-processing. Most of our training data in all tasks come from Twitter. Exceptions are in some of the datasets we use for sentiment analysis, which we point out in Section SECREF23. Our pre-processing thus incorporates methods to clean tweets, other datasets (e.g., from the news domain) being much less noisy. For pre-processing, we remove all usernames, URLs, and diacritics in the data.
Data and Models ::: Age and Gender
Arab-Tweet. For modeling age and gender, we use Arap-Tweet BIBREF19, which we will refer to as Arab-Tweet. Arab-Tweet is a tweet dataset covering 11 Arabic regions from 17 different countries. For each region, data from 100 Twitter users were crawled. Users needed to have posted at least 2,000 tweets and were selected based on an initial list of seed words characteristic of each region. The seed list included words such as <برشة> /barsha/ ‘many’ for Tunisian Arabic and <وايد> /wayed/ ‘many’ for Gulf Arabic. BIBREF19 employed human annotators to verify that users do belong to each respective region. Annotators also assigned gender labels from the set {male, female} and age group labels from the set {under-25, 25-to-34, above-35} at the user level, which are in turn assigned at the tweet level. Tweets with fewer than 3 words and re-tweets were removed. Refer to BIBREF19 for details about how annotation was carried out. We provide a description of the data in Table TABREF10, which also provides the class breakdown across our splits. We note that BIBREF19 do not report classification models exploiting the data.
Data and Models ::: Age and Gender :::
We shuffle the Arab-Tweet dataset and split it into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is in Table TABREF10. For pre-processing, we reduce 2 or more consecutive repetitions of the same character to only 2 and remove diacritics. With this dataset, we train a small unidirectional GRU (small-GRU) with a single 500-unit hidden layer and $dropout=0.5$ as a baseline. Small-GRU is trained on the TRAIN set with a batch size of 8 and up to 30 words per sequence. Each word in the input sequence is represented as a trainable 300-dimension vector. We use the top 100K words, weighted by mutual information, as our vocabulary in the embedding layer. We evaluate the model on the TEST set. Table TABREF14 shows that small-GRU obtains 36.29% accuracy on age classification and 53.37% accuracy on gender detection. We also report the accuracy of the fine-tuned BERT models on the TEST set in Table TABREF14. The BERT models perform significantly better than our baseline on the two tasks, improving over small-GRU by 15.13% (age) and 11.93% (gender) accuracy.
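One way to build the mutual-information-weighted 100K-word vocabulary mentioned above, sketched with scikit-learn; the authors do not detail their exact weighting procedure, so this is an assumption.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import mutual_info_classif

    def top_k_vocabulary(texts, labels, k=100_000):
        # Score each word by mutual information with the class label, keep top k.
        vec = CountVectorizer()
        counts = vec.fit_transform(texts)
        mi = mutual_info_classif(counts, labels, discrete_features=True)
        words = vec.get_feature_names_out()
        ranked = sorted(zip(words, mi), key=lambda pair: pair[1], reverse=True)
        return [word for word, _ in ranked[:k]]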
UBC Twitter Gender Dataset. We also develop an in-house Twitter dataset for gender. We manually labeled a total of 1,989 users from the 21 Arab countries. The data had 1,246 “male", 528 “female", and 215 unknown users. We remove the “unknown" category and balance the dataset to have 528 users from each of the “male" and “female" categories. We ended up with 69,509 tweets for “male" and 67,511 tweets for “female". We split the users into an 80% TRAIN set (110,750 tweets from 845 users), a 10% DEV set (14,158 tweets from 106 users), and a 10% TEST set (12,112 tweets from 105 users). We then model this dataset with the BERT-Base, Multilingual Cased model and evaluate on the development and test sets. Table TABREF15 shows that the fine-tuned model obtains 62.42% accuracy on DEV and 60.54% accuracy on TEST.
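A sketch of a user-level split like the one described above, so that all tweets of a given user land in exactly one of TRAIN/DEV/TEST; the shuffling strategy and random seed are assumptions.

    import random

    def split_by_user(tweets_by_user, seed=42):
        # tweets_by_user: dict mapping user_id -> list of that user's tweets
        users = sorted(tweets_by_user)
        random.Random(seed).shuffle(users)
        n = len(users)
        train_users = users[:int(0.8 * n)]
        dev_users = users[int(0.8 * n):int(0.9 * n)]
        test_users = users[int(0.9 * n):]
        gather = lambda group: [t for u in group for t in tweets_by_user[u]]
        return gather(train_users), gather(dev_users), gather(test_users)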
We also combine the Arab-Tweet gender dataset with our UBC Twitter gender dataset, merging the training, development, and test portions respectively to obtain new TRAIN, DEV, and TEST sets. We fine-tune the BERT-Base, Multilingual Cased model on the combined TRAIN and evaluate on the combined DEV and TEST. As Table TABREF15 shows, the model obtains 65.32% accuracy on the combined DEV set and 65.32% accuracy on the combined TEST set. This is the model we package in AraNet.
Data and Models ::: Dialect
The dialect identification model in AraNet is based on our winning system in the MADAR shared task 2 BIBREF20, as described in BIBREF12. The task 2 corpus, comprising tweets from 21 Arab countries as distributed by the task organizers, is divided into train, dev, and test, and the organizers masked the test set labels. We lost some tweets from the training data when we crawled using tweet ids, ultimately acquiring 2,036 (TRAIN-A), 281 (DEV), and 466 (TEST) tweets. For our experiments, we also make use of the task 1 corpus (95,000 sentences BIBREF21). More specifically, we concatenate the task 1 data to the training data of task 2 to create TRAIN-B. Note that both DEV and TEST across our experiments are exclusively the data released in task 2, and that TEST labels were only released to participants after the official task evaluation. Table TABREF17 shows statistics of the data; more information about the data is in BIBREF21. We use TRAIN-A to perform supervised modeling with BERT and TRAIN-B for self-training, under various conditions. We refer the reader to BIBREF12 for more information about our different experimental settings on dialect id. We acquire our best results with self-training, with a classification accuracy of 49.39% and an F1 score of 35.44. This is the winning system model in the MADAR shared task, and we showed in BIBREF12 that our tweet-level predictions can be ported to user-level prediction. On user-level detection, our models perform strongly, with 77.40% accuracy and a 71.70% F1 score on the unseen MADAR blind test data.
Data and Models ::: Emotion
We make use of two datasets, both from BIBREF22: LAMA-DINA, a Twitter dataset with a combination of gold labels from BIBREF23 and distant supervision labels, and LAMA-DIST, which is labeled only via distant supervision. The tweets are labeled with the 8 Plutchik primary emotions from the set {anger, anticipation, disgust, fear, joy, sadness, surprise, trust}. The distant supervision approach depends on the use of seed phrases consisting of the Arabic first person pronoun <انا> (Eng. “I") + a seed word expressing an emotion, e.g., <فرحان> (Eng. “happy"). The manually labeled part of the data comprises $9,064$ tweets carrying the seed phrases, verified by human annotators for inclusion of the respective emotion. The rest of the data ($182,605$ tweets) is labeled only using distant supervision and constitutes LAMA-DIST. For more information about the dataset, readers are referred to BIBREF22. The data distribution over the emotion classes is in Table TABREF20. We combine the LAMA+DINA and LAMA-DIST training sets and refer to this new training set as LAMA-D2 (189,903 tweets). We fine-tune BERT-Base, Multilingual Cased on LAMA-D2 and evaluate the model with the same DEV and TEST sets from LAMA+DINA. On the DEV set, the fine-tuned BERT model obtains 61.43% accuracy and 58.83 $F_1$ score. On the TEST set, we acquire 62.38% accuracy and 60.32% $F_1$ score.
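A sketch of this distant labeling heuristic; the seed lexicon shown is only illustrative (the full per-emotion lexicon is described in BIBREF22), and whether tokens may intervene between the pronoun and the seed word is an assumption.

    import re

    # Illustrative seeds only; the full seed lexicon is described in BIBREF22.
    EMOTION_SEEDS = {
        "joy": ["فرحان", "سعيد"],
        "anger": ["غاضب"],
    }

    def distant_label(tweet):
        # Label a tweet if it contains "انا" (I) directly followed by a seed word,
        # e.g. "انا فرحان" -> joy; return None when no seed phrase matches.
        for emotion, seeds in EMOTION_SEEDS.items():
            for seed in seeds:
                if re.search("انا" + r"\s+" + re.escape(seed), tweet):
                    return emotion
        return None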
Data and Models ::: Irony
We use the dataset for irony identification on Arabic tweets released by the IDAT@FIRE2019 shared task BIBREF24. The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events), and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants of “irony"). Duplicates, retweets, and non-intelligible tweets were removed by the organizers. Tweets involve both MSA and dialects at various degrees of granularity, such as Egyptian, Gulf, and Levantine.
IDAT@FIRE2019 BIBREF24 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. A total of 4,024 tweets were released by the organizers as training data. In addition, 1,006 tweets were used by the organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We use the same small-GRU architecture of Section 3.1 as our baseline. We fine-tune the BERT-Base, Multilingual Cased model on our TRAIN and evaluate on DEV. The small-GRU obtains 73.70% accuracy and 73.47% $F_1$ score. The BERT model significantly outperforms small-GRU, achieving 81.64% accuracy and 81.62% $F_1$ score.
Data and Models ::: Sentiment
We collect 15 datasets related to sentiment analysis of Arabic, including MSA and dialects BIBREF25, BIBREF26, BIBREF27, BIBREF1, BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. Table TABREF28 shows all the corpora we use. These datasets involve different types of sentiment analysis tasks such as binary classification (i.e., negative or positive), 3-way classification (i.e., negative, neutral, or positive), and subjective language detection. To combine these datasets for binary sentiment classification, we normalize the different types of labels to binary labels in the set $\lbrace `positive^{\prime }, `negative^{\prime }\rbrace $ by the following rules:
{Positive, Pos, or High-Pos} to `positive';
{Negative, Neg, or High-Neg} to `negative';
Exclude samples whose label is not `positive' or `negative', such as `obj', `mixed', `neut', or `neutral'.
After label normalization, we obtain 126,766 samples. We split this dataset into 80% training (TRAIN), 10% development (DEV), and 10% test (TEST). The distribution of classes in our splits is presented in Table TABREF27. We fine-tune pre-trained BERT on the TRAIN set using the PyTorch implementation with a $2e-6$ learning rate and 15 epochs, as explained in Section SECREF2. Our best model on the DEV set obtains 80.24% accuracy and 80.24% $F_1$. We evaluate this best model on the TEST set and obtain 77.31% accuracy and 76.67% $F_1$.
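Accuracy and $F_1$ as reported throughout this section can be computed as below; the averaging scheme for $F_1$ is not stated in the text, so macro averaging here is an assumption.

    from sklearn.metrics import accuracy_score, f1_score

    def evaluate(y_true, y_pred):
        # Returns percentage accuracy and F1, mirroring the numbers reported above.
        return {
            "acc": 100.0 * accuracy_score(y_true, y_pred),
            "f1": 100.0 * f1_score(y_true, y_pred, average="macro"),
        }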
AraNet Design and Use
AraNet consists of identifier tools for age, gender, dialect, emotion, irony, and sentiment. Each tool comes with an embedded model, and the toolkit also includes modules for performing normalization and tokenization. AraNet can be used as a Python library or a command-line tool:
Python Library: Importing the AraNet module as a Python library provides access to the identifiers’ functions. Prediction takes a text or a path to a file and returns the identified class label. It can also return the probability distribution over all available class labels if needed. Figure FIGREF34 shows two examples of using the tool as a Python library.
Command-line tool: AraNet provides scripts supporting both command-line and interactive modes. Command-line mode accepts a text or a file path. Interactive mode is suited to quick line-by-line experiments as well as pipeline redirection.
AraNet is available through pip or from source on GitHub with detailed documentation.
Related Works
As we pointed out earlier, there have been several works on some of the tasks but fewer on others. By far, Arabic sentiment analysis has been the most popular task. Several works have addressed sentiment analysis for MSA BIBREF35, BIBREF0 and for dialects BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. A number of works have also been published on dialect detection, including BIBREF9, BIBREF10, BIBREF8, BIBREF11. Some works have targeted the tasks of age detection BIBREF19, BIBREF36, gender detection BIBREF19, BIBREF36, irony identification BIBREF37, BIBREF24, and emotion analysis BIBREF38, BIBREF22.
A number of tools and resources exist for Arabic natural language processing, including the Penn Arabic Treebank BIBREF39, POS taggers BIBREF40, BIBREF41, the Buckwalter Morphological Analyzer BIBREF42, and Mazajak BIBREF7 for sentiment analysis.
Conclusion
We presented AraNet, a deep learning toolkit for a host of Arabic social media processing tasks. AraNet predicts age, dialect, gender, emotion, irony, and sentiment from social media posts. It delivers state-of-the-art and competitive performance on these tasks and has the advantage of using a unified, simple framework based on the recently developed BERT model. AraNet has the potential to alleviate issues related to comparing across different Arabic social media NLP tasks, by providing one way to test new models against AraNet predictions. Our toolkit can be used to make important discoveries about the wide region of the Arab world, and can enhance our understanding of Arab online communication. AraNet will be publicly available upon acceptance. | Arap-Tweet, UBC Twitter Gender Dataset, MADAR, LAMA-DINA, IDAT@FIRE2019, 15 datasets related to sentiment analysis of Arabic, including MSA and dialects